How RFID Can Error-Proof Appliance Assembly

Today, appliance manufacturers are using RFID more frequently for error-proofing applications and quality control processes.

Whether the appliance assembly process is automatic or semiautomatic, error-proofing processes using RFID are as important as the overall assembly processes. Now, RFID systems can be used to tell a PLC how well things are moving, and if the products and parts are within spec. This information is provided as an integral part of each step in the manufacturing process.

RFID systems installed throughout the manufacturing process provide a way of tracking not only what has gone right, but also where something has gone wrong and what needs to be done to correct the problem.

Appliance manufacturers often need to assemble different product versions on the same production line. The important features of each part must be identified, tracked and communicated to the control system. This is most effectively done with an RFID system that stores build data on a small RFID tag attached to a build pallet. Before assembly begins, the RFID tag is loaded with the information that tells all downstream processes which parts need to be installed.

Each part that goes into the appliance also has an RFID tag attached to it. As the build pallet moves down the assembly conveyor to each station, the tag on the build pallet is read to determine what assembly and error-proofing steps are required. Often, this is displayed on an HMI for the operator. If the assembly requires testing, the results of those tests can be loaded into the data carrier for subsequent archiving. The operator scans the tag on each part as it is being installed. That data is then written to the tag on the build pallet. For example, in the washing machine assembly process, the washing machine body sits on the build pallet, and as it moves from station to station, the operators install different components like electronic boards, wiring harnesses, and motors. As each one of these components is installed, its RFID tag is scanned to make sure it is the correct part. If the wrong part is installed, the HMI signals the error.
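The per-station check described above can be sketched in a few lines. The part numbers, station names, and tag layout are all invented for illustration; a real system would run this logic in the PLC against the actual tag data.

```python
# Sketch of the per-station error-proofing check: compare the part
# scanned at a station against the build data on the pallet's RFID tag.
# All identifiers here are hypothetical.

def check_part(pallet_tag: dict, station: str, scanned_part_id: str) -> bool:
    expected = pallet_tag["build_plan"][station]
    if scanned_part_id != expected:
        # In practice this would drive the HMI error signal
        print(f"ERROR at {station}: expected {expected}, scanned {scanned_part_id}")
        return False
    # Record the successful install back to the pallet tag
    pallet_tag["installed"].append(scanned_part_id)
    return True

pallet_tag = {
    "build_plan": {"station_1": "PCB-A102", "station_2": "HARNESS-77"},
    "installed": [],
}

check_part(pallet_tag, "station_1", "PCB-A102")    # correct part
check_part(pallet_tag, "station_2", "HARNESS-99")  # wrong part -> HMI error
```

The same record of installed parts can later be read back in the rework area to see exactly which steps were completed.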

RFID technology can also be used to reduce errors in the rework process. RFID tags, located on either the assembly or the pallet, store information on what has been done to the appliance and what needs to be done. When an unacceptable subassembly reaches the rework area, the RFID tag provides details for the operator on what needs to be corrected. At the same time, the tag can signal a controller to configure sensors and tools, such as torque wrenches, to perform the corrective operations.

These are just a few examples of how appliance manufacturers are using RFID for error proofing.

For more information, visit https://www.balluff.com/local/us/products/product-overview/rfid/.

Machine Vision: 5 Simple Steps to Choose the Right Camera

The machine vision and industrial camera market offers thousands of models with different resolutions, sizes, speeds, colors, interfaces, prices, etc. So, how do you choose? Let's go through 5 simple steps that will ensure easy selection of the right camera for your application.

1.  Defined task: color or monochrome camera  

2.  Amount of information: minimum number of pixels per object detail 

3.  Sensor resolution: formula for calculating the image sensor resolution 

4.  Shutter technology: moving or static object 

5.  Interfaces and camera selector: let's pick the right model 

STEP 1 – Defined task  

It is always necessary to start with the size of the scanned object (X, Y), or you can determine the smallest possible value (d) that you want to distinguish with the camera.

For easier explanation, let's work through a measurement task. However, the same basic approach can be used for any other application.

In the task, the distance (D) between the centers of both holes is determined with the measurement accuracy (d). Using these values, we then determine the parameter for selecting the right image sensor and camera.

Example:
Distance (D) between 2 points with measuring accuracy (d) of 0.05 mm. Object size X = 48 mm (monochrome sensor, because color is not relevant here)

Note: Monochrome or color?
Color sensors use a Bayer color filter, which allows only one basic color to reach each pixel. The missing colors are determined using interpolation of the neighboring pixels. Monochrome sensors are twice as light sensitive as color sensors and lead to a sharper image by acquiring more details within the same number of pixels. For this reason, monochrome sensors are recommended if no color information is needed.
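A minimal sketch of that neighbor interpolation, assuming an RGGB mosaic; real demosaicing algorithms are considerably more sophisticated, but this shows why each pixel's missing colors must be estimated from its neighbors.

```python
import numpy as np

# Toy Bayer mosaic (RGGB pattern assumed). Each pixel holds only one
# color channel; the others must be interpolated from neighbors.
raw = np.array([
    [10, 200, 12, 210],   # R G R G
    [90,  50, 95,  55],   # G B G B
    [11, 205, 13, 215],   # R G R G
    [92,  52, 97,  57],   # G B G B
], dtype=float)

# Estimate the green value at the red pixel (row 0, col 0) from the
# green neighbors that exist inside the array.
green_neighbors = [raw[0, 1], raw[1, 0]]
green_at_red = sum(green_neighbors) / len(green_neighbors)
print(green_at_red)  # 145.0
```

A monochrome sensor skips this estimation step entirely, which is where its sharpness advantage comes from.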

STEP 2 – Amount of information

Each type of application needs a different amount of information to solve the task. This is differentiated by the minimum number of pixels per object detail. Let's again use the monochrome option.

Minimum number of pixels per object detail

  • Object detail measuring / detection: 3
  • Barcode line width: 2
  • Data Matrix code module width: 4
  • OCR character height: 16

Example:
Measuring requires 3 pixels for the necessary accuracy (object detail size d). The necessary accuracy (d), which is 0.05 mm in this example, is therefore imaged across 3 pixels.

Note:
Each characteristic or application type presupposes a minimum number of pixels. This avoids the loss of information through sampling blur.

STEP 3 – Sensor resolution

We have already defined the object size as well as the resolution accuracy. As a next step, we will define the resolution of the camera. A simple formula calculates the required image sensor resolution.

S = (N x O) / d = (min. number of pixels per object detail x object size) / object detail size

Object size (O) can be described horizontally as well as vertically. Some sensors are square, which eliminates this problem 😊

Example:
S = (3 x 48 mm) / 0.05 mm = 2880 pixels

We looked at the available image sensors, and the closest is a model with a resolution of 3092 x 2080 => a 6.4-megapixel image sensor.
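The Step 3 formula can be checked in a few lines of code, using the values from the example above:

```python
# S = (N x O) / d: minimum pixels per detail times object size,
# divided by the smallest detail to resolve.

def required_pixels(n_pixels_per_detail: float, object_size_mm: float,
                    detail_size_mm: float) -> float:
    return n_pixels_per_detail * object_size_mm / detail_size_mm

# Example from the text: 3 pixels per detail, 48 mm object, 0.05 mm accuracy
s = required_pixels(3, 48, 0.05)
print(s)  # 2880.0

# The candidate sensor (3092 x 2080) covers the horizontal requirement
assert 3092 >= s
```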

Note:
Pay attention to the format of the sensor.

For a correct calculation, it is necessary to check the resolution, not only in the horizontal but also in the vertical axis.

 

STEP 4 – Shutter technology

Global shutter versus rolling shutter.

These technologies are standard in machine vision, and you can find hundreds of cameras with each.

Rolling shutter: exposes the scene line by line. This procedure results in a time delay for each acquired line, so moving objects appear blurred in the resulting image through the generated “object time offset” (compare to the image).

Pros:

    • More light sensitive
    • Less expensive
    • Smaller pixel size provides higher resolution with the same image format.

Cons:

    • Image distortion occurs on moving objects

Global shutter: used to get distortion-free images by exposing all pixels at the same time.

Pros:

    • Great for fast processes
    • Sharp images with no blur on moving objects.

Cons:

    • More expensive
    • Larger image format

Note:
The newest rolling shutter sensors have a feature called global reset mode, which starts the exposure of all rows simultaneously; the reset of each row is also released simultaneously. However, the readout of the lines is the same as with a rolling shutter: line by line.

This means the bottom lines of the sensor are exposed to light longer! For this reason, this mode only makes sense if there is no extraneous light and the flash duration is shorter than or equal to the exposure time.

STEP 5 – Interfaces and camera selector

Final step is here:

You must consider the possible speed (bandwidth) as well as the cable length of the camera interface.

USB2
Small, handy and cost-effective, USB 2.0 industrial cameras have become integral parts in the area of medicine and microscopy. You can get a wide range of different variants, including with or without housings, as board-level or single-board, or with or without digital I/Os.

USB3/GigE Vision
Without standards, every manufacturer does their own thing, and many advantages customers have learned to love with the GigE Vision standard would be lost. Like GigE Vision, USB3 Vision also defines:

    • a transport layer, which controls the detection of a device (Device Detection)
    • the configuration (Register Access)
    • the data streaming (Streaming Data)
    • the handling of events (Event Handling)
    • an established interface to GenICam

GenICam abstracts access to the camera features for the user. The features are standardized (name and behavior) by the Standard Feature Naming Convention (SFNC). Additionally, it is possible to create specific features beyond the SFNC to differentiate from other vendors (quality of implementation). In contrast to GigE Vision, this time the mechanics (e.g., lockable cable connectors) are part of the standard, which leads to a more robust interface.

I believe these five points will help you choose the most suitable camera. Are you still unclear? Do not hesitate to contact us or contact me directly: I will be happy to consult on your project, needs or any questions.

 

 

Be Driven by Data and Decrease Downtime

Being “driven by data” is simply the act of making decisions based on real data instead of guessing or basing them on theoretical outcomes. Why one should do that, especially in manufacturing operations, is obvious. How it is done is not always so clear.

Here is how you can use a sensor, indicator light, and RFID to provide feedback that drives overall quality and efficiency.

 

Machine Condition Monitoring

You’ve heard the saying, “if it ain’t broke, don’t fix it.” However, broken machines cause downtime. What if there were a way to know when a machine is about to fail, so you could fix it before it caused downtime? You can do that now!

The two main types of data measured in manufacturing applications are temperature and vibration. A sudden or gradual increase in either of these is typically an indicator that something is going wrong. Just having access to that data won’t stop the machine from failing, though. Combined with an indicator light and RFID, the sensor can provide real-time feedback to the operator, and the event can be documented on the RFID tag. The machine can then be adjusted or repaired during a planned maintenance period.
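The feedback loop described above can be sketched as follows. The thresholds and data format are illustrative assumptions, not recommendations for any particular machine; real limits come from the equipment's baseline behavior.

```python
# Compare sensor readings to warning thresholds, drive an indicator
# state, and log flagged events as they might be written to an RFID tag.
# Threshold values are purely illustrative.

TEMP_WARN_C = 70.0
VIB_WARN_MM_S = 4.5  # e.g. RMS vibration velocity

def evaluate(temp_c: float, vib_mm_s: float) -> str:
    if temp_c >= TEMP_WARN_C or vib_mm_s >= VIB_WARN_MM_S:
        return "WARNING"  # indicator light turns amber/red
    return "OK"           # indicator light stays green

event_log = []  # stands in for records written to the RFID tag

for temp, vib in [(55.0, 2.1), (72.5, 2.3), (58.0, 6.0)]:
    if evaluate(temp, vib) != "OK":
        event_log.append({"temp_c": temp, "vib_mm_s": vib})

print(len(event_log))  # 2 readings flagged for planned maintenance
```

The logged events give maintenance a documented history of when the machine started drifting out of its normal range.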

Managing Quality – A machine on its way to failure can produce parts that don’t meet quality standards. Fixing the problem before it affects production prevents scrap and rework and ensures the customer is getting a product with the quality they expect.

Managing Efficiency – Unplanned downtime costs thousands of dollars per minute in some industries. The time and resources required to deal with a failed machine far exceed the cost of the entire system designed to produce an early warning, provide indication, and document the event.

Quality and efficiency are the difference makers in manufacturing. That is, whoever makes the highest quality products most efficiently usually has the most profitable and sustainable business. Again, why is obvious, but how is the challenge. Hopefully, you can use the above data to make higher quality products more efficiently.

 

More to come! Here are the data-driven topics I will cover in my next blogs:

  • Part inspection and data collection for work in process
  • Using data to manage molds, dies, and machine tools

Buying a Machine Vision System? Focus on Capabilities, Not Cost

Gone are the days when an industrial camera was used only to take a picture and send it to a control PC. Machine vision systems are a much more sophisticated solution. Projects are increasingly demanding image processing, speed, size, complexity, defect recognition and so much more.

This, of course, adds to the new approach in the field of software, where deep learning and artificial intelligence play a bigger and bigger role. There is often a lot of effort behind improved image processing; however, few people have realized that part of it can already be handled by that little “dummy” industrial camera.

In the next few paragraphs, I will briefly explain how to achieve this in your application. Thanks to that, you will be able to get some of these benefits:

  • Reduce the amount of data
  • Relieve the entire system
  • Generate the maximum performance potential
  • Simplify the hardware structure
  • Reduce the installation work required
  • Reduce your hardware costs
  • Reduce your software costs
  • Reduce your development expenses

How to achieve it?  

Try to use more intelligent industrial cameras, which have built-in internal memory, sometimes called a buffer. Together with an FPGA (field-programmable gate array), they can do a lot of work that your image-processing software will appreciate. These functions are often called pre-processing features.

What if you have a project where the camera must send images much faster than the USB or Ethernet interface allows?

For simple cameras, this would mean using a much faster interface, which of course would make the complete solution more expensive. Instead, you can use the Smart Framer Recall function in standard USB and GigE cameras, which generates small preview images with reduced resolution (thumbnails) at a greatly accelerated frame rate; these are transferred to the host PC with IDs. At the same time, the corresponding image in full resolution is archived in the camera's image memory. If the image is required in full resolution, the application sends a request and the image is transferred in the same data stream as the preview images.
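The preview-plus-recall idea can be modeled in a few lines. The class and method names below are invented for illustration and do not correspond to any vendor's API; they only mimic the buffer-and-request behavior described above.

```python
# Toy model: the "camera" streams small thumbnails with IDs while
# archiving full-resolution frames in its internal buffer; the host
# requests a full frame by ID only when it actually needs one.

class CameraBuffer:
    def __init__(self):
        self._full_frames = {}  # frame_id -> full-resolution image data
        self._next_id = 0

    def capture(self, full_image: list) -> tuple:
        frame_id = self._next_id
        self._next_id += 1
        self._full_frames[frame_id] = full_image
        thumbnail = full_image[::4]  # crude downsample for the preview
        return frame_id, thumbnail

    def recall(self, frame_id: int) -> list:
        """Host asks for the archived full-resolution frame by ID."""
        return self._full_frames[frame_id]

cam = CameraBuffer()
fid, thumb = cam.capture(list(range(16)))
print(len(thumb))            # 4: only the small preview travels continuously
print(len(cam.recall(fid)))  # 16: full frame fetched only on request
```

The bandwidth saving comes from the fact that most frames are never recalled at full resolution.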

The function is explained in this video.

Is there a simpler option than a line scan camera? Yes!

Many people struggle to use line scan cameras, and it is understandable. They are not easy to configure, are hard to install, are difficult to set up properly, and few people can modify them. Instead, you can use an area scan camera in line scan mode. The biggest benefit is the standard interface: USB3 Vision and GigE Vision instead of CoaXPress and Camera Link. This enables inspection of round/rotating bodies or long/endless materials at high speed (like line scan cameras). Block scan mode acquires an Area of Interest (AOI) block that consists of several lines. The user defines the number of AOI blocks used to create one image. This minimizes the overhead you would otherwise have when transferring AOI blocks as single images over the USB3 Vision and GigE Vision protocols.
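Assembling one image from several AOI blocks might look like this. The block size and image width are arbitrary assumptions, and `grab_block` is a placeholder for the camera acquisition call.

```python
import numpy as np

# Sketch of block scan mode: the area scan sensor repeatedly grabs a
# small AOI block of N lines, and the blocks are stacked into one tall
# image of the moving material, as a line scan camera would produce.

LINES_PER_BLOCK = 8
IMAGE_WIDTH = 640
BLOCKS_PER_IMAGE = 4

def grab_block(block_index: int) -> np.ndarray:
    # Placeholder for one AOI acquisition from the camera
    return np.full((LINES_PER_BLOCK, IMAGE_WIDTH), block_index, dtype=np.uint8)

blocks = [grab_block(i) for i in range(BLOCKS_PER_IMAGE)]
image = np.vstack(blocks)
print(image.shape)  # (32, 640): one assembled image from four blocks
```

Transferring four 8-line blocks inside one image carries far less protocol overhead than four separate single-image transfers.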

The function is explained in this video.

Polarization has never been easier

Sony came up with a completely new approach: an on-sensor polarized filter. Until this approach was developed, everyone simply used a polarization filter in front of the lens and combined it with polarized lighting. With the on-sensor filter, a polarizer array sits above the pixel array, and each 2x2 pixel block contains 0°, 45°, 90°, and 135° polarization.
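Extracting the four polarization channels from a raw frame is a matter of striding over that 2x2 pattern. The angle-to-offset mapping below is an assumption for illustration, since the exact arrangement depends on the sensor.

```python
import numpy as np

# Pull the four polarization channels out of a raw frame by striding
# over the repeating 2x2 polarizer pattern. The assignment of angles to
# the four offsets is assumed; check the sensor datasheet in practice.

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)  # stand-in raw frame

channels = {
    "p90":  raw[0::2, 0::2],
    "p45":  raw[0::2, 1::2],
    "p135": raw[1::2, 0::2],
    "p0":   raw[1::2, 1::2],
}

# Each channel is a quarter-resolution image of one polarization angle
print(channels["p0"].shape)  # (2, 2)
```

From these four channel images, quantities like the degree and angle of linear polarization can then be computed per pixel.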

 

What is the best part of it? It doesn’t matter if you need a color or monochrome version. There are at least 5 applications where you want to use it:

  • Remove reflection -> multi-plane surfaces or bruise/defect detection
  • Visual inspection -> detect fine scratches or dust
  • Contrast improvement -> recognize similar objects or colors
  • 3D/stress recognition -> quality analysis
  • People/vehicle detection -> using your phone while driving

The liquid lens is very popular in smart sensor technology. When and why would you want to use it with an industrial camera?

 

A liquid lens is a single optical element, like a traditional lens made from glass. However, it also includes a cable to control the focal length, and it contains a sealed cell with water and oil inside. The technology uses an electrowetting process to achieve superior autofocus capabilities.

The benefits over traditional lenses are obvious. A liquid lens has no moving mechanical parts, which makes it highly resistant to shocks and vibrations. It is a perfect fit for applications where you need to observe or inspect objects of different sizes and/or working distances and need to react very quickly. One liquid lens can do the work of multiple imaging systems.

To connect the liquid lens, you need an RS232 port on the camera plus DC power from 5 to 24 volts. An intelligent industrial camera can connect to the lens directly, and the lens uses the camera’s power supply.

 

Reduce Packaging Downtime with Machine Vision

Packaging encompasses many different industries and typically has several stages in its process. Each industry uses packaging to accomplish specific tasks, well beyond just acting as a container for a product. The pharmaceutical industry, for example, typically uses its packaging as a means of dispensing as well as containing. The food and beverage industry uses packaging as a means of preventing contamination and creating differentiation from similar products. Consumer goods typically require unique product containment methods and have a need for “eye-catching” differentiation.

The packaging process typically has several stages. For example, there is primary packaging, where the product is first placed in a package, whether that is form-fill-seal bagging or bottle filling and capping. Then there is secondary packaging that the consumer may see on the shelf, like cereal boxes or display containers, and finally tertiary or transport packaging, where the primary or secondary packaging is put into shipping form. Each of these stages requires verification or inspection to ensure the process is running properly and products are properly packaged.


Discrete vs. Vision-Based Error Proofing

With the use of machine vision technology, greater flexibility and more reliable operation of the packaging process can be achieved. Typically, in the past and still today, discrete sensors have been used to look for errors and manage product change-over detection. But these simple discrete sensing solutions bring limitations in flexibility, time-consuming fixture change-overs and more potential for errors, costing thousands of dollars in lost product and production time. This can translate to more expensive and less competitively priced products on store shelves.

There are two ways implementing machine vision can improve scheduled line time. The first is reducing planned downtime by reducing product changeover and fixturing change time. The other is decreasing unplanned downtime by catching errors right away and dynamically rejecting them, or by bringing attention to line issues requiring correction and preventing waste. The greatest benefit vision can have for production line time is in reducing planned downtime for things like product changeovers. This is a repeatable benefit that can dramatically reduce operating costs and increase planned runtime. The opportunities for vision to reduce unplanned downtime include eliminating line jams due to incorrectly fed packaging materials, misaligned packages or undetected open flaps on cartons. Others include improperly capped bottles causing jams or spills, and improper adjustments or low ink causing illegible labeling and barcodes.

Cost and reliability of any technology that improves the packaging process should always be proportional to the benefit it provides. Vision technologies today, like smart cameras, offer the advantages of lower costs and simpler operation, especially compared to the older, more expensive and typically purpose-built vision system counterparts. These new vision technologies can also replace entire sensor arrays, and, in many cases, most of the fixturing at or even below the same costs, while providing significantly greater flexibility. They can greatly reduce or eliminate manual labor costs for inspection and enable automated changeovers. This reduces planned and unplanned downtime, providing longer actual runtime production with less waste during scheduled operation for greater product throughput.

Solve Today’s Packaging Challenges

Using machine vision in any stage of the packaging process can provide the flexibility to dramatically reduce planned downtime through a repeatable decrease in product changeover time. It also provides reliable and flexible error proofing that can significantly reduce unplanned downtime and waste, for example through in-line detection and rejection that eliminates jams and prevents product loss. This technology can also help reduce or eliminate product or shipment rejection by customers at delivery. In today’s competitive market, with constant pressure to reduce operating costs, increase quality and minimize waste, look at your process today and see if machine vision can make that difference for your packaging process.

RFID Minimizes Errors, Downtime During Format Change

Today’s consumer packaged goods (CPG) market is driving the need for greater agility and flexibility in packaging machinery. Shorter, more customized runs create more frequent machine changeover. Consequently, reducing planned and unplanned downtime at changeover is one of the key challenges CPG companies are working to improve.

In an earlier post, I discussed operator-guided changeover for reducing time and errors associated with parts that must be repositioned during format change.

In this post, I will discuss how machine builders and end users are realizing the benefits of automated identification and validation of mechanical change parts.

In certain machines, there are parts that must be changed as part of a format change procedure. For example, cartoning machines could have 20-30 change parts that must be removed and replaced during this procedure.

This can be a time consuming and error-prone process. Operators can forget to change a part or install the wrong part, which causes downtime during the startup process while the error is located and corrected. In the worst scenarios, machines can crash if incorrect parts are left in the machine causing machine damage and significant additional downtime.

To prevent these mistakes, CPG companies have embraced RFID as a way to identify change parts and validate that the correct parts have been installed in the machine prior to startup. By doing so, these companies have reduced downtime that can be caused by mistakes. It has also helped them train new operators on changeover procedures as the risk of making a mistake is significantly reduced.

Selecting the correct system

When looking to add RFID for change part validation, the number of change parts that need to be identified and validated is a key consideration. RFID operating on the 13.56 MHz (HF) frequency has proven to be very reliable in these applications. The read range between a read head and tag is virtually guaranteed in a proper installation. However, a read head can read only a single tag, so an installation could need a high number of read heads on a machine with a lot of change parts.


It is also possible to use the 900 MHz (UHF) frequency for change part ID. This allows a single head to read multiple tags at once. This can be more challenging to implement, as UHF is more susceptible to environmental factors when determining read range and guaranteeing consistent readability. With testing and planning, UHF has been successfully and reliably implemented on packaging machines.


Available mounting space and environmental conditions should also be taken into consideration when selecting the correct devices. RFID readers and tags with enhanced IP ratings are available for washdown and harsh environmental conditions. Additionally, there is a wide range of RFID read head and tag form factors and sizes to accommodate different sized machines and change parts.

 

 

Manufacturers Track Goods, Reduce Errors, Decrease Workload with RFID

More and more, retailers are starting to require that manufacturers place RFID tags on their products before they leave the production facility and are shipped to retail locations. Everything from high-end electronics down to socks and underwear is being tagged.

These tags are normally supplied by the retailer or through a contracted third party. Typically disposable UHF paper tags, they are printed with only a TID number and a unique EPC that may or may not correspond to the UPC and barcode used in the past. Most cases I have seen require that the UPC and a barcode be printed on these RFID tags so the information is available to the human eye and to a barcode scanner when used.

While this is being asked for by the retailers, manufacturers can use these tags to their own advantage to track what products are going out to their shipping departments and in what quantities. This eliminates human error in the tracking process, something that has been a problem in the past, while also reducing workload, as boxes of finished goods no longer must be opened, counted and inspected for accuracy.

A well-designed RFID portal for these items to pass through can scan for quantities and variances in types of items in boxes as they pass through the portal. Boxes that do not pass the scan criteria are then directed off to another area for rework and reevaluation. Using human inspection for just the boxes that do not pass the RFID scan greatly reduces the labor effort and expedites the shipping process.
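The portal's accept/reject decision can be sketched as a simple count comparison. The EPC scheme and item codes below are invented for illustration; a real system would map EPCs to products through the retailer's tag data standard.

```python
from collections import Counter

# Compare the EPCs read as a box passes through the portal against the
# expected contents. Assumes (for illustration only) that the first 8
# characters of each EPC encode the garment type.

def inspect_box(expected: dict, epcs_read: list, sku_of) -> bool:
    counted = Counter(sku_of(epc) for epc in epcs_read)
    return counted == Counter(expected)

sku_of = lambda epc: epc[:8]  # hypothetical EPC-to-SKU mapping

expected = {"SHIRT-MD": 2, "SHIRT-LG": 1}
reads = ["SHIRT-MD001", "SHIRT-MD002", "SHIRT-LG001"]
print(inspect_box(expected, reads, sku_of))  # True -> forward to shipping

short_box = ["SHIRT-MD001", "SHIRT-LG001"]
print(inspect_box(expected, short_box, sku_of))  # False -> divert for rework
```

Only boxes that fail this check need human inspection, which is exactly the labor saving described above.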

I recently assisted a manufacturer in the garment industry who had to tag garments for a major retailer with RFID tags that had the UPC and a barcode printed on them. The tags were supplied through the retailer, and the EPCs on the tags were quite different than the UPC numbers printed on them.

The manufacturer wanted to know how many garments of each type were in each box. Testing showed that this could be done by creating a checkpoint on the conveyor system and placing UHF RFID antennas in appropriate locations to ensure that all the garments in the box were detected and identified.

In this case, the manufacturer wanted a simple stand-alone system that would display a count of the different types of garments. An operator reviewed the results on a display and decided, based on the results, whether to accept the box and let the conveyor forward it to shipping, or reject it and divert it to another conveyor line for inspection and adjustment.

While this system proved to be relatively simple and inexpensive, it satisfied the desires of the manufacturer. It is, however, possible to connect an RFID inspection station to a manufacturing information system that would know what to expect in each box and could automatically accept or reject boxes based on the results of the scans without human intervention and/or human error.

Beyond the Human Eye

Have you ever had to squint, strain, adjust your glasses, or just ask someone with better vision to read something for you? Now imagine having to adjust your eyesight 10 times a second. This is the power of machine vision. It can adjust, illuminate, filter, focus, read, and relay information that our eyes struggle with. Although the technology is 30 years old, machine vision is still in its early stages of adoption within the industrial space. In the past, machine vision was a ‘nice to have’ but not really a ‘need to have’ technology because of cost and because the technology was not yet refined. As traceability, human error proofing, and advanced applications grow more common, machine vision has found its rhythm within factory automation. It has evolved into a robust technology eager to solve advanced applications.

Take, for example, the accurate reading, validation, and logging of a date located on the concave bottom of an aluminum can. Sometimes nearly impossible to see with the human eye without some straining, the date must nonetheless be verified for the product to be sellable. What would be your solution to ensuring the date stamp is there? Having the employee with the best eyes validate each can off the line? Using more ink and taking longer to print a larger code? Maybe adding a step by putting a black-on-white contrasting sticker on the bottom that could fall off? All of these would work, but at what cost? A better solution is using a device easily capable of reading several cans a second, even on a shiny, poorly angled surface, saving a ton of unnecessary time and steps.

Machine vision is not magic; it is science. By combining high-end image sensors, advanced algorithms, and trained vision specialists, an application like our aluminum can example can be solved in minutes and run forever, all while saving you time and money. In Figure 1 you can see the can’s code is lightly printed and washed out by lighting hotspots from the angle of the can. In Figure 2 we have filtered out some of the glare, better defined the date through software, and validated that the date is printed and correct.

Take a moment to imagine all the possibilities machine vision can open for your production process and the pain points it can alleviate. The technology is ready, are you?

Figure 1
Figure 2

Palletized Automation with Inductive Coupling

RFID is an excellent way to track material on a pallet through a warehouse. A data tag is placed on the pallet and is read by a read/write head when it comes in range. Commonly used to identify when the pallet goes through the different stages of its scheduled process, RFID provides an easy way to know where material is throughout a process and learn how long it takes for product to go through each stage. But what if you need I/O on the pallet itself or an interchangeable end-of-arm tool?

Inductive Coupling


Inductive coupling delivers reliable transmission of data without contact. It is the same technology used to charge a cell phone wirelessly. There is a base and a remote, and when they are aligned within a certain distance, power and signal can be transferred between them as if it was a standard wire connection.


When a robot is changing end-of-arm tooling, inductive couplers can be used to power the end of arm tool without the worry of the maintenance that comes with a physical connection wearing out over time.

For another example of how inductive couplers can be used in a process like this, let’s say your process requires a robot to place parts on a metal product and weld them together. You want I/O on the pallet to tell the robot that the parts are in the right place before it welds them to the product. This requires the sensors to be powered on the pallet while also communicating back to the robot. Inductive couplers are a great solution because by communicating over an air gap, they do not need to be connected and disconnected when the pallet arrives or leaves the station. When the pallet comes into the station, the base and remote align, and all the I/O on the pallet is powered and can communicate to the robot so it can perform the task.

Additionally, inductive couplers can act as a unique identifier, much like an RFID system. For example, when a pallet filled with product A comes within range of the robot, the base and remote align, telling the robot to perform action A. Conversely, when a pallet loaded with product B comes into range, the robot communicates with the pallet and knows to perform a different task. This allows multiple products to go down the same line without as much changeover, thereby reducing errors and downtime.

What Machine Vision Tool is Right for Your Application?

Machine vision is an inherent terminology in factory automation but selecting the most efficient and cost-effective vision product for your project or application can be tricky.

We can look at machine vision from many angles; for example, market segment, application, and image processing each deliver a different perspective. In this article I will focus on the “sensing element” itself, which scans your application.

The sensing element is the product that observes the application, analyzes it, and forwards an evaluation. A PC is part of every machine vision setup; it can be embedded with the imager or separate, as in a controller. We could take many different approaches, but let's classify the options by the complexity of the application. The basic machine vision hardware categories are:

  1. smart sensors
  2. smart cameras
  3. vision systems

Each of these products is used in a different way and fits different applications, but what do they all have in common? They all need the same core components: an imager, lens, lighting, SW, a processor and output HW. All major manufacturing companies, regardless of their focus or market segment, use these products, but for what purpose and under what circumstances?

Smart Sensors

Smart sensors are dedicated to basic machine vision applications. There are hundreds of different types on the market, and they must quickly deliver standard machine vision performance. Don't get me wrong, this is not necessarily a negative. These sensors are meant for simple applications: you do not want to wait seconds to detect a QR code; you need a response time in milliseconds. Smart sensors typically include basic functions like:

  • data matrix, barcode and 2D code reading
  • object presence detection
  • shape, color, thickness and distance checks

They are typically used in a single-purpose process, and you cannot combine all of their features at once.

Smart Cameras

Smart cameras are used in more complex projects. They provide all the functions of smart sensors, plus more complex functions like:

  • find and check object
  • blob detection
  • edge detection
  • metrology
  • robot navigation
  • sorting
  • pattern recognition
  • complex optical character recognition

Because of this added capability, smart cameras often offer higher resolution, although that is not a requirement. They can combine multiple programs and run several functions in parallel. Image processing is more sophisticated, but processing speed can become a limit because of the embedded PC.

Vision Systems

Typically, machine vision systems are used in applications where a smart camera is not enough.

A vision system consists of industrial cameras, a controller, and separate lighting and lens systems, so it is important to understand the different types of lighting and lenses. Industrial cameras provide resolutions from VGA up to 30 megapixels, and they connect easily to the controller.

Vision systems are highly flexible. They provide all the functions of smart sensors and smart cameras, bringing complexity as well as flexibility. With a vision system, you are not limited by resolution or speed: thanks to the dedicated controller, you have processing power well beyond an embedded PC, accelerating image processing many times over.

And finally, the most important information: what about pricing?

You can be sure that a smart sensor is the least expensive solution, with basic pricing in the range of $500 to $1,500. Smart cameras cost $2,000 to $5,000, while a vision system starts closer to $6,000. It may look like an easy calculation, but you need to take the complexity of your project into consideration to determine which is best for you.
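The selection rule implied above can be sketched as "pick the cheapest tier whose capability covers the application." The two boolean requirements and the `recommend` function are an illustrative simplification, not a formal sizing method; the price bands are the ones quoted in this article.

```python
# Sketch: choose the cheapest vision-product tier that covers the need.
# Price bands are the article's ballpark figures.

PRICE_BAND = {
    "smart sensor": "$500 to $1,500",
    "smart camera": "$2,000 to $5,000",
    "vision system": "from about $6,000",
}

def recommend(needs_multiple_functions, needs_multiple_cameras):
    # Escalate only when a cheaper tier cannot do the job.
    if needs_multiple_cameras:
        tier = "vision system"   # multi-camera, open SW, high resolution
    elif needs_multiple_functions:
        tier = "smart camera"    # combined programs, parallel functions
    else:
        tier = "smart sensor"    # single-purpose, millisecond response
    return tier, PRICE_BAND[tier]

print(recommend(False, False))  # ('smart sensor', '$500 to $1,500')
print(recommend(True, True))    # ('vision system', 'from about $6,000')
```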

Pros, cons and cost at a glance:

Smart sensor ($)
  Pros:
    • Easy integration
    • Simple configuration
    • Included lighting and lenses
  Cons:
    • Limited functions
    • Closed SW
    • Limited programs/memory

Smart camera ($$)
  Pros:
    • Combines multiple programs
    • More available functions
  Cons:
    • Limited resolution
    • Slower speed due to embedded PC

Vision system ($$$)
  Pros:
    • Connects more cameras (up to 8)
    • Open SW
    • Different resolution options
  Cons:
    • Requires a skilled machine vision specialist
    • Requires knowledge of lighting and lenses
    • Increased integration time
