The Benefits of Mobile Handheld and Stationary Code Readers

Ensuring reliable traceability of products and assembly is critical in industries such as automotive, pharmaceuticals, and electronics. Code readers are essential in achieving this, with stationary and mobile handheld readers being the two most popular options. In what situations is it more appropriate to use one type over the other?

Stationary optical ID sensors

Stationary optical ID sensors offer simple and reliable code reading, making them an excellent option for ensuring traceability. They can read various code types, including 1D barcodes, 2D codes, and Data Matrix codes (DMC), and are permanently installed in the plant. With their standardized automation and IT interfaces, the decoded information can be passed on to the PLC or to IT systems, and some variants also offer an IO-Link interface for very simple integration. Modern devices additionally provide condition monitoring information, such as vibration, temperature, code quality, and operating time, making them true multi-talents within optical identification.

Portable code readers

Portable code readers provide maximum freedom of movement and can quickly and reliably read common 1D, 2D, and stacked barcodes on documents and directly on items. They are used in a variety of applications, including supply process control, production control, component tracking, quality control, and inventory. Wireless handheld code readers with Bluetooth technology allow users to move around freely within a range of up to 100 meters around the base station. They also provide reliable read confirmation via an acoustic signal, LEDs, and a light spot projected onto the read code. Furthermore, the ergonomic design and highly visible laser marking frames ensure fatigue-free work.

Both stationary and mobile handheld barcode readers play an essential role in ensuring reliable traceability of products and assembly in various industries. Choosing the right type of barcode reader for your application is crucial to ensure optimal performance and efficiency. While stationary code readers are ideal for constant scanning in production lines, mobile handheld readers offer flexibility and reliability for various applications. Regardless of your choice, both devices offer simple operation and standardized automation and IT interfaces, making them essential tools for businesses that rely on efficient code reading.

Using Vision Sensors to Conquer 1D and 2D Barcode Reading Applications

As many industries trend toward two-dimensional barcodes, the growing popularity, acceptance, and positive track record of 2D code readers offer a better way to track data. Vision-based code readers bring many benefits, such as higher read-rate performance, multi-directional code detection, simultaneous reading of multiple codes, and greater information storage.

Barcodes are traditionally read with red-line laser scanners or with cameras running decoding and positioning software. There are three main types of codes: 1D barcodes, 2D Data Matrix codes, and QR codes, each with different attributes and ways of being read.

1D barcodes are the traditional ladder-line barcodes typically seen in grocery stores and on merchandise and packaging. 2D Data Matrix codes are smaller than 1D barcodes but can hold considerably more information, with built-in redundancy in case of scratches or defacement. QR codes, which were initially developed for the automotive industry to track parts during vehicle manufacturing, can hold even more information than Data Matrix codes and are now widely used in business and advertising.

There are various types of vision sensors for reading different types of codes. QR codes are often used in business and advertising, while micro QR codes are typically seen in industrial applications such as camshafts, crankshafts, pistons, and circuit boards. Deciphering micro QR codes typically requires an industrial-grade sensor.
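As a simple software-side illustration of image-based code reading (independent of any particular reader hardware), the sketch below decodes 1D barcodes and QR codes with the open-source pyzbar library and Data Matrix codes with pylibdmtx; the libraries, Pillow, and the file name are assumptions.

    # Hedged sketch: decode 1D, QR, and Data Matrix codes from an image file.
    # Assumes the open-source packages pyzbar, pylibdmtx, and Pillow are installed.
    from PIL import Image
    from pyzbar.pyzbar import decode as decode_1d_qr        # 1D barcodes and QR codes
    from pylibdmtx.pylibdmtx import decode as decode_dmc    # Data Matrix codes

    image = Image.open("label.png")  # hypothetical file name

    for result in decode_1d_qr(image):
        print(result.type, result.data.decode("utf-8"))

    for result in decode_dmc(image):
        print("DATAMATRIX", result.data.decode("utf-8"))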

Tracking products and collecting information about their whereabouts is a long-standing challenge in manufacturing and industrial automation. One-dimensional barcodes have been the traditional solution, and 1D code reading continues to improve. At the same time, new hardware, code readers, and symbologies have emerged, and image-based scanners are becoming a popular alternative for data capture.

In summary, vision sensors are becoming increasingly important in 1D and 2D barcode reading applications due to their higher read-rate performance, multi-directional code detection, simultaneous reading of multiple codes, and greater information storage. As the need to track products and collect information about their whereabouts continues to grow, industries will benefit from using vision sensors to improve efficiency and accuracy.

The 5 Most Common Types of Fixed Industrial Robots

The International Federation of Robotics (IFR) defines five types of fixed industrial robots: Cartesian/Gantry, SCARA, Articulated, Parallel/Delta and Cylindrical (mobile robots are not included in the “fixed” robot category). These types are generally classified by their mechanical structure, which dictates the ways they can move.

Based on the current market situation and trends, we have modified this list by removing Cylindrical robots and adding Power & Force Limited Collaborative robots. Cylindrical robots have a small, declining share of the market and some industry analysts predict that they will be completely replaced by SCARA robots, which can cover similar applications at higher speed and performance. On the other hand, use of collaborative robots has grown rapidly since their first commercial sale by Universal Robots in 2008. This is why collaborative robots are on our list and cylindrical/spherical robots are not.

Therefore, our list of the top five industrial robot types includes:

    • Articulated
    • Cartesian/Gantry
    • Parallel/Delta
    • SCARA
    • Power & Force Limited Collaborative robots

These five common types of robots have emerged to address different applications, though there is now some overlap in the applications they serve, and the range of industries where they are used is now very wide. The IFR’s 2021 report ranks electronics/electrical, automotive, metal & machinery, plastic and chemical products and food as the industries most commonly using fixed industrial robots. The top applications identified in the report are material/parts handling and machine loading/unloading, welding, assembling, cleanrooms, dispensing/painting and processing/machining.

Articulated robots

Articulated robots most closely resemble a human arm and have multiple rotary joints–the most common versions have six axes. These can be large, powerful robots, capable of moving heavy loads precisely at moderate speeds. Smaller versions are available for precise movement of lighter loads. These robots have the largest market share (≈60%) and are growing between 5–10% per year.

Articulated robots are used across many industries and applications. Automotive has the biggest user base, but they are also used in other industries such as packaging, metalworking, plastics and electronics. Applications include material & parts handling (including machine loading & unloading, picking & placing and palletizing), assembling (ranging from small to large parts), welding, painting, and processing (machining, grinding, polishing).

SCARA robots

A SCARA robot is a “Selective Compliance Assembly Robot Arm,” also known as a “Selective Compliance Articulated Robot Arm.” They are compliant in the X-Y direction but rigid in the Z direction. These robots are fairly common, with around 15% market share and a 5-10% per year growth rate.

SCARA robots are most often applied in the Life Sciences, Semiconductor and Electronics industries. They are used in applications requiring high speed and high accuracy such as assembling, handling or picking & placing of lightweight parts, but also in 3D printing and dispensing.

Cartesian/Gantry robots

Cartesian robots, also known as gantry or linear robots, move along multiple linear axes. Since these axes are very rigid, they can precisely move heavy payloads, though this also means they require a lot of space. They have about 15% market share and a 5-10% per year growth rate.

Cartesian robots are often used in handling, loading/unloading, sorting & storing and picking & placing applications, but also in welding, assembling and machining. Industries using these robots include automotive, packaging, food & beverage, aerospace, heavy engineering and semiconductor.

Delta/Parallel robots

Delta robots (also known as parallel robots) are lightweight, high-speed robots, usually for fast handling of small and lightweight products or parts. They have a unique configuration with three or four lightweight arms arranged in parallelograms. These robots have 5% market share and a 3–5% growth rate.

They are often used in food or small part handling and/or packaging. Typical applications are assembling, picking & placing and packaging. Industries include food & beverage, cosmetics, packaging, electronics/ semiconductor, consumer goods, pharmaceutical and medical.

Power & Force Limiting Collaborative robots

We add the term “Power & Force Limiting” to our collaborative robot category because the standards actually define four collaborative robot application modes, and we want to focus on this, the most well-known mode. Power & Force Limiting robots include models from Universal Robots, the FANUC CR green robots and the YuMi from ABB. Collaborative robots have become popular due to their ease of use, flexibility, “built-in” safety and ability to be used in close proximity to humans. They are most often articulated robots with special features that limit the power and force exerted by the axes to allow close, safe operation near humans or other machines. Larger, faster and stronger robots can also be used in collaborative applications with the addition of safety sensors and special programming.

Power & Force Limiting Collaborative robots have about 5% market share and sales are growing rapidly at 20%+ per year. They are a big success with small and mid-size enterprises, but also with more traditional robot users in a very broad range of industries including automotive and electronics. Typical applications include machine loading/unloading, assembling, handling, dispensing, picking & placing, palletizing, and welding.

Summary

The robot market is one of the most rapidly growing segments of the industrial automation industry. The need for more automation and robots is driven by factors such as supply chain issues, changing workforce, cost pressures, digitalization and mass customization (highly flexible manufacturing). A broad range of robot types, capabilities and price points have emerged to address these factors and satisfy the needs of applications and industries ranging from automotive to food & beverage to life sciences.

Note: Market share and growth rate estimates in this blog are based on public data published by the International Federation of Robotics, Loup Ventures, NIST and Interact Analysis.

Picking Solutions: How Complex Must Your System Be?

Bin-picking, random picking, pick and place, pick and drop, palletization, depalletization—these are all part of the same project. You want a fully automated process that grabs the desired sample from one position and moves it somewhere else. Before you choose the right solution for your project, you should think about how the objects are arranged. There are three picking solutions: structured, semi-structured, and random.

As you can imagine, the basic differences between these solutions are in their complexity and their approach. The distribution and arrangement of the samples to be picked will set the requirements for a solution. Let’s have a look at the options:

Structured picking

From a technical point of view, this is the easiest type of picking application. Samples are well organized and very often in a single layer. Arranging the pieces in a highly organized way requires high-level preparation of the samples and more storage space to hold the pieces individually. Because the samples are in a single layer or are layered at a defined height, a traditional 2-dimensional camera is more than sufficient. There are even cases where the vision system isn’t necessary at all and can be replaced by a smart sensor or another type of sensor. Typical robot systems use SCARA or Delta models, which ensure maximum speed and a short cycle time.

Semi-structured picking

Greater flexibility in robotization is necessary since semi-structured bin picking requires some predictability in sample placement. A six-axis robot is used in most cases, and the demands on its grippers are more complex. However, it depends on the gripping requirements of the samples themselves. It is rarely sufficient to use a classic 2D area scan camera, and a 3D camera is required instead. Many picking applications also require a vision inspection step, which burdens the system and slows down the entire cycle time.

Random picking

Samples are randomly loaded in a carrier or pallet. On the one hand, this requires minimal preparation of samples for picking, but on the other hand, it significantly increases the demands on the process that will make a 3D vision system a requirement. You need to consider that there are very often collisions between selected samples. This is a factor not only when looking for the right gripper but also for the approach of the whole picking process.

Compared to structured picking, the cycle time is extended by the scanning evaluation, the robot trajectory, and mounting accuracy. Some applications require the deployment of two picking stations to meet the required cycle time. It is often necessary to limit the gripping points available to the robot, which increases the demands on 3D image quality, grippers, and robot path planning, and can also require an intermediate step to place the sample in the exact position needed for gripping.

In the end, the complexity of the picking solution is set primarily by the way the samples are arranged: the less structured their arrangement, the more complicated the system must be to meet the project’s demands. By considering how samples are organized before they are picked, as well as the picking process itself, you can design an overall process that best meets your requirements.
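As a rough, illustrative summary of the trade-offs above (the groupings and wording are this sketch's assumptions, not a product recommendation), the three arrangements can be captured in a small lookup helper:

    # Illustrative only: map each picking arrangement to the components discussed above.
    PICKING_REQUIREMENTS = {
        "structured": {
            "vision": "2D camera or smart sensor (sometimes none at all)",
            "robot": "SCARA or Delta for maximum speed and short cycle time",
            "notes": "highly organized samples, often in a single layer",
        },
        "semi-structured": {
            "vision": "3D camera, often plus a vision inspection step",
            "robot": "six-axis robot with more complex gripping",
            "notes": "some predictability in sample placement",
        },
        "random": {
            "vision": "3D vision system with collision checking",
            "robot": "six-axis robot; possibly two picking stations",
            "notes": "longer cycle time; may need an intermediate re-placement step",
        },
    }

    def components_for(arrangement: str) -> dict:
        """Return the component summary for a given arrangement type."""
        return PICKING_REQUIREMENTS[arrangement.lower()]

    print(components_for("random")["vision"])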

Add Depth to Your Processes With 3D Machine Vision

What comes to mind first when you think of 3D? Cheap red and blue glasses? Paying extra at a movie theater? Or maybe the awkward top screen on a Nintendo 3DS? Neither industrial machine vision nor robot guidance likely come to mind, but they should.

Advancements in 3D machine vision have taken the old method of 2D image processing and added literal depth. You become immersed in the application with a true definition of the target—far from what you get looking at a flat image.

See For Yourself

Let’s do an exercise: Close one eye and try to pick up an object on your desk by pinching it. Did you miss it on the first try? Did things look foreign or off? This is because your depth perception is skewed with only one vision source. It takes both eyes to paint an accurate picture of your surroundings.

Now, imagine what you can do with two cameras side by side looking at an application. This is 3D machine vision; this is human.

How 3D Saves the Day

Robot guidance. The goal of robotics is to emulate human movements while working more safely and reliably. So, why not give robots the same vision we possess? When a robot is sent in to do a job, it needs to know the x, y and z coordinates of its target to best control its approach and handle the item(s). 3D does this.

Part sorting. If you are anything like me, you have your favorite parts of Chex mix. Whether it’s the pretzels or the Chex pieces themselves, picking one out of the bowl takes coordination. Finding the right shape and the ideal place to grab it takes depth perception. You wouldn’t use a robot to sort your snacks, of course, but if you need to select specific parts in a bin of various shapes and sizes, 3D vision can give you the detail you need to select the right part every time.

Palletization and/or depalletization. Like in a game of Jenga, the careful and accurate stacking and removing of parts is paramount. Whether it’s for speed, quality or damage control, palletization/ depalletization of material needs 3D vision to position material accurately and efficiently.

I hope these 3D examples inspire you to seek more from your machine vision solution and look to the technology of the day to automate your processes. A picture is worth a thousand words; just imagine what a 3D image can tell you.

Ensure Food Safety with Machine Vision

Government agencies have put food manufacturers under a microscope to ensure they follow food safety standards and comply with regulations. When it comes to the health and safety of consumers, quality assurance is a top priority. Despite this, according to the World Health Organization, approximately 600 million people become ill each year after eating contaminated food, and 420,000 die.

Relying on manual human inspection for quality assurance checks in this industry can be detrimental to the company and its consumers due to human error, fatigue and subjective opinions. Furthermore, foreign particles that should not be found in the product may be microscopic and invisible to the human eye. These defects can lead to illness, recalls, lawsuits, and long-term damage to the brand itself. Packaging, food and beverage manufacturers must recognize these potential risks and review the benefits of incorporating machine vision. Although machine vision implementation may sound like a costly investment, it is a small price to pay compared to the potential damage of uncaught issues. Below I explore a few benefits that machine vision offers in the packaging, food and beverage industries.

Safety
Consumers expect and rely on safe products from food manufacturers. Machine vision can see through packaging to determine the presence of foreign particles, ensuring contaminated products are removed from the production line. Machine vision is also capable of inspecting for cross-contamination, color correctness, ripeness, and even spoilage. For example, bruises on apples can be hard for the untrained eye to spot unless they are extremely pronounced. SWIR (shortwave infrared) illumination proves effective for detecting such defects and contamination: with optimized lighting, subsurface bruising becomes much easier to detect, and the defective products can be scrapped.

Uniformity of Containers
Brand recognition is huge for manufacturers in this industry. Products with defects such as dents or uneven contents inside the container can greatly affect the public’s perception of the product and/or company. Machine vision can detect even the slightest deformity in a container and ensure it is removed from the line. It can also scan the inside of the container to verify that the product is uniform for each batch. Vision systems have the ability to optimize lighting intensity, uniformity, and geometry to obtain images with good contrast and signal-to-noise ratio. Being able to alter the lighting provides a much clearer image of the point of interest, which can allow you to see inside a container and determine whether the fill level is correct for the specific product.

Packaging
Packaging is important because if the products shipped to a store are regularly defective, the store can choose to stop stocking that item, costing the manufacturer valuable business. The seal must last from production to arrival at the store so that the product maintains its safe usability through its marked expiration date. In bottling applications, the conveyors move at high speeds, so the inspection process must be able to quickly and correctly identify defects. A facility in Marseille, France was looking to inspect Heineken beer bottles as they passed through a bottling machine at a rate of 22 bottles per second (80,000 bottles per hour). Although this is on the faster end of the spectrum, many applications require high-speed quality checks that are impossible for a human operator. A machine vision system can be configured to handle these high-speed applications and taught to detect the specified defects.

Labels

It’s crucial for labels to be printed correctly and placed on the correct product because of the food allergy threats that some consumers face. Machine vision can also benefit this part of the production process, as cameras can be taught to recognize the correct label and brand guidelines. Typically, these production lines move at speeds too fast for human inspection. An intuitive, easy-to-use machine vision software package allows you to filter the labels, find the object using reference points, and validate the text quickly and accurately.

These areas of the assembly process throughout packaging, food and beverage facilities should be considered for machine vision applications. Understanding what problems occur and the cost associated with them is helpful in justifying whether machine vision is right for you.

For more information on machine vision, visit https://www.balluff.com/local/us/products/product-overview/machine-vision-and-optical-identification/.

Document Product Quality and Eliminate Disputes with Machine Vision

“I caught a record-breaking walleye last weekend,” an excited Joe announced to his colleagues after returning from his annual fishing excursion to Canada.

“Record-breaking? Really? Prove it,” demanded his doubtful co-worker.

“Well, I left my cell phone in the cabin so it wouldn’t get wet on the boat, so I couldn’t take a picture, but I swear that big guy was the main course for dinner.”

“Okay, sure it was, Joe.”

We have all been there — spotted a mountain lion, witnessed an amazing random human interaction, or caught a glimpse of a shooting star. These are great stories, but they are so much more believable and memorable with a picture or video to back them up. Nowadays, we all carry a camera within arm’s reach. Capturing life events has never been easier or more common, so why not use cameras to document and record important events and stages within your manufacturing process?

As smartphones become more advanced and common, so do the technology and hardware for industrial cameras (i.e., machine vision). Machine vision can do much more than pass/fail and measurement-type applications. Taking, storing, and relaying pictures at different stages of a production process can not only set you apart from the competition but also save you costly quality disputes after the product leaves your facility. A picture can tell a thousand words, so what do you want to tell the world? Here are a couple of examples of how you can back up your brand with machine vision:

Package integrity: We have all seen the reduced rack at a grocery store where a can is dented or missing a label. If this was caused by a large-scale label application defect, someone is losing business. So, before everyone starts pointing fingers, the manufacturer could simply provide a saved image from their end-of-line vision system to prove the cans were labeled when shipped from their facility.

Assembly defects: When you are producing assembled parts for a larger manufacturer, the standards they set are what you live and die by.  If there is ever a dispute, having several saved images from either individual parts or an audit of them throughout the day could prove your final product met their specifications and could save your contract.

Barcode legibility and placement: Show your retail partners that your product’s barcode will not frustrate the cashier with a poorly printed or misplaced code. Share images of an industrial camera easily reading the code along the packaging line, along with a barcode grade, to demonstrate that a hassle-free checkout and their barcode requirements are both being met.
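To make the image-audit idea concrete, here is a minimal, hypothetical sketch of archiving inspection images with searchable metadata; the folder layout, file names, and fields are assumptions, not a description of any particular vision system.

    # Hedged sketch: archive an inspection image with metadata for later disputes.
    # The directory layout, file naming, and metadata fields are assumptions.
    import json
    import shutil
    from datetime import datetime, timezone
    from pathlib import Path

    ARCHIVE_ROOT = Path("quality_archive")  # hypothetical archive location

    def archive_inspection(image_path: str, station: str, lot: str, result: str) -> Path:
        """Copy an inspection image into a dated folder and write a metadata record."""
        timestamp = datetime.now(timezone.utc)
        folder = ARCHIVE_ROOT / station / timestamp.strftime("%Y-%m-%d")
        folder.mkdir(parents=True, exist_ok=True)

        stem = f"{lot}_{timestamp.strftime('%H%M%S')}"
        image_copy = folder / f"{stem}.png"
        shutil.copy2(image_path, image_copy)

        metadata = {"station": station, "lot": lot, "result": result,
                    "captured_utc": timestamp.isoformat()}
        (folder / f"{stem}.json").write_text(json.dumps(metadata, indent=2))
        return image_copy

    # Example (hypothetical file): archive the end-of-line label check for lot "A123".
    # archive_inspection("label_check.png", station="end_of_line", lot="A123", result="pass")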

In closing, pictures always help tell a story and make it more credible.  Ideally your customers will take your word for it, but when you catch the record-breaking walleye, you want to prove it.

The Right Mix of Products for Recipe-Driven Machine Change Over

The filling of medical vials requires flexible automation equipment that can adapt to different vial sizes, colors and capping types. People are often deployed to make those equipment changes, a process also known as a recipe change. But by nature, people are inconsistent, and that inconsistency causes errors and delays during changeover.

Here’s a simple recipe to deliver consistency through operator-guided/verified recipe changes. The following ingredients provide a solid recipe-driven changeover:

Incoming Components: Barcode

Fixed-mount and handheld barcode scanners at the point of loading ensure the correct parts are loaded.

Change Parts: RFID

Any machine part that must be replaced during a changeover can have a simple RFID tag installed. A read head reads the tag to ensure it’s the correct part.

Feed Systems: Position Measurement

Some feed systems require only millimeters of adjustment. Position sensors ensure the feed system is set to the correct recipe and is ready to run.

Conveyors Size Change: Rotary Position Indicator

Guide rails and larger sections are adjusted with the use of hand cranks. Digital position indicators show the intended position based on the recipes. The operators adjust to the desired position and then acknowledgment is sent to the control system.

Vial Detection: Array Sensor

Sensor arrays can capture more information, even with the vial variations. In addition to vial presence detection, the size of the vial and stopper/cap is verified as well. No physical changes are required. The recipe will dictate the sensor values required for the vial type.

Final Inspection: Vision

For label placement and defect detection, vision is the go-to product. The recipe will call up the label parameters to be verified.

Traceability: Vision

Often used in conjunction with final inspection, traceability requires capturing the barcode data from the final vials. There are often multiple 1D and 2D barcodes that must be read. A powerful vision system with a larger field of view is ideal for the changing recipes.

All of these ingredients are best when tied together with IO-Link. This ensures easy implementation with class-leading products. With all these ingredients, it has never been easier to implement operator-guided/verified size change.
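As a purely illustrative sketch (the recipe fields, tag IDs, and tolerances are assumptions, not a vendor interface), a recipe-driven changeover can be modeled as a data record that each station verifies before the line is released:

    # Hedged sketch: a recipe record and a simple verification check.
    # Field names, tag IDs, and tolerances are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Recipe:
        name: str
        change_part_tag: str        # RFID tag expected on the installed change part
        feed_position_mm: float     # target feed-system position
        feed_tolerance_mm: float
        rail_position_index: int    # target value on the rotary position indicator
        vial_profile: str           # expected array-sensor profile for this vial type
        label_program: str          # vision program used for final inspection

    RECIPES = {
        "10ml_amber": Recipe("10ml_amber", "TAG-4711", 42.5, 0.2, 7, "vial_10ml", "label_10ml"),
    }

    def verify_changeover(recipe, tag, feed_mm, rail_index):
        """Return a list of mismatches; an empty list means the changeover is verified."""
        issues = []
        if tag != recipe.change_part_tag:
            issues.append(f"wrong change part: read {tag}, expected {recipe.change_part_tag}")
        if abs(feed_mm - recipe.feed_position_mm) > recipe.feed_tolerance_mm:
            issues.append(f"feed system at {feed_mm} mm, expected {recipe.feed_position_mm} mm")
        if rail_index != recipe.rail_position_index:
            issues.append(f"guide rails at index {rail_index}, expected {recipe.rail_position_index}")
        return issues

    print(verify_changeover(RECIPES["10ml_amber"], "TAG-4711", 42.6, 7))  # -> []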

Machine Vision: 5 Simple Steps to Choose the Right Camera

The machine vision and industrial camera market offers thousands of models with different resolutions, sizes, speeds, colors, interfaces, prices, etc. So, how do you choose? Let’s go through 5 simple steps that will ensure easy selection of the right camera for your application.

1.  Defined task: color or monochrome camera  

2.  Amount of information: minimum of pixels per object details 

3.  Sensor resolution: formula for calculating the image sensor 

4.  Shutter technology: moving or static object 

5.  Interfaces and camera selector: let’s pick the right model

STEP 1 – Defined task  

Always start with the size of the scanned object (X, Y) and determine the smallest detail (d) that you want to distinguish with the camera.

For an easier explanation, we will work through a measurement task; however, the same approach can be used for any other application.

In this task, the distance (D) between the centers of two holes must be determined with a measurement accuracy of (d). Using these values, we then determine the parameters for selecting the right image sensor and camera.

Example:
Distance (D) between 2 points with measuring accuracy (d) of 0.05 mm. Object size X = 48 mm (monochrome sensor, because color is not relevant here)

Note: Monochrome or color?
Color sensors use a Bayer color filter, which allows only one basic color to reach each pixel. The missing colors are determined using interpolation of the neighboring pixels. Monochrome sensors are twice as light sensitive as color sensors and lead to a sharper image by acquiring more details within the same number of pixels. For this reason, monochrome sensors are recommended if no color information is needed.
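To illustrate the Bayer interpolation mentioned in the note, the sketch below demosaics a synthetic single-channel Bayer frame with OpenCV; the Bayer pattern constant (BG here) is an assumption and must match the real sensor.

    # Hedged sketch: demosaic a (synthetic) single-channel Bayer frame with OpenCV.
    # The Bayer pattern constant must match the real sensor; BG is an assumption here.
    import cv2
    import numpy as np

    # Synthetic 8-bit Bayer mosaic standing in for a raw color-sensor frame.
    raw_bayer = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

    # Color sensor path: the missing colors are interpolated from neighboring pixels.
    color_image = cv2.cvtColor(raw_bayer, cv2.COLOR_BayerBG2BGR)

    # Monochrome sensor path: every pixel is used directly, with no interpolation step.
    mono_image = raw_bayer

    print(color_image.shape, mono_image.shape)  # (480, 640, 3) (480, 640)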

STEP 2 – Amount of information

Each type of application needs a different amount of information to be solved. This is defined by the minimum number of pixels per object detail. Let’s again use the monochrome option.

Minimum number of pixels per object detail:

  • Object detail measuring / detection: 3 pixels
  • Barcode line width: 2 pixels
  • Data Matrix code module width: 4 pixels
  • OCR character height: 16 pixels

Example:
Measuring requires 3 pixels per object detail, so the necessary accuracy (d), which is 0.05 mm in this example, is imaged onto 3 pixels.

Note:
Each feature or application type presupposes a minimum number of pixels; this avoids the loss of information through sampling blur.

STEP 3 – Sensor resolution

We have already defined the object size as well as the required accuracy. As a next step, we define the resolution of the camera, using a simple formula to calculate the required image sensor.

S = (N x O) / d = (min. number of pixels per object detail x object size) / object detail size

Object size (O) can be described horizontally as well as vertically. Some sensors are square, which eliminates this problem 😊

Example:
S = (3 x 48 mm) / 0.05 mm = 2880 pixels

Looking at the available image sensors, the closest match is a model with a resolution of 3092 x 2080 pixels, i.e. a 6.4-megapixel image sensor.

Note:
Pay attention to the format of the sensor.

For a correct calculation, it is necessary to check the resolution, not only in the horizontal but also in the vertical axis.
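The calculation from Steps 2 and 3 is easy to script. The minimal sketch below uses the numbers from this example (the vertical object size is an assumed value, since it is not given above) and checks both axes, as the note recommends:

    # Hedged sketch: required sensor resolution per axis, using the Step 3 formula
    # S = (N x O) / d. Values are taken from the example above; Y is assumed.
    def required_pixels(min_pixels_per_detail, object_size_mm, detail_size_mm):
        """Return the minimum sensor resolution along one axis."""
        return (min_pixels_per_detail * object_size_mm) / detail_size_mm

    N = 3        # pixels per object detail for measuring (Step 2)
    d = 0.05     # required accuracy in mm
    X = 48.0     # horizontal object size in mm (from the example)
    Y = 30.0     # assumed vertical object size in mm (not given in the example)

    horizontal = required_pixels(N, X, d)   # 2880 pixels
    vertical = required_pixels(N, Y, d)     # 1800 pixels with the assumed Y

    print(f"required resolution: {horizontal:.0f} x {vertical:.0f} pixels")
    # A 3092 x 2080 (6.4 MP) sensor covers both requirements in this example.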

STEP 4 – Shutter technology

Global shutter versus rolling shutter.

Both technologies are standard in machine vision, and you can find hundreds of cameras with each.

Rolling shutter: exposes the image line by line. This procedure results in a time delay for each acquired line, so moving objects appear blurred or distorted in the resulting image because of the generated “object time offset.”

Pros:

    • More light sensitive
    • Less expensive
    • Smaller pixel size provides higher resolution with the same image format.

Cons:

    • Image distortion occurs on moving objects

Global shutter: used to get distortion-free images by exposing all pixels at the same time.

Pros:

    • Great for fast processes
    • Sharp images with no blur on moving objects.

Cons:

    • More expensive
    • Larger image format

Note:
The newest rolling shutter sensors have a feature called global reset mode, which starts the exposure of all rows simultaneously; however, the lines are still read out one after another, as with a normal rolling shutter.

This means the bottom lines of the sensor are exposed to light longer. For this reason, this mode only makes sense if there is no extraneous light and the flash duration is shorter than or equal to the exposure time.

STEP 5 – Interfaces and camera selector

Final step is here:

You must consider the possible speed (bandwidth) as well as the cable length offered by each camera interface technology.
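As a quick, hedged sanity check (the interface figures below are rough nominal rates, not guaranteed throughput), you can estimate whether an interface has enough bandwidth for your sensor and frame rate:

    # Hedged sketch: rough bandwidth estimate for the 6.4 MP example sensor.
    # Nominal interface rates are approximations and vary in practice.
    NOMINAL_MB_PER_S = {"USB2": 40, "GigE Vision": 115, "USB3 Vision": 400}

    width, height = 3092, 2080      # example sensor from Step 3
    bytes_per_pixel = 1             # 8-bit monochrome
    fps = 25                        # assumed frame rate

    required_mb_s = width * height * bytes_per_pixel * fps / 1e6
    print(f"required: {required_mb_s:.0f} MB/s")   # about 161 MB/s

    for interface, budget in NOMINAL_MB_PER_S.items():
        verdict = "ok" if required_mb_s <= budget else "not enough bandwidth"
        print(f"{interface}: ~{budget} MB/s -> {verdict}")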

USB2
Small, handy and cost-effective, USB 2.0 industrial cameras have become integral parts in the area of medicine and microscopy. You can get a wide range of different variants, including with or without housings, as board-level or single-board, or with or without digital I/Os.

USB3/GigE Vision
Without standards, every manufacturer does their own thing, and many advantages customers have learned to love with the GigE Vision standard would be lost. Like GigE Vision, USB3 Vision also defines:

    • a transport layer, which controls the detection of a device (Device Detection)
    • the configuration (Register Access)
    • the data streaming (Streaming Data)
    • the handling of events (Event Handling)
    • an established interface to GenICam

GenICam abstracts access to the camera features for the user. The features are standardized (name and behavior) by the Standard Feature Naming Convention (SFNC). Additionally, vendors can implement specific features beyond the SFNC to differentiate themselves (quality of implementation). In contrast to GigE Vision, with USB3 Vision the mechanics (e.g., lockable cable connectors) are part of the standard, which leads to a more robust interface; a minimal acquisition sketch follows below.
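In practice, GigE Vision and USB3 Vision cameras can both be accessed through GenICam-based software. The sketch below uses the open-source Python library harvesters as one example; the GenTL producer path is an assumption, and exact method names can differ between library versions.

    # Hedged sketch: grab one frame from a GenICam (GigE/USB3 Vision) camera using
    # the open-source "harvesters" library. The .cti producer path is an assumption,
    # and API details may differ between harvesters versions.
    from harvesters.core import Harvester

    h = Harvester()
    h.add_file("/opt/genicam/producer.cti")   # hypothetical GenTL producer path
    h.update()                                # discover connected cameras

    ia = h.create_image_acquirer(0)           # first camera found
    ia.start_acquisition()
    with ia.fetch_buffer() as buffer:
        component = buffer.payload.components[0]
        print("frame:", component.width, "x", component.height)
    ia.stop_acquisition()
    ia.destroy()
    h.reset()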

I believe these five points will help you choose the most suitable camera. Are you still unclear? Do not hesitate to contact us, or contact me directly: I will be happy to consult on your project, needs or any questions.

Be Driven by Data and Decrease Downtime

Being “driven by data” is simply the act of making decisions based on real data instead of guessing or basing them on theoretical outcomes. Why one should do that, especially in manufacturing operations, is obvious. How it is done is not always so clear.

Here is how you can use a sensor, indicator light, and RFID to provide feedback that drives overall quality and efficiency.

Machine Condition Monitoring

You’ve heard the saying, “if it ain’t broke, don’t fix it.” However, broken machines cause downtime. What if there was a way to know when a machine is getting ready to fail, and you could fix it before it caused downtime? You can do that now!

The two main types of data measured in manufacturing applications are temperature and vibration. A sudden or gradual increase in either of these is typically an indicator that something is going wrong. Just having access to that data won’t stop the machine from failing, though. Combined with an indicator light and RFID, the sensor can provide real-time feedback to the operator, and the event can be documented on the RFID tag. The machine can then be adjusted or repaired during a planned maintenance period.
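As a minimal, hypothetical sketch of the feedback loop described above (the thresholds, light states, and RFID record format are all assumptions, not a specific product interface), the logic can look like this:

    # Hedged sketch: compare vibration/temperature readings to thresholds, drive an
    # indicator-light state, and build the event record destined for the RFID tag.
    # Thresholds, states, and the record format are illustrative assumptions.
    from datetime import datetime, timezone

    VIBRATION_LIMIT_MM_S = 4.5   # assumed warning threshold (velocity RMS)
    TEMPERATURE_LIMIT_C = 70.0   # assumed warning threshold

    def evaluate(vibration_mm_s, temperature_c):
        """Return an indicator-light state based on the latest readings."""
        if vibration_mm_s > VIBRATION_LIMIT_MM_S or temperature_c > TEMPERATURE_LIMIT_C:
            return "YELLOW"   # warn the operator: plan maintenance before failure
        return "GREEN"        # machine condition normal

    def rfid_event_record(machine_id, state, vib, temp):
        """Build the event record that would be written to the machine's RFID tag."""
        return {"machine": machine_id, "state": state, "vibration_mm_s": vib,
                "temperature_c": temp, "utc": datetime.now(timezone.utc).isoformat()}

    state = evaluate(vibration_mm_s=5.1, temperature_c=62.0)
    print(state)                                          # -> YELLOW
    print(rfid_event_record("press_07", state, 5.1, 62.0))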

Managing Quality – A machine on its way to failure can produce parts that don’t meet quality standards. Fixing the problem before it affects production prevents scrap and rework and ensures the customer is getting a product with the quality they expect.

Managing Efficiency – Unplanned downtime costs thousands of dollars per minute in some industries. The time and resources required to deal with a failed machine far exceed the cost of the entire system designed to produce an early warning, provide indication, and document the event.

Quality and efficiency are the difference makers in manufacturing. That is, whoever makes the highest quality products most efficiently usually has the most profitable and sustainable business. Again, why is obvious, but how is the challenge. Hopefully, you can use the above data to make higher quality products more efficiently.

More to come! Here are the data-driven topics I will cover in my next blogs:

  • Part inspection and data collection for work in process
  • Using data to manage molds, dies, and machine tools