The Evolution of Barcode Scanning in Logistics Automation


Barcodes have played a pivotal role in revolutionizing supply chains since the 1970s. Traditional LED and laser scanners have been the go-to solution for reading barcodes, but with advancements in technology, new possibilities have emerged.

Here, I explore the limitations of traditional scanners and the rise of camera-based barcode scanners empowered by image analysis systems. I will delve into the intricate operations performed by these scanners and their superior efficiency in barcode location and decoding. Additionally, I will discuss the ongoing research in computer vision-based barcode reading techniques and the broader impact of machine vision in logistics beyond barcode scanning.

The limitations of traditional scanners

Traditional barcode readers operate by shining LED or laser light across a barcode, with the reflected beam detected by a photoelectric cell. While simple and effective in their time, these scanners have certain limitations that hinder their performance and restrict their application range. They require prior knowledge of barcode location, struggle with complex scenes, and are unable to read multiple barcodes simultaneously. Moreover, low-quality barcodes pose challenges, potentially leading to losses in time, money, and reputation.

The rise of camera-based barcode scanners

Camera-based barcode scanners, empowered by image analysis systems, have emerged as a game-changer in logistics automation. These scanners perform intricate operations, starting with image acquisition and preprocessing. Images are converted to grayscale, noise is reduced, and barcode edges are enhanced using various filters. Binarization is then applied, isolating black and white pixels for decoding. Unlike traditional scanners, image-based barcode scanners excel in barcode location and decoding. They eliminate the need for prior knowledge of barcode position and can locate and extract multiple barcodes in a single image.
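
As a rough illustration of this pipeline, the sketch below converts an RGB image to grayscale, applies a simple box blur for noise reduction, and binarizes against the global mean. It is a simplified stand-in for a production image-analysis system, which would use more sophisticated filters such as Gaussian smoothing and Otsu or adaptive thresholding:

```python
import numpy as np

def preprocess_barcode_image(rgb):
    """Toy preprocessing pipeline: grayscale -> smoothing -> binarization."""
    # 1. Convert to grayscale using standard luminance weights.
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # 2. Reduce noise with a 3x3 box blur (real scanners typically use
    #    Gaussian or median filters).
    padded = np.pad(gray, 1, mode="edge")
    blurred = sum(
        padded[i:i + gray.shape[0], j:j + gray.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    # 3. Binarize against the global mean (real scanners use Otsu or
    #    adaptive thresholds): dark bars -> 0, light background -> 1.
    return (blurred > blurred.mean()).astype(np.uint8)
```

Barcode location and decoding would then operate on the binarized image, for example by scanning rows for bar/space run lengths.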

The advantages of optical barcode scanners

As technology progresses, optical barcode scanners are gradually replacing LED and laser-based solutions, offering superior efficiency and performance. Computer vision-based barcode reading techniques have sparked extensive research, addressing challenges in both location and decoding steps. Barcode localization, the most intricate part, involves detecting and extracting barcodes accurately despite illumination variations, rotation, perspective distortion, or camera focus issues. Researchers continually refine barcode extraction techniques, using mathematical morphology and additional preprocessing steps for precise recognition.

Beyond barcode scanning: the impact of machine vision in logistics

The impact of machine vision in logistics extends beyond barcode scanning. Robot-operated warehouses, such as those employed by Amazon, rely on 2D barcodes to navigate shelves efficiently. Drones equipped with computer vision capabilities open new possibilities for delivery services, enabling autonomous and accurate package handling.

Machine vision technology is revolutionizing the way logistics operations are conducted, enhancing efficiency, accuracy, and overall customer experience.

Overcoming Challenges in Metal Detection: The Power of Factor 1 Sensors

Standard inductive proximity sensors are used across the automation industry for metal detection and are generally reliable in these operations. Issues arise, however, when switching from steel to other metals like copper, brass, or aluminum. Because of the reduction factor, a standard inductive sensor detects these different metals at different distances. If a sensor were mounted and set up to sense steel but the material switched to copper, for example, the copper might fall outside the sensor’s range due to this difference in reduction factor, resulting in a missed reading. Factor 1 sensors were created to eliminate this problem.

Reduction factor

The reduction factor is the root cause of variable distance readings with a standard inductive sensor. But what exactly is it? The standard operating range of an inductive proximity sensor is determined by its response to a one-millimeter-thick square piece of mild steel. Other metals like copper and aluminum deviate from this standard range due to differences in material properties. For example, copper has a reduction factor of around 0.4, so it can only be detected at 0.4 times the standard operating range of an inductive proximity sensor.

We can save for later the details of why this occurs, but the key point here is that different material properties cause different reduction factors, which result in different switching distances. The table below shows these different reduction factors and switching distances. Factor 1 sensors take all these variable reduction factors and equalize them to a standard operating distance. This means that you can read anything from copper to steel at the same range, reducing the possibility of missed readings and eliminating the need for repositioning sensors whenever a material change occurs.
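
The relationship can also be sketched in code. The reduction factors below are typical approximate values for illustration only; actual factors vary by sensor model and manufacturer, so always check the datasheet:

```python
# Typical reduction factors relative to mild steel (approximate,
# illustrative values -- real figures vary by sensor model).
REDUCTION_FACTORS = {
    "mild steel": 1.0,
    "stainless steel": 0.85,
    "brass": 0.5,
    "aluminum": 0.45,
    "copper": 0.4,
}

def switching_distance(nominal_range_mm, material, factor_1=False):
    """Effective switching distance for a given target material.

    A Factor 1 sensor equalizes all materials to the nominal range.
    """
    if factor_1:
        return nominal_range_mm
    return nominal_range_mm * REDUCTION_FACTORS[material]
```

With an 8 mm nominal range, copper would only be detected out to about 3.2 mm on a standard sensor, but at the full 8 mm on a Factor 1 sensor.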

When to use Factor 1 sensors

Factor 1 sensors are well-suited for any process that involves different metals. Whether it is automated welding or a packaging conveyor, the factor 1 sensor will keep the material switching ranges uniform. But why is this such a big advantage?

Think about the time spent adjusting sensor distances. Not only is the task tedious, it also takes up production time. Having factor 1 sensors in place increases the uptime of these processes and eliminates the need for sensor adjustments.

One last benefit to note about factor 1 sensors is that they are inherently weld field immune. The internal construction of the sensor prevents it from being affected by the electromagnetic field generated during welding. This additional immunity allows the sensor to survive in these welding conditions where a typical sensor might fail if it comes in proximity to the weld field.

In the end, you know your application best, but if any of the above benefits resonate with you, it’s time to start thinking about factor 1.

Capacitive Sensors – the One Technology That Can Sense It All?

I choose capacitive sensors every day to solve application challenges in the life science and semiconductor industries. Capacitive sensors in life science reliably detect liquid levels of reagents, buffers, and all manner of biological substances. In the semiconductor industry, capacitive sensors are in wide use in “wet” processes, such as monitoring liquids in etching and deposition tools.

In standard industrial applications, capacitive sensors detect plastics and liquids. Although they can also detect just about anything else, better alternatives keep them from becoming the “one sensor to sense it all.” When sensing metals, for instance, an inductive sensor is the better choice (for cost, safety, and other reasons).

Furthermore, special patented capacitive sensors exist for stubborn liquids. These sensors can ignore foam and/or the presence of material coating the inner walls of containers, which can lead to false triggering with standard sensors. These smart capacitive sensors use the slight conductance of the liquid to cleverly provide a more precise and reliable liquid level reading.

So, although capacitive sensors can sense just about anything, they are mostly relegated to sensing non-metallic objects and liquids.

How capacitive sensors work

Contrary to common belief, capacitive sensors do not detect a target based on its density. While it may seem logical that a target being denser than air is the basis for detection, understanding the actual working mechanism of capacitive sensors can save us some application grief.

Capacitive sensors create an electrostatic field between two conductive plates, much like a capacitor, except that instead of the plates opposing each other as in a capacitor, a capacitive sensor has its plates side by side. In either case, the mathematical formulas are the same, with only the area of the plates, the distance between them, the number of plates, and the εr of the material as variables.

What is εr?

εr is the relative permittivity of the material to be sensed, also known as the dielectric constant. When speaking of relative permittivity or dielectrics, we are speaking of materials that do not readily conduct electricity, materials called insulators. The dielectric constant (K) is the ratio of a substance’s permittivity to that of a vacuum, where K=1. With capacitive sensors, we typically use air as the starting point, with K=1.00059, which is close enough for our purposes. A capacitive sensor triggers its output when an object with a higher K than air is sensed in the electrostatic field. But first the field must interact with the target material, which is where the “K” becomes important.

Relative permittivity is a measure of how easy or difficult it is to polarize a substance (as a ratio to a vacuum, which equals 1). Polarizing is accomplished by placing the matter in an electrostatic field, which causes its molecules to rotate and line up with the field. The more easily the matter lines up, the higher the K value.

So, a capacitive sensor works by polarizing the target material, which in turn creates a higher capacitance in the sensor’s circuitry. Internal to the sensor is an oscillator or generator (to create the electrostatic field), comparators, op-amps, etc. These components determine if the internal capacitance of the circuitry has changed enough to trigger an output.

Why is water so easy to polarize? Because it’s already a “polar” molecule.

Because of its bent shape and the uneven sharing of electrons between oxygen and hydrogen, each water molecule is already a tiny electric dipole, with a slight positive charge on one end and a slight negative charge on the other. When water is placed in an electrostatic field, the molecules easily align with the sensor’s plates: the positive side of each molecule points toward the negative plate, and the negative side points toward the positive plate.

In the end, the density explanation doesn’t hold water: glass is much denser than water, yet water, because it polarizes so easily, is far easier for a capacitive sensor to recognize. That’s not to say a capacitive sensor cannot sense glass; it can sense just about any material. But with such a large difference in dielectric constants between water and glass, the sensor’s gain (trimpot or teach wire) can be adjusted to reduce its sensitivity, ignore the glass or plastic container, and sense only the water-based media inside. Make sense?

Capacitive sensors work by:

    1. Polarizing the target media as it enters the sensor’s electrostatic field
    2. Measuring the internal increase in capacitance due to the polarized media
    3. Creating an output once the set threshold of internal capacitance is exceeded
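
These three steps can be sketched as a toy model. The parallel-plate formula stands in for the sensor's fringing field, and the εr values, plate geometry, and threshold ratio below are illustrative assumptions, not specifications of any real sensor:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(er, area_m2=1e-4, gap_m=1e-3):
    # Parallel-plate approximation standing in for the sensor's
    # fringing electrostatic field (illustrative geometry).
    return EPS0 * er * area_m2 / gap_m

def sensor_output(er_target, er_air=1.00059, threshold_ratio=2.0):
    """Trigger when the target raises capacitance past a set threshold.

    threshold_ratio plays the role of the gain setting (the 'trimpot'
    or teach wire); the default value here is an arbitrary example.
    """
    return capacitance(er_target) / capacitance(er_air) >= threshold_ratio

# Water (er ~ 80) triggers easily; raising the threshold lets the
# sensor ignore a glass container (er ~ 5) and see only the liquid.
```

Raising `threshold_ratio` above 5 is exactly the gain adjustment described earlier for ignoring the container and sensing only the water-based media inside.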

So next time you’re looking to sense an object or liquid, take a look at a table of dielectrics, and consider a capacitive sensor to do the job.

Revisiting the Key Points of IO-Link

IO-Link is a communication protocol for use in industrial automation systems to connect sensors and actuators to a central control system. It provides a standardized interface for the communication and configuration of devices, allowing for seamless integration and easy parameterization.

Here are some key points about IO-Link:

    • Communication: IO-Link uses a point-to-point serial communication link between the IO-Link master and the IO-Link devices (sensors or actuators). Typically, the communication occurs over a standard 3-wire sensor cable.
    • Master/device architecture: The IO-Link system consists of an IO-Link master, which serves as a gateway between the IO-Link devices and the control system. The IO-Link master can connect to multiple IO-Link devices in a network.
    • Device identification: Each IO-Link device identifies itself uniquely on the network. When a device connects, the IO-Link master automatically recognizes it, and the device communicates its parameters and capabilities to the master.
    • Configuration and parameterization: IO-Link allows for easy configuration and parameterization of connected devices. Through the master, the control system can read and write device parameters, such as sensor ranges, output behavior, and diagnostic information.
    • Data exchange: IO-Link supports the exchange of process data, event data, and service data. Process data, the primary information exchanged between the device and the control system, represents the measured or controlled variables. Event data carries status and diagnostic information, while service data is used for configuration and parameterization.
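
As a rough illustration of the master/device relationship and the three data types, here is a toy Python model. It mimics the concepts only, not the real IO-Link wire protocol or frame format:

```python
from dataclasses import dataclass, field

@dataclass
class IOLinkDevice:
    """Toy model of an IO-Link device (not the real wire protocol)."""
    vendor_id: int
    device_id: int
    parameters: dict = field(default_factory=dict)   # service data
    process_value: float = 0.0                       # process data
    events: list = field(default_factory=list)       # event data

class IOLinkMaster:
    """Toy master: identifies devices and exposes their data."""
    def __init__(self):
        self.ports = {}

    def connect(self, port, device):
        # On connection, the master recognizes the device and gains
        # access to its parameters and capabilities.
        self.ports[port] = device
        return device.vendor_id, device.device_id

    def read_parameter(self, port, name):          # service data
        return self.ports[port].parameters[name]

    def write_parameter(self, port, name, value):  # service data
        self.ports[port].parameters[name] = value

    def read_process_data(self, port):             # process data
        return self.ports[port].process_value
```

The key idea shown here is that parameterization happens through the master, so the control system can read and write device settings without touching the device directly.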

Overall, IO-Link offers a flexible and standardized communication platform for connecting sensors and actuators in industrial automation systems. Its ease of use, configurability, and diagnostic capabilities make it a popular choice for modern industrial applications.

Click here for some IO-Link application examples.

Using RFID Databolts in an Engine Assembly Plant

There are many types of RFID processors and network protocols to keep in mind as you’re installing an RFID system on your automotive plant manufacturing line. This blog post focuses on RFID databolts. I’ll discuss best practices for installing them, how to use RFID technology to track engine parts and components throughout the production process, and how to use RFID databolts to provide instructions and document the finished process.

The RFID databolt is a threaded device that can be embedded into a blank engine block or other component prior to production. It includes a radio-frequency identification (RFID) tag, a microprocessor, an RFID antenna, and a power source, such as a battery or a connection to a power supply.

When installing an RFID system, it is important to follow best practices for mounting the RFID antennas and databolts. The antennas should be mounted as far from metal as possible, since metal can interfere with the signal. They should also be mounted on Delrin or UHMW mounting plates, as these materials do not interfere with the signal and provide a secure mount.

The data stored on the databolt can be used to ensure quality production and to identify any potential issues that may arise during the assembly process.

RFID databolts can also be used to track engine parts and components throughout the manufacturing process. This allows you to monitor the production process and quickly identify and address any issues. This will help to ensure quality production and to reduce errors and delays in the production process.

RFID databolts are an important part of the automotive manufacturing process and can be used to provide instructions and document the finished process, as well as to track engine parts and components throughout the production process. It is important to keep in mind the best practices for mounting the RFID antennas and databolts, such as mounting the antenna away from metal and using Delrin or UHMW mounting plates. By following these tips, you can ensure quality production and reduce errors and delays in the production process.

Key Considerations for Choosing the Right RFID Tag for Your Traceability Application

Choosing an RFID tag for your traceability application can be difficult given the huge variation of tags available today. Here are four main factors to keep in mind when selecting a tag, which will greatly contribute to the success of your RFID project.  


Choose tag type: I like to start with tags and work backward. Tags come in many shapes and sizes – from paper labels to hang tags, pucks, and even glass capsules and reusable data bolts. First, think about where you want to mount your tag. It is important that it does not interfere with your current product or production process. If you plan to tag a metal product, using a metal-mount style tag will give you the best results.

Assess the required read range: Think about how much range you need between your RFID readers and your tags. Remember that the shorter your range, the more options you will have when selecting a suitable frequency. While all frequencies work for short ranges, long ranges require HF (High Frequency) or even UHF (Ultra High Frequency) products. As a rule of thumb, it is best to keep your reading range as short as possible for the most reliable results.
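
This rule of thumb can be sketched as a simple helper. The range boundaries below are illustrative assumptions; real performance depends on the tag, reader, antenna, and environment:

```python
def suggest_bands(read_range_m):
    """Rule-of-thumb RFID band suggestion from required read range.

    Boundaries are illustrative only -- validate with on-site testing.
    """
    if read_range_m <= 0.1:
        return ["LF", "HF", "UHF"]   # all frequencies work up close
    if read_range_m <= 1.0:
        return ["HF", "UHF"]
    return ["UHF"]                   # long range needs UHF
```

Note how the list of options shrinks as the required range grows, which is why keeping the read range as short as possible gives you the most flexibility.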


Consider the environment: RFID tags are available that withstand high temperatures, chemicals, water, and moisture. If your environment involves any of these conditions, you will want a tag that is up to the challenge and will remain functional.


Choose the data storage option: RFID tags can be read-only or read/write, so think about what kind of data you want to store on your tags. Do you want your tag to be a simple license plate tied back to a centralized database, or do you want to store process and status data directly on the tag? RFID gives you the choice, and now is the time to think about what data, and how much of it, you want to store to maximize the benefit of RFID for your process.


So now that you have thought about tag type, read range, environment, and data, you already have a promising idea of which tags will work in your application. The final step is to get price quotes and get started with your project. This is a wonderful time to ask the RFID experts for more recommendations and ask about on-site testing to make sure your tags are a great fit for your application. It is also an excellent time to collect recommendations for which reader will pair best with your tag and application.

Who Moved My Data? Outsourcing Condition Monitoring

This is the first in a three-part blog series on condition monitoring.


Critical assets are the lifeblood of the manufacturing plant. They are the devices, machines, and systems that, when broken down or not performing to expected standards, can cause downtime and production or quality losses resulting in rejects. If not maintained at optimal levels of performance, these assets can damage the overall reputation of the brand. Some examples include evaporator fans, presses, motors, conveyor lines, mixers, grinders, and pumps.

Most manufacturing plants maintain critical assets on a periodic schedule, also known as preventative maintenance. In recent years, however, condition-based maintenance strategies, made possible by advancements in sensor and communications technologies, have further improved uptime, lowered the overall cost of maintenance, and extended the life of critical assets. Condition-based maintenance relies on continuous monitoring of key parameters of these assets.

Once a plant decides to adopt predictive maintenance (PdM) strategies for its assets, it faces an important decision: implement the condition monitoring strategy in-house, or outsource it to a third party under a newly coined model, continuous condition monitoring as a service (CCMAAS).

The balanced view expressed in this three-part blog series explores these options to help plant managers make the most appropriate decision for their plants. Just a hint: the decision, for the most part, comes down to who controls the data regarding your plant’s critical assets.

In this part, we will delve a little deeper into the advantages and disadvantages of the CCMAAS option.

The advancements in cloud-based data management enable businesses to offer remote monitoring of the data related to the assets. In a nutshell, the service providers will audit the plant’s needs and deploy sensors and devices in the plant. Then, using IoT gateways, they transfer the critical parameters about the assets, such as vibration, temperature, humidity, and other related parameters to the cloud-based storage. The service provider’s proprietary algorithms and expertise would synthesize the data and send the plant’s maintenance personnel alerts about maintenance.
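
The alerting step at the end of that chain can be sketched as a simple threshold check. The parameter names and limits below are illustrative; a real service would apply proprietary analytics rather than fixed thresholds:

```python
def check_asset(readings, limits):
    """Compare sensor readings against alert limits and collect alerts.

    readings and limits are illustrative dicts keyed by parameter name,
    e.g. {"vibration_mm_s": 4.8} -- not any vendor's actual schema.
    """
    return [
        f"{param} = {value} exceeds limit {limits[param]}"
        for param, value in readings.items()
        if param in limits and value > limits[param]
    ]
```

In the outsourced model, logic like this runs on the provider's side, and only the resulting alerts reach the plant's maintenance personnel, which is exactly the data-control trade-off this series examines.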

Advantages of outsourcing condition monitoring:

    1. Expertise and support: By outsourcing data management to a specialized provider, the plant has access to a team of experts who possess in-depth knowledge of condition monitoring and data analytics. These professionals can provide valuable insights, guidance, and technical support.
    2. Scalability and flexibility: Outsourced solutions offer greater scalability, allowing businesses to easily accommodate changing monitoring requirements and fluctuating data volumes.
    3. Cost reduction: Outsourcing eliminates the need for upfront investments in hardware and infrastructure, significantly reducing capital expenses. Instead, companies pay for services based on usage, making it a more predictable and manageable operational expense.

Disadvantages of outsourcing condition monitoring:

    1. Data security concerns: Entrusting critical data to a third-party provider raises concerns about data security and confidentiality. Plants must thoroughly assess the provider’s security protocols, data handling practices, and compliance with industry regulations to mitigate these risks.
    2. Dependency on service providers: Outsourcing data management means relying on external entities. If the service provider has technical difficulties, interruptions in service, or business-related issues, it may impact the organization’s operations and decision-making.
    3. Potential data access and control limitations: Plants may face limitations in accessing and controlling their data in real time. Reliance on a service-level agreement with the provider for data access, retrieval, or system upgrades can introduce delays or restrict autonomy.

Just as critical assets are the lifeblood of manufacturing plants, the data generated every second in the plant will soon be equally important. Outsourcing does allow manufacturing plants to adapt quickly to the new normal in the industry. I would not completely discount outsourcing based on control of the data alone; the option has its place. You will just have to wait for my concluding blog on this topic.

In the meantime, your feedback is always welcome.

Exploring the Versatility of Digital Rotary Encoders

Digital rotary encoders provide precise position feedback and motion control for a range of applications, from simple motor speed control to complex robotics and CNC machines. Here I explore some key applications of digital rotary encoders and how manufacturers benefit from their use.

Incremental and absolute encoders

Digital rotary encoders convert rotary motion into digital signals. They typically consist of a rotating disk or shaft with an optical or magnetic sensor that detects the position of the disk or shaft and generates an electrical signal. The signal can be read by a digital controller, such as a microcontroller or PLC, to provide position feedback and control of motors and other mechanical systems.

There are two main types of digital rotary encoders: incremental and absolute. Incremental encoders generate a series of pulses that indicate the relative position of the encoder shaft or disk. Absolute encoders provide a unique digital code that represents the absolute position of the encoder shaft or disk.

Both types of encoders have their specific applications and choosing the right type of encoder depends on the requirements of the specific application.

Applications

Digital rotary encoders have applications in various industries, from automotive to aerospace, and from robotics to manufacturing. Following are some of their key applications in manufacturing:

Motion control

In motion control systems, encoders provide precise position feedback for accurate control of motors, such as servo motors, to achieve the desired speed and direction of movement. In a CNC machine, for example, encoders provide feedback to the controller, which adjusts the motor speed and position to cut precise shapes and patterns in the material.

Robotics

In robotics, digital rotary encoders provide position feedback and control of robotic arms and joints. Encoders provide accurate feedback on the position and orientation of the robotic arm, which enables precise movement and manipulation of objects. Robot grippers also use encoders to detect the force applied to an object and adjust the grip accordingly.

Industrial automation

Digital rotary encoders play a critical role in industrial automation by providing precise position feedback and control of various mechanical systems. For example, in a conveyor belt system, encoders provide feedback on the speed and position of the belt, which allows for accurate control of the product flow and sorting.

Machine tooling

Digital rotary encoders are used in machine tooling, such as lathes and milling machines, to provide precise position feedback and control of the cutting tool. Encoders enable the cutting tool to move accurately and precisely along the material, resulting in high-quality parts and components.

Benefits of using encoders in manufacturing

Using digital rotary encoders in manufacturing offers several benefits, including:

Improved quality. Encoders provide precise position feedback, which results in improved accuracy and quality of the manufactured parts and components. With encoders, manufacturers can achieve high-quality cuts, precise measurements, and accurate movement of mechanical systems.

Increased efficiency. Digital rotary encoders improve the efficiency of manufacturing processes by providing real-time position feedback and control of mechanical systems. This enables manufacturers to optimize the speed and movement of the systems, resulting in faster production cycles and reduced downtime.

Reduced maintenance costs. Digital rotary encoders are reliable and require minimal maintenance. Unlike contact-based mechanical sensors, optical and magnetic encoders read the disk position without physical contact, which reduces wear and tear and extends their lifespan. This results in reduced maintenance costs and downtime, which increases the overall productivity of the manufacturing process.

Overall, digital rotary encoders are versatile devices for measuring and monitoring rotational movements in numerous applications where precise position or speed control is required.

Using MQTT Protocol for Smarter Automation

In my previous blog post, “Edge Gateways to Support Real-Time Condition Monitoring Data,” I talked about the importance of using an edge gateway to gather the IoT data from sensors in parallel with a PLC. This was because of the large data load and the need to avoid interfering with the existing machine communications. In this post, I want to delve deeper into the topic and explain the process of implementing an edge gateway.

Using the existing Ethernet infrastructure

One way to collect IoT data with an edge gateway is by using the existing Ethernet infrastructure. With most devices already communicating over an industrial Ethernet protocol, an edge gateway can gather the data on the same physical Ethernet port but on a separate software-defined port number associated with the IoT protocol.

Message Queuing Telemetry Transport (MQTT)

One of the most commonly used IoT protocols is Message Queuing Telemetry Transport (MQTT). It is an ISO standard (ISO/IEC 20922) and uses the registered TCP ports 1883 and, for secure encrypted communications, 8883. One reason for its popularity is that it is designed to be lightweight and efficient: the protocol requires minimal code and works well over low-bandwidth connections.

Brokers and clients

The MQTT protocol defines two entities: a broker and a client. The edge gateway typically serves as the message broker, receiving client messages and routing them to the appropriate destination clients. A client is any device that runs an MQTT library and connects to an MQTT broker.

MQTT works on a publisher and subscriber model. Smart IoT devices are set up as publishers, publishing different condition data as topics to an edge gateway. Other clients, such as PCs and data centers, can be set up as subscribers. The edge gateway, serving as the broker, receives all the published data and forwards it only to the subscribers interested in each topic.

One client can publish many different topics as well as be a subscriber to other topics. There can also be many clients subscribing to the same topic, making the architecture flexible and scalable.
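
The broker's topic routing can be sketched in a few lines. This is a deliberately minimal model for illustration; real MQTT adds QoS levels, retained messages, topic wildcards, and TCP transport, and in practice a client library such as Eclipse Paho provides the connect/subscribe/publish calls:

```python
from collections import defaultdict

class ToyBroker:
    """Minimal publish/subscribe routing, illustrating the broker role."""

    def __init__(self):
        self.subscriptions = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a client's interest in a topic.
        self.subscriptions[topic].append(callback)

    def publish(self, topic, payload):
        # Forward the message only to clients subscribed to this topic;
        # messages on topics with no subscribers are simply dropped.
        for callback in self.subscriptions[topic]:
            callback(topic, payload)
```

Because publishers and subscribers only know topics, not each other, adding a new subscriber to an existing topic requires no change to the publishing device, which is what makes the architecture flexible and scalable.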

The edge gateway serving as the broker makes it possible for devices to communicate with each other if the device supports the MQTT protocol. MQTT can connect a wide range of devices, from sensors to actuators on machines to mobile devices and cloud servers. While MQTT isn’t the only way to gather data, it offers a simple and reliable way for customers to start gathering that data with their existing Ethernet infrastructures.

Understanding IP Ratings

Ingress Protection (IP) ratings, developed by the International Electrotechnical Commission (IEC), are a standardized measure for manufacturers to specify and understand the level of protection that an enclosure offers against the intrusion of solid objects and liquids. The rating helps customers judge the suitability of a product for its intended use.

There are various levels of protection provided by IP ratings, and in this post, we’ll be discussing the differences between them.

Protection against solids

The first digit in an IP rating refers to the level of protection against solids, ranging from 0 to 6, with 0 meaning no protection and 6 meaning fully dust tight. For example, a product with a first digit of 4 is protected against solid objects larger than 1 mm in diameter.

Protection against liquids

The second digit in an IP rating refers to the level of protection against liquids, ranging from 0 to 9, with 0 meaning no protection and 9 meaning protection against high-pressure, high-temperature water jets. A product with a second digit of 7, for example, is protected against temporary immersion in water up to 1 meter deep for up to 30 minutes.

It is essential to note that higher IP ratings do not necessarily mean better protection. For instance, a product with an IP rating of 68 provides protection against dust and continuous immersion in water, making it suitable for underwater applications. However, it might not be suitable for areas with high humidity levels because it may not protect against condensation. Two common IP ratings are IP20, typical of control cabinet devices, and IP67, which is common in field devices.
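
A small helper makes the two-digit structure concrete. The descriptions are abbreviated paraphrases of the IEC 60529 definitions, and an "X" in a code means that digit was not tested:

```python
SOLIDS = {
    "0": "no protection",
    "1": "objects > 50 mm",
    "2": "objects > 12.5 mm",
    "3": "objects > 2.5 mm",
    "4": "objects > 1 mm",
    "5": "dust protected",
    "6": "dust tight",
}
LIQUIDS = {
    "0": "no protection",
    "1": "vertically dripping water",
    "2": "dripping water, tilted 15 degrees",
    "3": "spraying water",
    "4": "splashing water",
    "5": "water jets",
    "6": "powerful water jets",
    "7": "temporary immersion (1 m, 30 min)",
    "8": "continuous immersion",
    "9": "high-pressure, high-temperature jets",
}

def describe_ip(rating):
    """Split an IP code like 'IP67' into its two protection levels."""
    first, second = rating.upper().removeprefix("IP")
    return (SOLIDS.get(first, "not rated (X)"),
            LIQUIDS.get(second, "not rated (X)"))
```

For example, `describe_ip("IP20")` reflects a typical control cabinet device, while `describe_ip("IP67")` reflects a typical field device.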

Understanding the difference in IP ratings is essential for selecting the right product for its intended application. But it’s also important to follow appropriate guidelines to maintain a given device’s rating. This may include following specific mounting instructions, selecting the right connectors/cables, adhering to torque ratings, and more. One common example where we might see IP rating being negated would be a failure to use port plugs on unused ports on IO-Link master blocks.

In conclusion, the IP rating system is an important standard used to specify the level of protection against solids and liquids of a device. The first digit refers to the level of protection against solid objects, while the second digit refers to the level of protection against liquids. It is important to note that higher IP ratings do not necessarily mean better protection and understanding the difference between the ratings is crucial for selecting the right product for its intended application.

For a full description of the IEC IP ratings, including their testing conditions, please refer to IEC 60529.