Improving Conveyor Performance: The Value of Condition Monitoring

In the realm of industrial operations, even the smallest improvements can yield substantial gains. This rings especially true when considering the often overlooked yet indispensable component of many manufacturing operations: conveyors. While these mechanical workhorses silently go about their tasks, incorporating a touch of innovation in the form of condition monitoring sensors can yield significant payoffs. In this blog, I delve into why integrating condition monitoring into your conveyance systems isn’t just a good idea – it’s a savvy investment in efficiency, reliability, and overall peace of mind.

The case for condition monitoring

Here are five compelling reasons why condition monitoring is an essential addition to your conveyor systems:

    1. Reducing unplanned downtime: The ability to keep an eye on the health and performance of critical conveyor components empowers you to detect potential issues before they snowball into disruptive downtime events. Proactive, even predictive, maintenance becomes the name of the game, minimizing the risk of unscheduled stoppages.
    2. Enhancing reliability: Early issue identification leads to fewer instances of system failures and breakdowns. By fostering a proactive maintenance approach, condition monitoring bolsters the overall reliability of your conveyor system, offering a buffer against unexpected interruptions.
    3. Elevating safety standards: Safety should never be an afterthought. Condition monitoring serves as a vigilant sentinel, flagging potential safety concerns within the conveyor system. For instance, it can detect abnormal vibrations in motors or gearboxes, thereby averting catastrophic failures that might jeopardize personnel safety or equipment integrity (a minimal monitoring sketch follows this list).
    4. Trimming maintenance costs: The ability to time maintenance activities optimally, thanks to condition monitoring, can translate into substantial cost savings. Instead of waiting for a failure to necessitate urgent fixes, you can schedule maintenance when it’s most cost-effective, avoiding pricier last-minute solutions and expedited freight expenses.
    5. Prolonging equipment lifespan: Condition monitoring, by tracking the condition of vital components, helps you pinpoint or predict exactly when maintenance or repairs are due. This precision not only extends the lifespan of your equipment but also curbs the need for costly replacements or frantic damage control.
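To make the early-warning idea in point 3 concrete, here is a minimal sketch of threshold-based vibration monitoring. It is illustrative only – the limits, sample values, and function names are assumptions, not a vendor API or a recommended alarm scheme.

```python
# Minimal sketch of threshold-based condition monitoring. The limits below
# are hypothetical; real thresholds come from standards and machine history.
from statistics import mean

WARN_RMS_MM_S = 4.5   # hypothetical vibration warning limit, mm/s RMS
ALARM_RMS_MM_S = 7.1  # hypothetical alarm limit, mm/s RMS

def assess(samples_mm_s):
    """Return a maintenance flag from a window of vibration RMS samples."""
    level = mean(samples_mm_s)
    if level >= ALARM_RMS_MM_S:
        return "alarm: schedule maintenance now"
    if level >= WARN_RMS_MM_S:
        return "warning: plan maintenance at the next scheduled stop"
    return "ok"

print(assess([3.9, 4.1, 4.0]))  # ok
print(assess([5.0, 5.2, 4.9]))  # warning -- act before it becomes downtime
```

The point of the sketch is the workflow, not the numbers: the warning tier is what turns reactive repairs into planned maintenance.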

Adding condition monitoring to your conveyor systems isn’t just about immediate gains in efficiency and reliability – it’s an investment that can significantly reduce maintenance costs and stretch the lifespan of your equipment. Why wait? Make the investment today and unlock the financial benefits and peace of mind that come with the utilization of condition monitoring across your conveyor systems.

Click here to learn more about condition monitoring.

Boosting Sensor Resilience in Welding With Self-Bunkering Inductive Proximity Sensors

A welding cell will press the limits of any sensor placed in its proximity. Weld spatter, magnetic fields, extreme temperatures, and impact damage are common hazards in a harsh welding environment, and when sensors fail in these conditions, it can significantly disrupt production uptime. To prevent such disruptions, manufacturers explore more robust sensor mounting solutions, such as proximity mounts, bunker blocks, and other protective devices to shield sensors from these harsh conditions. The self-bunkering inductive proximity sensor plays a key role in alleviating these issues, especially where limited space rules out other protective accessories.

Weld spatter and magnetic field resistant

In many welding applications, the substantial currents involved can generate strong magnetic fields, making a welding cell vulnerable to interference. This interference can cause a basic proximity sensor to false-trigger even when no part is present. The self-bunkering proximity sensor is designed to resist magnetic fields, allowing it to work much closer to the welding surface than a typical inductive sensor. Additionally, the sensor comes with a polytetrafluoroethylene (PTFE) weld coating, allowing spatter buildup to be easily removed with abrasive tools like a wire brush.

Guard against heavy impacts

The self-bunkering inductive proximity sensor is also built for rugged environments. It features a thick, strong one-piece connector body and an extra-thick brass housing to buffer the internal electronics from external impacts and conducted heat. It also includes a deflection ring and a non-brittle, ferrite-free coil carrier to protect the sensor face from direct impacts, disperse shock, and safeguard internal sensing components. The wide-radius corners offer stress relief at the major junction points of the connector body and housing.

Withstand extreme temperatures

With a ceramic PTFE-coated face plate, the sensor can resist weld spatter burn-through at up to 2200°F from the front. The rest of the body, coated with PTFE and paired with the extra-thick brass housing, protects the sensor up to 300°F. With proper maintenance, the sensor’s lifetime should therefore be considerably longer than that of a standard inductive sensor.

Don’t replace, defend

The core components of a proximity sensor can be destroyed by any of the three critical stresses – conducted heat, impact, or spatter – whether they occur alone or in combination. To prevent this, the product incorporates a collection of design measures intended to create a virtually impenetrable shield around the internal critical components.

In summary, the self-bunkering inductive proximity sensor is a key solution to the challenges that harsh welding environments pose to sensors – challenges that ultimately disrupt production. Its resistance to magnetic fields and its ability to withstand heavy impacts and extreme temperatures, especially where space is limited, protect the critical sensor components and extend sensor lifespan.

Exploring the Significance of CIP Safety in Automation Protocols

CIP Safety is a communication protocol used in industrial automation to ensure the safety of machinery, equipment, and processes. It is a part of the larger family of protocols known as the Common Industrial Protocol (CIP) developed by ODVA, a global trade and standard development organization.

The primary goal of CIP Safety is to enable the safe exchange of data between safety devices, controllers, and other components within an industrial automation system. This protocol allows for real-time communication of safety-related information, such as emergency stops, safety interlocks, and safety status, between various devices in a manufacturing or processing environment.

Key features and concepts of CIP Safety

    • Safety communication: CIP Safety is designed to provide fast and reliable communication for safety-critical information. It ensures that safety messages are transmitted and received without delay so that safety actions are executed promptly.
    • Deterministic behavior: Determinism is a crucial aspect of safety systems, as it ensures that safety messages are transmitted predictably and with low latency. This helps reduce the risk of accidents and ensure the proper functioning of safety mechanisms (see the watchdog sketch after this list).
    • Redundancy and fault tolerance: CIP Safety supports redundancy and fault tolerance, allowing for the implementation of systems that can continue operating safely even in the presence of hardware or communication failures.
    • Safe states and actions: The protocol defines various safe states that a system can enter in response to safety-related events. It also specifies safe actions that controllers and devices can take to prevent or mitigate hazards.
    • Device integration: CIP Safety can be integrated with other CIP protocols, such as EtherNet/IP, enabling seamless integration of safety and standard communication on the same network.
    • Certification: Devices and systems that implement CIP Safety are often required to undergo certification processes to ensure their compliance with safety standards and their ability to perform in critical environments.
    • Flexibility: CIP Safety is designed to accommodate various levels of safety requirements, from simple safety tasks to more complex and sophisticated safety functions. This flexibility makes it suitable for a wide range of industrial applications.
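As a rough illustration of the deterministic-behavior point above, here is a conceptual Python sketch of the time-expectation/watchdog idea that safety protocols such as CIP Safety rely on. This is not the real wire format or API – every class, field, and value here is invented – but it shows the fail-safe logic: a missing or stale safety message is treated exactly like a demand for the safe state.

```python
# Conceptual sketch of a safety "time expectation" watchdog. Names and the
# 10 ms figure are invented for illustration, not taken from the CIP spec.
TIME_EXPECTATION_S = 0.010  # hypothetical network time expectation, 10 ms

class SafetyConsumer:
    def __init__(self):
        self.last_rx = None  # timestamp of the last valid safety message

    def on_message(self, timestamp_s, estop_ok):
        self.last_rx = timestamp_s
        if not estop_ok:
            self.enter_safe_state("e-stop asserted")

    def watchdog(self, now_s):
        # No fresh safety message within the time expectation? Assume the
        # worst and fail to the safe state.
        if self.last_rx is None or now_s - self.last_rx > TIME_EXPECTATION_S:
            self.enter_safe_state("safety connection timed out")

    def enter_safe_state(self, reason):
        print(f"SAFE STATE: {reason}")  # e.g., drop outputs, stop motion

consumer = SafetyConsumer()
consumer.on_message(timestamp_s=0.000, estop_ok=True)
consumer.watchdog(now_s=0.005)  # fresh message: no action
consumer.watchdog(now_s=0.050)  # stale message: enters the safe state
```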

CIP Safety has been widely adopted in industries such as manufacturing, automotive, energy, and more, where ensuring the safety of personnel, equipment, and processes is of paramount importance. It allows for the integration of safety systems into the overall control architecture, leading to more efficient and streamlined safety management within industrial environments.

Examples of connections with an external CIP Safety Block

Learn more at https://www.balluff.com/en-us/products/areas/A0007/groups/G0701/products/F07103?page=1&perPage=10&availableFirst=true

Exploring Industrial Cameras: A Guide for Engineers in Life Sciences, Semiconductors, and Automotive Fields 

In the bustling landscape of industrial camera offerings, discerning the parameters that genuinely define a camera’s worth can be a daunting task. This article serves as a compass, steering you through six fundamental properties that should illuminate your path when selecting an industrial camera. While the first three aspects play a pivotal role in aligning with your camera needs, the latter three hold significance if your requirements lean towards unique settings, external conditions, or challenging light environments.

    1. Resolution: unveiling the finer details. Imagine your camera as a painter’s canvas and resolution as the number of dots that bring your masterpiece to life. In simple terms, resolution is the number of pixels forming the image, determining its level of detail. For instance, a camera labeled 4096 x 3008 pixels amounts to a pixel symphony of around 12.3 million, or 12.3 megapixels. Yet don’t be swayed solely by megapixels. Focus on the pixel count on both the horizontal (X) and vertical (Y) axes. A 12-megapixel camera might sport configurations like 4000 x 3000 pixels, 5000 x 2400 pixels, or 3464 x 3464 pixels, each tailor-made for your observation intent and image format.
    2. Frame rate: capturing motion in real-time. The frame rate, akin to a movie’s frame sequence, dictates how swiftly your camera captures moving scenes. Figures like 46.5/74.0/135 denote the number of images the camera can take per second in different modes. Burst mode captures a rapid series of images, while max. streaming ensures a consistent flow despite interface limitations. The elegance of binning also plays a role, making it an adept solution for scenarios craving clarity in dim light and minimal noise (see the binning sketch at the end of this section).
    3. Connectivity: bridging the camera to your system. The camera’s connectivity interfaces, such as USB3 and GigE, shape its rapport within your system.

USB3 Interface: Like a speedy expressway for data, USB3 suits real-time applications like quality control and automation. Its straightforward nature adapts to diverse setups.

GigE Interface: This Ethernet-infused interface excels in robust, long-distance connections. Tailored for tasks like remote monitoring and industrial inspection, it basks in Ethernet’s reliability.

Choosing the best fit: USB3 facilitates swift, direct communication, while GigE emerges triumphant in extended cable spans and networking. Your choice hinges on data velocity, distance, and infrastructure compatibility.

    4. Dynamic range: capturing radiance and shadow. Imagine your camera as an artist of light, skillfully capturing both dazzling radiance and somber shadows. Dynamic range defines this ability, representing the breadth of brightness levels the camera can encapsulate. Think of it as a harmony between light and dark. Technical folks may refer to it as the ratio of signal to noise. It’s influenced by the camera’s design and the sensor’s performance. HDR mode is also worth noting, enhancing contrast by dividing the integration time into phases, each independently calibrated for optimal results.
    5. Sensitivity: shining in low-light environments. Your camera’s sensitivity determines its prowess in low-light scenarios. This sensitivity is akin to the ability to see in dimly lit spaces. Some cameras excel at this, providing a lifeline in settings with scarce illumination. Sensitivity’s secret lies in the art of collecting light while taming noise, finding the sweet spot between clear images and environmental challenges.
    6. Noise: orchestrating image purity. In the world of imagery, noise is akin to static in an audio recording—distracting and intrusive. Noise takes multiple forms and can mar image quality:

Read noise: This error appears when converting light to electrical signals. Faster speeds can amplify read noise, affecting image quality. Here, sensor design quality is a decisive factor.

Dark current noise: Thermally generated electrons accumulate in the sensor even without light, and they build up faster as the sensor warms during operation or long exposures. Cooling methods can mitigate this thermal interference.

Patterns/artifacts: Sometimes, images bear unexpected patterns or shapes due to sensor design inconsistencies. Such artifacts disrupt accuracy, especially in low-light conditions. When these noise sources are understood and adeptly managed, CMOS industrial cameras can deliver superior image quality across diverse applications.
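To tie the binning mentioned under frame rate to the noise discussion above, here is a minimal NumPy sketch of 2x2 binning. The scene and noise magnitudes are invented, and real cameras typically bin on-sensor or in the readout chain (often summing rather than averaging), so treat this as the idea only.

```python
# Minimal sketch of 2x2 binning: averaging 2x2 pixel blocks trades
# resolution for lower noise per output pixel.
import numpy as np

rng = np.random.default_rng(0)
signal = np.full((4, 4), 100.0)                  # idealized flat scene
frame = signal + rng.normal(0, 10, size=(4, 4))  # add read-noise-like jitter

# Average each 2x2 block: a 4x4 frame becomes 2x2; averaging 4 samples
# cuts the noise standard deviation roughly in half.
binned = frame.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(f"frame noise ~ {frame.std():.1f}, binned noise ~ {binned.std():.1f}")
```

This is why binning shows up in camera spec sheets as a low-light mode: you give up pixel count to gain signal-to-noise.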

In the realm of industrial cameras, unraveling the threads of resolution, frame rate, connectivity, dynamic range, sensitivity, and noise paints a vivid portrait of informed decision-making. For engineers in life sciences, semiconductors, and automotive domains, this guide stands as a beacon, ushering them toward optimal camera choices that harmonize with their unique demands and aspirations.

Mastering IO-Link: Best Practices for Seamless Industrial Automation Integration

IO-Link is a versatile communication protocol for use in industrial automation to connect sensors and actuators to control systems. Here are some best practices to consider when implementing IO-Link in your automation setup:

Device selection: Choose IO-Link devices that best fit your application’s requirements. Consider factors such as sensing range, accuracy, ruggedness, and compatibility with your IO-Link master and network. Check whether add-on instructions and/or function blocks are available to ease integration.

Network topology: Design a clear and well-organized network topology. Plan the arrangement of IO-Link devices, masters, and other components to minimize cable lengths and optimize communication efficiency. Remember that the maximum distance for an IO-Link device is 20 meters of cable from the IO-Link master.

Standardized cable types: Use standardized IO-Link cables to ensure consistent and reliable connections. High-quality cables can prevent signal degradation and communication issues. Pay careful attention to the needs of the IO-Link device. Some devices require 3, 4, or 5 conductors in the associated cable.

Parameterization and configuration: Take advantage of IO-Link’s ability to remotely configure and parameterize devices. This simplifies setup and makes it possible to change device settings without physically accessing the device. Learn how to take advantage of the IO-Link master’s parameter server functionality.

Centralized diagnostics: Use the diagnostic capabilities of IO-Link devices to monitor health, status, and performance. Centralized diagnostics can help identify issues quickly and enable predictive maintenance. Of the three types of IO-Link data, pay attention to the event data.
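As a sketch of what centralizing event data can look like, here is a hedged Python example. The severity labels follow IO-Link’s notification/warning/error event types, but the class, field names, and example event code are invented for illustration – real event codes come from the device’s IODD file and your master’s documentation.

```python
# Hedged sketch of routing IO-Link *event* data (one of the three IO-Link
# data types) into centralized diagnostics. Structures here are invented.
from dataclasses import dataclass

SEVERITY = {0: "notification", 1: "warning", 2: "error"}

@dataclass
class IoLinkEvent:
    port: int      # IO-Link master port the device is connected to
    code: int      # device-specific event code (look it up in the IODD)
    severity: int  # 0 notification, 1 warning, 2 error

def route(event: IoLinkEvent):
    label = SEVERITY.get(event.severity, "unknown")
    print(f"port {event.port}: {label} event 0x{event.code:04X}")
    if label == "error":
        pass  # e.g., raise an alarm or open a maintenance ticket

route(IoLinkEvent(port=3, code=0x4210, severity=1))  # hypothetical over-temp
```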

Remote monitoring and control: Leverage IO-Link’s bi-directional communication to remotely monitor and adjust devices. This can improve operational efficiency by reducing the need for manual intervention.

Error handling: Implement error handling mechanisms to respond to communication errors or device failures. This could include notifications, alarms, and fallback strategies.

Network segmentation: If you have a large and complex automation setup, consider segmenting your IO-Link network into smaller sections. This can help manage network traffic and improve overall performance.

Training and documentation: Provide training for your team on IO-Link technology, best practices, and troubleshooting techniques. Create documentation that outlines network layouts, device addresses, and configuration details.

Testing and validation: Thoroughly test IO-Link devices and their interactions before deploying them in a production environment. This can help identify potential issues and ensure proper functionality.

Scalability: Plan for future expansion by designing a scalable IO-Link network. Consider how easily you can add new devices or reconfigure existing ones as your automation needs evolve.

Vendor collaboration: Collaborate closely with IO-Link device manufacturers and IO-Link master suppliers. They can provide valuable insights and support during the planning, implementation, and maintenance stages.

By following these best practices, you can optimize the implementation of IO-Link in your industrial automation setup, leading to improved efficiency, reliability, and ease of maintenance.

Click here to learn more about using IO-Link to improve process quality.

From Wired to Wireless: Automation Advancements in Automotive Manufacturing

Looking back, the days of classic muscle cars stand out as a remarkable period in automotive history. Consider how they were built: every component along the assembly line was connected through intricate wiring, which created lasting wiring and maintenance headaches. Advancements in technology led to the introduction of junction blocks, yet this didn’t entirely solve the persistent problems associated with time and connections.

In the mid-2000s, a collaborative effort among multiple companies resulted in the development of the IO-Link protocol. This protocol effectively tackled the wiring and maintenance issues. Since its inception, IO-Link has continued to progress and evolve.

In 2023, we’re taking the next step with a wireless IO-Link master block.

In modern manufacturing, the process involves using independently moving automated guided vehicles (AGVs), also known as skillets. These AGVs are responsible for performing various tasks along the production line before completing their circuit and returning to their initial position. Initially, when these AGVs were integrated, each of these skillets was equipped with a programmable logic controller (PLC), which incurred significant expenses and extended the setup time. Additionally, the scalability of this system was limited by the available IP addresses for the nodes.

Demand for wireless IO-Link blocks

In recent years, there has been a growing demand for wireless IO-Link blocks. Now, a solution to meet this demand is available. The wireless IO-Link block works much like existing wired blocks, but it eliminates the need for a PLC on each skillet, simplifying wiring and using existing Wi-Fi infrastructure.

Imagine a conveyor scenario where numerous AGVs follow a designated path, each with a hub attached. The setup would look something like this: up to 40 hubs communicating simultaneously with a central master. Each hub has the capacity to accommodate up to eight connected devices, resulting in a total of 320 distinct IO points managed by a single IO-Link master.

Communication among these blocks employs a protocol akin to that of a cell phone. As an AGV transitions from one master hub to another, it continues to transmit its data. Within each hub, an identity parameter not only designates the specific hub but also identifies the associated skillets and the location within the manufacturing plant.
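A tiny sketch of the capacity and identity ideas above, using the figures from this post (up to 40 hubs per master, up to 8 devices per hub). The dictionary fields are invented for illustration; real identity parameters live in the hub’s configuration.

```python
# Illustrative model of the scenario described above: each hub carries an
# identity tying it to a skillet (AGV) and a plant location. Field names
# are invented.
hubs = {
    hub_id: {"skillet": f"AGV-{hub_id:02d}", "location": "trim line 1"}
    for hub_id in range(1, 41)      # up to 40 hubs per wireless master
}
DEVICES_PER_HUB = 8                 # up to 8 devices on each hub

print(len(hubs) * DEVICES_PER_HUB)  # 320 IO points under one IO-Link master
print(hubs[7]["skillet"])           # identity travels with the hub: AGV-07
```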

Transitioning to a wireless system leads to a substantial reduction in your overall cost of ownership. This includes decreased setup times, simplified troubleshooting, lower maintenance efforts, and a reduced need for spare parts.

We are in an exciting time of technological advancement. Make sure you are moving alongside us!

Comparing IO-Link and Modbus Protocols in Industrial Automation

In the realm of industrial automation, the seamless exchange of data between sensors, actuators, and control systems is critical for optimizing performance, increasing efficiency, and enabling advanced functionalities. Two widely used communication protocols, IO-Link and Modbus, have emerged to facilitate this data exchange. In this blog, I’ll analyze the characteristics, strengths, and weaknesses of both protocols to help you choose the right communication standard for your industrial application.

IO-Link: transforming industrial communication for advanced applications

IO-Link is a relatively new communication protocol designed to provide seamless communication between sensors and actuators and the control system. It operates on a point-to-point communication model, meaning each device on the network communicates directly with the IO-Link master or gateway. IO-Link offers features like bidirectional process data exchange, parameterization, device diagnostics, and plug-and-play functionality, making it an ideal choice for advanced industrial applications.

IO-Link key features:

    • Bidirectional communication: IO-Link allows data exchange not only from the IO-Link master to the devices but also from devices to the IO-Link master, enabling real-time diagnostics and enhanced control.
    • Device parameterization: IO-Link supports remote device configuration, reducing downtime during device replacement or maintenance.
    • Diagnostics: The protocol provides extensive diagnostic capabilities, allowing for proactive maintenance and minimizing production interruptions, including condition monitoring.
    • Flexibility: IO-Link supports a plethora of smart devices – digital and analog devices, signal converters, and condition monitoring sensors – providing compatibility with a wide range of sensors and actuators, and it is manufacturer-independent.

Modbus: a time-tested protocol powering industrial communication

Modbus is a widely adopted communication protocol introduced in the late 1970s. Initially designed for serial communication, it has evolved and now includes TCP/IP-based versions for Ethernet networks. Modbus operates on a master-slave architecture, where a single master device communicates with multiple slave devices. Due to its simplicity and ease of implementation, Modbus remains popular in many industrial applications.

Modbus key features:

    • Simplicity: Modbus is a straightforward protocol, making it easy to implement and troubleshoot, especially in smaller networks.
    • Versatility: Modbus can be used over various physical communication media, including serial (RS-232/RS-485) and Ethernet (TCP/IP).
    • Widely supported: A vast array of devices and systems support Modbus due to its long-standing presence in the industry.
    • Low overhead: Modbus has minimal message overhead, making it suitable for simple and time-critical applications (see the frame sketch below).
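To make the low-overhead point concrete, here is a short sketch of a complete Modbus RTU read request, built with the standard CRC-16/MODBUS algorithm. The slave address, register address, and count are an ordinary example, not tied to any particular device.

```python
# A complete Modbus RTU read request is just an address, a function code,
# a few data bytes, and a CRC-16 -- eight bytes total in this example.
def crc16_modbus(data: bytes) -> bytes:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")  # Modbus sends the CRC low byte first

# Read 2 holding registers (function 0x03) starting at address 0, slave 1.
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
frame = pdu + crc16_modbus(pdu)
print(frame.hex(" "))  # the entire request on the wire
```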

Now, let’s compare IO-Link and Modbus based on several crucial factors:

    • Speed and data capacity:
   – IO-Link offers higher data transfer rates, making it suitable for applications requiring real-time data exchange and high precision.
   – Modbus operates at lower speeds, limiting its suitability for applications with demanding data transfer requirements.
    • Complexity and configuration:
   – IO-Link’s advanced features may require more complex configuration and setup, but its bidirectional communication, device parameterization capabilities, and remote diagnostics make it more versatile.
   – Modbus’ simplicity makes it easier to configure and deploy, but it lacks the bidirectional communication and parameterization features found in IO-Link.
    • Device compatibility:
   – IO-Link’s compatibility with both digital and analog smart devices, along with its manufacturer independence, ensures a much broader range of sensor and actuator support.
   – Modbus is compatible with various devices, but its support for analog devices can be limited in comparison to IO-Link.
    • Diagnostics and maintenance:
   – IO-Link’s comprehensive diagnostics facilitate proactive maintenance and rapid issue resolution.
   – Modbus provides basic diagnostics, but they may not be as extensive or real-time as those offered by IO-Link.
    • Industry adoption:
   – IO-Link adoption is growing in industrial automation, especially in applications that demand high performance, advanced capabilities, and IIoT support.
   – Modbus has been widely adopted over the years and remains prevalent, especially in legacy systems or simpler applications.

Both IO-Link and Modbus are valuable communication protocols in industrial automation, each with its strengths and weaknesses. IO-Link excels in high-performance applications that demand real-time data exchange, bidirectional communication, and advanced diagnostics. On the other hand, Modbus remains a viable option for simpler systems where ease of implementation and broad device support are essential.

The choice between IO-Link and Modbus depends on the specific requirements of your industrial application, the level of complexity needed, and the devices you plan to use. Understanding the capabilities of each protocol will empower you to make an informed decision, ensuring your communication system optimally supports your automation needs.

Enhancing Manufacturing Efficiency: OEE Measurement Through Sensors

Optimizing operational efficiency in manufacturing is crucial for businesses seeking to stay competitive. One powerful tool for measuring and enhancing manufacturing performance is overall equipment effectiveness (OEE). By leveraging sensor technology, manufacturers can gain valuable insights into their production processes, enabling them to identify areas for improvement, reduce downtime, and boost overall productivity.

What is OEE?

OEE is a metric for measuring the efficiency and productivity of a manufacturing process, built from three key factors: availability, performance, and quality. Availability measures the percentage of planned production time that equipment is actually available for production, while performance measures the speed at which the equipment runs relative to its ideal speed. Quality measures the proportion of products that meet the required quality standards. Combining these factors, OEE provides a comprehensive view of how well a manufacturing process performs and can help determine the need for improvements.
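Here is a small worked example of the OEE calculation just described. All of the shift numbers are invented for illustration; the formulas are the standard availability × performance × quality breakdown.

```python
# Worked OEE example; shift numbers are made up for illustration.
planned_time_min = 480   # one shift of planned production time
downtime_min = 48        # stops logged by sensors
ideal_cycle_s = 2.0      # best demonstrated cycle time per part
total_count = 11_000     # parts produced
good_count = 10_670      # parts passing quality checks

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min                  # 0.90
performance = (ideal_cycle_s * total_count) / (run_time_min * 60)
quality = good_count / total_count                              # 0.97

oee = availability * performance * quality
print(f"A={availability:.1%}  P={performance:.1%}  Q={quality:.1%}  OEE={oee:.1%}")
```

Each sensor metric in the list below feeds one of these three factors: downtime logs drive availability, cycle-time tracking drives performance, and inspection results drive quality.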

Sensors: the building blocks of OEE measurement

Sensors play an important role in helping manufacturers determine the effective use of equipment. Following are some key metrics that sensors can track:

    • Machine health monitoring: Sensors can continuously monitor the condition of machines, detecting anomalies and potential breakdowns before they escalate. Predictive maintenance, facilitated by sensor data, helps reduce unplanned downtime, increasing equipment availability.
    • Production tracking: Sensors can track production rates and cycle times, comparing them to target rates. This data empowers businesses to assess equipment performance and identify bottlenecks that hinder optimal efficiency.
    • Quality control: Implementing sensors for real-time quality inspection ensures the prompt identification and removal of defective products from the production line, enhancing the overall quality factor in the OEE calculation.
    • Downtime analysis: Sensors can log and categorize downtime events, providing valuable insights into the root causes of inefficiencies. With this knowledge, manufacturers can implement targeted improvements to reduce downtime and enhance availability.
    • Energy efficiency: Some advanced sensors can monitor energy consumption, allowing businesses to optimize energy usage and contribute to sustainability efforts.

Integrating sensors and OEE measurement

The integration of sensors into the manufacturing process might seem daunting, but it offers numerous benefits that far outweigh the initial investment:

    • Real-time insights: Sensors provide real-time data, enabling manufacturers to monitor performance, quality, and availability metrics continuously. This empowers businesses to take immediate action when issues arise, minimizing the impact on production.
    • Data-driven decision-making: By analyzing sensor-generated data, manufacturers can make informed decisions about process improvements, equipment upgrades, and workforce optimization to enhance OEE.
    • Continuous improvement: OEE measurement with sensors fosters a culture of continuous improvement within the organization. Regularly reviewing OEE data and setting improvement goals drives teams to work collaboratively towards boosting overall efficiency.
    • Increased competitiveness: Manufacturers leveraging sensor-driven OEE measurement gain a competitive edge by optimizing productivity, minimizing downtime, and producing high-quality products consistently.

Measuring OEE using sensors is crucial to achieving operational excellence in modern manufacturing. Using real-time sensor data, manufacturers can identify areas for improvement, reduce waste, and boost productivity. Integrating OEE and sensor technology streamlines production processes and encourages continuous improvement. This approach helps manufacturers stay ahead in the ever-changing industrial landscape.

Read the Automation Insights blog Improving Overall Equipment Effectiveness to learn about the focus areas for winning the biggest improvements in OEE.

Overcoming Challenges in Metal Detection: The Power of Factor 1 Sensors

Standard inductive proximity sensors are used across the automation industry for metal detection applications and are generally reliable in these operations. But issues arise when switching from steel to other metals like copper, brass, or aluminum. A standard inductive sensor may encounter problems in such scenarios. Due to the reduction factor, the standard inductive sensor detects these different metals at different distances. If you had a sensor mounted and set up to sense a steel material but switched to copper, for example, the copper material might be out of the sensor’s range due to this difference in reduction factor, resulting in a missed reading. Factor 1 sensors were created to eliminate this problem.

Reduction factor

The reduction factor is the root cause of variable distance readings with a standard inductive sensor. But what exactly is it? The standard operating range of an inductive proximity sensor is determined by its response to a one-millimeter-thick square piece of mild steel. Other metals like copper and aluminum deviate from this standard range due to differences in material properties. For example, copper has a reduction factor of around 0.4, so it can only be detected at 0.4 times the standard operating range of an inductive proximity sensor.

We can save for later the details of why this occurs, but the key point here is that different material properties cause different reduction factors, which result in different switching distances (the sketch below illustrates typical values). Factor 1 sensors take all these variable reduction factors and equalize them to a standard operating distance. This means that you can read anything from copper to steel at the same range, reducing the possibility of missed readings and eliminating the need to reposition sensors whenever a material change occurs.
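A minimal sketch of the effect, using typical published reduction factors. Actual factors vary by sensor model and target, so treat these numbers as illustrative and check your sensor’s datasheet.

```python
# Effective switching distance = rated range (on mild steel) x reduction
# factor. Factors below are typical textbook values, not datasheet values.
RATED_RANGE_MM = 8.0  # hypothetical nominal range (Sn) on mild steel

typical_reduction = {"mild steel": 1.0, "stainless": 0.7,
                     "brass": 0.5, "aluminum": 0.45, "copper": 0.4}

for metal, factor in typical_reduction.items():
    print(f"{metal:>10}: detected at {RATED_RANGE_MM * factor:.1f} mm")

# A factor 1 sensor holds the factor at ~1.0 for all of these metals, so a
# target that swaps from steel to copper stays inside the sensing range.
```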

When to use Factor 1 sensors

Factor 1 sensors are well-suited for any process that involves different metals. Whether it is automated welding or a packaging conveyor, the factor 1 sensor will keep the material switching ranges uniform. But why is this such a big advantage?

Think about the time spent having to adjust sensor distances. Not only is the task annoying, it also takes up time. Having factor 1 sensors in place will increase the uptime of these processes and eliminate the need for sensor adjustments.

One last benefit to note about factor 1 sensors is that they are inherently weld field immune. The internal construction of the sensor prevents it from being affected by the electromagnetic field generated during welding. This additional immunity allows the sensor to survive in these welding conditions where a typical sensor might fail if it comes in proximity to the weld field.

In the end, you know your application best, but if any of the above benefits resonate with you, it’s time to start thinking about factor 1.

Capacitive Sensors – the One Technology That Can Sense It All?

I choose capacitive sensors every day to solve application challenges in the life science and semiconductor industries. Capacitive sensors in life science reliably detect liquid levels of reagents, buffers, and all manner of biological substances. In the semiconductor industry, capacitive sensors are in wide use in “wet” processes, such as monitoring liquids in etching and deposition tools.

In standard industrial applications, capacitive sensors detect plastics and liquids, and although they can also detect just about anything else, there are better alternatives keeping them from becoming the “one sensor to sense it all.” When sensing metals, for instance, an inductive sensor is the better choice (for cost, safety, and other reasons).

Furthermore, special patented capacitive sensors exist for stubborn liquids. These sensors can ignore foam and/or the presence of material coating the inner walls of containers, which can lead to false triggering with standard sensors. These smart capacitive sensors use the slight conductance of the liquid to cleverly provide a more precise and reliable liquid level reading.

So, although capacitive sensors can sense just about anything, they are mostly relegated to sensing non-metallic objects and liquids.

How capacitive sensors work

Contrary to the common belief that capacitive sensors work based on density to detect a target, they operate on a different principle. While it may seem logical that targets being denser than air would be the basis for detection, understanding the actual working mechanism of capacitive sensors might save us some application grief.

Capacitive sensors create an electrostatic field between two conductive plates, similar to a capacitor, but rather than the plates opposing each other as in a capacitor, a capacitive sensor has plates side by side. In either case, the mathematical formulas are the same, with only the area of the plates, distance, number of plates, and the εr of the material as the variables.
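For the formula-minded, here is a quick sketch of the parallel-plate relation the paragraph refers to, C = ε0 · εr · A / d. The plate geometry is arbitrary, chosen only to show how strongly εr (defined just below) drives the result.

```python
# Parallel-plate capacitance: C = e0 * er * A / d. Geometry is arbitrary;
# only the relative permittivity (er) comparison matters here.
EPSILON_0 = 8.854e-12  # F/m, permittivity of free space

def capacitance(er, area_m2, gap_m):
    return EPSILON_0 * er * area_m2 / gap_m

area, gap = 1e-4, 1e-3                 # 1 cm^2 plates, 1 mm apart
c_air = capacitance(1.00059, area, gap)
c_water = capacitance(80.0, area, gap)  # water's er is roughly 80

print(f"air:   {c_air * 1e12:.2f} pF")
print(f"water: {c_water * 1e12:.0f} pF")  # ~80x higher -- easy to detect
```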

What is εr?

εr is the relative permittivity of the material to be sensed, also known as the dielectric constant. When speaking of relative permittivity or dielectrics, we are speaking of materials that do not readily conduct electricity – materials called insulators. The dielectric constant (K) is the ratio of a substance’s permittivity to that of a vacuum, where K = 1. With capacitive sensors, we typically use air as the starting point, with K = 1.00059, which is close enough for our purposes. Capacitive sensors trigger an output when an object with a higher K than air is sensed in the electrostatic field. But first, the sensor must interact with the target material, which is where K becomes important.

Relative permittivity is a measure of how easy or difficult it is to polarize a substance (expressed as a ratio to a vacuum, which equals 1). Polarizing is accomplished by placing the material in an electrostatic field, which causes its molecules to rotate and line up with the field. The easier the material lines up, the higher the K value.

So, a capacitive sensor works by polarizing the target material, which in turn creates a higher capacitance in the sensor’s circuitry. Internal to the sensor are an oscillator (to generate the electrostatic field), comparators, op-amps, and similar components, which determine whether the internal capacitance of the circuitry has changed enough to trigger an output.

Why is water so easy to polarize? Because it’s already a “polar” molecule.

Each water molecule is already a tiny electric dipole, with a slight positive charge on one end and a slight negative charge on the other – a result of its bent shape and of oxygen pulling electrons away from hydrogen (the same polarity that gives rise to hydrogen bonding). When placed in an electrostatic field, the water molecules easily align with the sensor’s plates: the positive side of each molecule points toward the negative plate of the sensor, and the negative side points toward the positive plate.

In the end, the density explanation doesn’t hold water: glass is much denser than water, yet water – because it polarizes so easily – is far easier for a capacitive sensor to recognize. That’s not to say a capacitive sensor cannot sense glass, because it can sense just about any material. But with such a difference in dielectric constants between water and glass, the sensor gain (trimpot or teach wire) can be adjusted to reduce the sensor’s sensitivity, ignore the glass or plastic container, and sense only the water-based media inside. Make sense?

Capacitive sensors work by:

    1. Polarizing the target media as it enters the sensor’s electrostatic field
    2. Measuring the internal increase in capacitance due to the polarized media
    3. Creating an output once the set threshold of internal capacitance is exceeded
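Putting the three steps and the gain adjustment together, here is a deliberately simplified model. Real sensors measure a capacitance change rather than K directly, and the dielectric constants below are typical published values, so read this as the decision logic only.

```python
# Simplified model of steps 1-3 plus the gain adjustment: set the trigger
# threshold above the container's K but below the liquid's K. Constants
# are typical published dielectric values.
K = {"air": 1.00059, "glass": 5.0, "water": 80.0}

K_THRESHOLD = 10.0  # "gain" set so the empty container never triggers

def output(materials_in_field):
    # Crude simplification: the highest-K material in the field dominates
    # the capacitance change seen by the sensor.
    k_effective = max(K[m] for m in materials_in_field)
    return k_effective > K_THRESHOLD

print(output(["air"]))             # False -- nothing present
print(output(["glass"]))           # False -- empty container ignored
print(output(["glass", "water"]))  # True  -- liquid inside detected
```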

So next time you’re looking to sense an object or liquid, take a look at a table of dielectrics, and consider a capacitive sensor to do the job.