Tackling the Most Demanding Applications With Precision Sensors

Standard industrial sensors can solve a lot of automation challenges. Photoelectric, capacitive, and inductive technologies detect presence, distances, shapes, colors, thicknesses, and more. To satisfy these general applications, there are a few reputable manufacturers in the market that design and produce such products. In many instances, it is possible to interchange them from manufacturer to manufacturer, due to similar mounting patterns, technical specifications, connectors, and even common pin assignments.

But some applications require more precision than standard sensors can deliver. Some examples include:

    • The target may be too small or made of a material that is difficult to detect
    • The target may move very slowly, or very quickly
    • The target may have a minimal displacement, as in valve feedback
    • The sensor must have low mass, for high-acceleration applications
    • The sensor location has severe space constraints or material constraints

Applications that must detect particles that can’t be seen with the naked eye, or something as small as sensing the thin edge of a silicon wafer or the edge of a clear glass microscope slide, require sensors with exceptional precision.

Many precision sensing applications require a custom-designed sensor to meet the customer’s expectations. These expectations typically involve a quality sensor with robust attributes, likely coupled with difficult design parameters, such as high switch-point repeatability, exceptional temperature stability, or exotic materials.

What constitutes a precision sensing application? Let’s take a look.

Approximately 70% of all medical decisions are based on lab results. Our doctors are making decisions about our health based on these test outcomes. Therefore, accurate, trustworthy results, performed quickly, are priorities. Many tests rely on pipetting, the aspirating and dispensing of fluids – sometimes at a microscale level – from one place to another. Using a manual pipette is a time-consuming, labor-intensive process. Automating this procedure reduces contamination and eliminates human errors.

Satisfying the requirements of an application such as this calls for a custom-manufactured LED light source, with a wavelength chosen to best interact with the fluids and an extremely small, concentrated light emission that approaches laser-like properties (yet without the expense and power requirements of a laser). This light source verifies pipette presence and dispensing levels, with a quality check of the fluids dispensed down to the nanoliter scale.

So, the next time you face an application challenge that cannot be tackled with a standard sensor, consider a higher precision sensor and rest assured you will get the reliability you demand.

Why Choose an IO-Link Ecosystem for Your Next Automation Project?

By now we’ve all heard of IO-Link, the device-level communication protocol that seems magical. Often referred to as the “USB of industrial automation,” IO-Link is a universal, open, bi-directional communication technology that enables plug-and-play device replacement, dynamic device configuration, centralized device management, remote parameter setting, and device-level diagnostics, all over existing sensor cabling and as part of an IEC standard accepted worldwide.

But what makes IO-Link magical?

If the list above doesn’t convince you to consider using IO-Link on your next automation project, let me tell you more about the things that matter beyond its function as a communications protocol.

Even though these benefits are very nice, none of them mean anything if the devices connected to the network don’t provide meaningful, relevant, and accurate data for your application.

The evolution of IO-Link

IO-Link devices, also known as “smart devices,” have evolved significantly over the years. At first they were simple and basic, providing data such as the status of inputs and outputs and perhaps allowing a few basic parameters, such as whether a port acts as an input or an output, to be configured digitally over IO-Link. Next came functions that improved device diagnostics and troubleshooting. Multi-functionality followed: a single device, under one part number, that can be configured for multiple modes of operation.
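
As a rough illustration, here is a minimal sketch of what reading and writing device parameters over IO-Link can look like from a controller or edge gateway. The client class and its methods are assumptions standing in for whatever interface a real master exposes (vendor SDK, REST API, or PLC function blocks); only the identification indexes 16 and 18 come from the IO-Link specification, and the device-specific index is a placeholder.

```python
# Hypothetical sketch: reading and writing IO-Link device parameters (ISDUs)
# through an imagined master client. The class and method signatures are
# illustrative only; real masters expose this through their own SDK, REST
# API, or the PLC's function blocks.

class IOLinkMasterClient:
    """Stand-in for a real IO-Link master client (assumed API)."""

    def __init__(self, host: str):
        self.host = host

    def read_isdu(self, port: int, index: int, subindex: int = 0) -> bytes:
        # A real client would send an ISDU read request to the device on
        # this master port; here we just return a placeholder value.
        return b"<device data>"

    def write_isdu(self, port: int, index: int, subindex: int, data: bytes) -> None:
        # A real client would send an ISDU write request to the device.
        print(f"write port={port} index={hex(index)} sub={subindex} data={data!r}")


master = IOLinkMasterClient("192.168.1.50")

# Identification parameters defined by the IO-Link specification:
# index 16 = vendor name, index 18 = product name.
vendor_name = master.read_isdu(port=1, index=16)
product_name = master.read_isdu(port=1, index=18)

# Device-specific parameters (switching mode, output logic, etc.) live at
# indexes documented in the device's IODD; 0x0100/1 here is a placeholder.
master.write_isdu(port=1, index=0x0100, subindex=1, data=b"\x01")
```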

Nothing, however, affected the development of smart devices as much as the introduction of IIoT (Industrial Internet of Things) and the demand for more real-time information about the status of your machine, production line, and production plant starting at a device level. This demand drove the development of smart devices with added features and benefits that are outside of their primary functions.

Condition monitoring

IO-Link supplies both sensor/actuator details and secure information

One of the most valuable added features, for example, is condition monitoring. Information such as vibration, humidity, pressure, voltage and current load, and inclination – in addition to device primary function data – is invaluable to determine the health of your machine, thus the health of your production line or plant.
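
As a simple illustration, here is a short sketch of how condition-monitoring values like these might be checked against alarm limits once they have been read from the device. The readings, keys, and thresholds are made up for the example.

```python
# Illustrative sketch only: evaluating condition-monitoring values from a
# smart IO-Link device against simple alarm limits. The values and limits
# below are assumptions for the example, not vendor defaults.

# Secondary process data a condition-monitoring device might report
# alongside its primary measurement.
reading = {
    "vibration_rms_mm_s": 3.8,   # RMS vibration velocity
    "humidity_pct": 41.0,        # relative humidity
    "temperature_c": 58.5,       # internal temperature
    "supply_voltage_v": 23.6,    # supply voltage at the device
}

# Example limits; in practice these come from the machine builder, the
# device documentation, or standards such as ISO 10816 for vibration.
limits = {
    "vibration_rms_mm_s": 4.5,
    "humidity_pct": 80.0,
    "temperature_c": 70.0,
}

warnings = [
    f"{key} = {value} exceeds limit {limits[key]}"
    for key, value in reading.items()
    if key in limits and value > limits[key]
]

if warnings:
    print("Maintenance check recommended:")
    for warning in warnings:
        print(" -", warning)
else:
    print("All monitored values are within limits.")
```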

IO-Link offers the flexibility to create a controls architecture independent of PLC manufacturer or higher-level communications protocols. It enables you to:

    • use existing low-cost sensor cabling
    • enhance your existing controls architecture by adding devices such as RFID readers, barcode and identification vision sensors, linear and pressure transducers, process sensors, discrete or analog I/O, HMI devices, pneumatic and electro-mechanical actuators, condition monitoring, etc.
    • dynamically change device configuration, auto-configure devices on startup, and replace devices plug-and-play
    • enable IIoT, predictive maintenance, machine learning, and artificial intelligence

No other device-level communications protocol provides as many features and benefits as IO-Link while remaining cost-effective and robust enough for industrial automation applications.

Demystifying Machine Learning

Machine learning can help organizations improve manufacturing operations and increase efficiency, productivity, and safety by analyzing data from connected machines and sensors. For example, its algorithms can predict when equipment is likely to fail, so manufacturers can schedule maintenance before problems occur, reducing downtime and repair costs.

How machine learning works

Machine learning teaches computers to learn from data – to do things without being specifically told how to do them. It is a type of artificial intelligence that enables computers to automatically learn and improve their performance through experience.

Imagine you have a bunch of toy cars and want to teach a computer to sort them into two groups: red cars and blue cars. You could show the computer many pictures of red and blue cars and say, “this is a red car” or “this is a blue car” for each one.

After seeing enough examples, the computer can start to guess which group a car belongs in, even if it’s a car that it hasn’t seen before. The machine is “learning” from the examples you show to make better and better guesses over time. That’s machine learning!
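
For readers who like to see it in code, here is a minimal sketch of the toy-car idea using scikit-learn (an assumption; any machine learning library would do). Each “picture” is reduced to an average red/green/blue value, and the labels are the answers we give the computer during training.

```python
# Toy example of supervised learning: classify cars as red or blue from
# their average color. Requires scikit-learn.

from sklearn.neighbors import KNeighborsClassifier

# Training examples: average (R, G, B) color of each toy-car picture.
X_train = [
    [220, 30, 40],   # red car
    [200, 50, 60],   # red car
    [230, 20, 25],   # red car
    [30, 40, 210],   # blue car
    [50, 60, 190],   # blue car
    [25, 35, 220],   # blue car
]
y_train = ["red", "red", "red", "blue", "blue", "blue"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# A car the model has never seen before.
print(model.predict([[210, 45, 55]]))  # -> ['red']
```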

Translating this to an industrial use case

As in the toy car example, we must show the computer each specimen and describe it. Here, the “picture” is made up of sensor data points and the description is a label. Data collected by the sensors can be fed to the machine learning algorithm for the different stages of the machine’s operation, such as running optimally, needing inspection, or needing maintenance.

Depending on the type of machine or process being monitored, data such as vibration, temperature, or pressure measurements can be read from different sensors.

In essence, the algorithm finds a pattern for each stage of the machine’s operation. Given enough data points, it can notify the operator about what must be done when the machine starts to veer toward a different stage.
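
Below is a hedged sketch of the same idea applied to machine data: synthetic sensor readings (vibration, temperature, pressure) labeled with the machine state they were collected in, fed to an off-the-shelf classifier. The numbers are invented for illustration.

```python
# Sketch: classify machine state from labeled sensor readings. The data is
# synthetic; real data would come from the sensors described above,
# collected while the machine is in each known state.

from sklearn.ensemble import RandomForestClassifier

# Each row: [vibration_mm_s, temperature_c, pressure_bar]
X_train = [
    [1.2, 45.0, 6.1], [1.4, 46.5, 6.0], [1.1, 44.0, 6.2],   # running optimally
    [3.5, 58.0, 5.6], [3.8, 60.0, 5.4], [3.2, 57.5, 5.7],   # needs inspection
    [6.9, 75.0, 4.8], [7.4, 78.5, 4.6], [6.5, 74.0, 4.9],   # needs maintenance
]
y_train = ["optimal"] * 3 + ["inspection"] * 3 + ["maintenance"] * 3

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# New readings streaming in from the machine.
state = model.predict([[3.6, 59.0, 5.5]])[0]
if state != "optimal":
    print(f"Machine trending toward '{state}' - notify the operator.")
```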

What infrastructure is needed? Can my PLC do it?

The infrastructure needed varies with the algorithm’s complexity and the data volume. Small, simple tasks like anomaly detection can run on edge devices, though not on traditional automation controllers like PLCs. Complex algorithms and large volumes of data require more extensive infrastructure to finish in a reasonable time. The limiting factor is processing power: the closer to real time we can detect the machine’s state, the more useful the result.
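
For example, here is a small anomaly-detection sketch of the kind that could run on an edge device, assuming a Python runtime and scikit-learn are available there; the readings are synthetic. It learns what “normal” looks like from healthy data and flags readings that deviate.

```python
# Edge-style anomaly detection sketch: train on known-healthy readings,
# then flag readings that do not fit the learned pattern.

from sklearn.ensemble import IsolationForest

# Readings captured while the machine is known to be healthy:
# [vibration_mm_s, temperature_c]
normal_readings = [
    [1.1, 45.2], [1.3, 46.0], [1.2, 44.8], [1.0, 45.5],
    [1.4, 46.3], [1.2, 45.1], [1.3, 45.8], [1.1, 44.9],
]

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_readings)

# predict() returns 1 for readings consistent with training data, -1 for anomalies.
for sample in [[1.2, 45.4], [6.8, 72.0]]:
    label = detector.predict([sample])[0]
    print(sample, "anomaly" if label == -1 else "normal")
```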

Embedded Vision – What It Is and How It Works

Embedded vision is a rapidly growing field that combines computer vision and embedded systems with cameras or other imaging sensors, enabling devices to interpret and understand the visual world around them – as humans do. This technology, with broad applications, is expected to revolutionize how we interact with technology and the world around us and will likely play a major role in the Internet of Things and Industry 4.0 revolution.

Embedded vision uses computer vision algorithms and techniques to process visual information on devices with limited computational resources, such as embedded systems or mobile devices. These systems use cameras or other imaging sensors to acquire visual data and perform tasks on that data, such as image or video processing, object detection, and image analysis.

Applications for embedded vision systems

Among the many applications that use embedded vision systems are:

    • Industrial automation and inspection
    • Medical and biomedical imaging
    • Surveillance and security systems
    • Robotics and drones
    • Automotive and transportation systems

Hardware and software for embedded vision systems

Embedded vision systems typically use a combination of software and hardware to perform their tasks. On the hardware side, embedded vision systems often use special-purpose processors, such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs), to perform the heavy lifting of image and video processing. On the software side, they typically use libraries or frameworks that provide pre-built functions for tasks such as image filtering, object detection, and feature extraction. Common software libraries and frameworks for embedded vision include OpenCV, MATLAB, and Halcon.
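
As an illustration, here is a minimal OpenCV sketch of the kind of processing an embedded vision system performs: filter an image and extract candidate object outlines. The file name is a placeholder; on a real device the frame would come from the camera interface instead.

```python
# Minimal OpenCV example: noise filtering, edge detection, and contour
# extraction on a single image. Requires the opencv-python package.

import cv2

frame = cv2.imread("part_under_inspection.png")  # placeholder image file
if frame is None:
    raise SystemExit("Image not found - supply a test image or a camera frame.")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # reduce to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)               # detect edges

# Contours of the detected edges can feed measurement or presence/absence
# checks further up the application.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Found {len(contours)} candidate object outlines.")
```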

It is also worth noting that the field of embedded vision is active and fast moving, with new architectures, chipsets, and software libraries appearing regularly, making the technology available and accessible to a broader range of applications, devices, and users.

Embedded vision components

The main parts of embedded vision include:

    1. Processor platforms are typically specialized for handling the high computational demands of image and video processing. They may include digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs).
    2. Camera components refer to imaging sensors that acquire visual data. These sensors can include traditional digital cameras and specialized sensors such as stereo cameras, thermal cameras, etc.
    3. Accessories and carrier boards include the various additional hardware and components that interface the camera with the processor and other peripherals. Examples include memory cards, power supplies, and IO connectors.
    4. Housing and mechanics are the physical enclosures of the embedded vision system, including the mechanics that hold the camera, processor, and other components in place, and the housing that protects the system from external factors such as dust and water.
    5. The operating system runs on the processor. It could be a custom firmware or a general-purpose operating system, like Linux or Windows.
    6. Application software is what runs on the embedded vision system to perform tasks such as image processing, object detection, and feature extraction. It is often written in a combination of high-level languages, such as C++ and Python, and lower-level languages, like C.
    7. Feasibility studies evaluate a proposed solution’s technical and economic feasibility, identifying any risks or possible limitations that could arise during the development. They are conducted before the development of any embedded vision systems.
    8. Integration interfaces refer to the process of integrating the various components of the embedded vision system and interfacing it with other systems or devices. This can include integrating the camera, processor, and other hardware and developing software interfaces to enable communication between the embedded vision system and other systems.


Why Invest in Smart Manufacturing Practices?

We’re all privy to talk about smart manufacturing, the smart factory, machine learning, IIoT, IT/OT convergence, and so on, and many manufacturers have already embarked on their smart manufacturing journeys. Let’s take a pause and really think about it… Is it really important, or is it a fad? If it is important, then why?

In my role traveling across the U.S. meeting various manufacturers and machine builders, I often hear about their need to collect data and have certain types of interfaces. But they don’t know what good that data is going to do. Well, let’s get down to the basics and understand this hunger for data and smart manufacturing.

Manufacturing goals

Since the dawn of industrialization, the industry has been focused on efficiency – always addressing how to produce more, better, and quicker. The goals of manufacturing have always revolved around these four things:

    1. Reduce total manufacturing and supply chain costs
    2. Reduce scrap rate and improve quality
    3. Improve/increase asset utilization and machine availability
    4. Reduce unplanned downtime

Manufacturing megatrends

While striving for these goals, we have made improvements that have tremendously helped us as a society to improve our lifestyle. But we are now in a different world altogether. The megatrends that are affecting manufacturing today require manufacturers to be even more focused on these goals to stay competitive and add to their bottom lines.

The megatrends affecting the whole manufacturing industry include:

    • Globalization: The competition for a manufacturer is no longer local. There is always somebody somewhere making products that are cheaper, better or more available to meet demand.
    • Changing consumer behavior: I am old enough to say that, when growing up, there were only a handful of brands and only certain types of products that made it over doorsteps. These days, we have variety in almost every product we consume. And, our taste is constantly changing.
    • Lack of skilled labor: Almost every manufacturer that I talk to expresses that keeping and finding good skilled people has been very difficult. The baby boomers are retiring and creating huge skills gaps in the workplaces.
    • Aging equipment: According to one study, almost $65B worth of equipment in the U.S. is outdated but still in production.
    • Changing regulations: In many industries, regulations require manufacturers to track and trace their products.

Technology has always been the catalyst for achieving new heights in efficiency. Given the megatrends affecting the manufacturing sector, the need for data is dire. Manufacturers must make decisions in real-time and having relevant and useful data is a key to success in this new economy.

Smart manufacturing practices

What we call “smart manufacturing practices” are practices that use technology to change how we do things today and improve them many times over. They revolve around three key areas:

    1. Efficiency: If a line is down, the machine can point directly to where the problem is and tell you how to fix it. This reduces downtime. Even better is using data and patterns about the system to predict when the machine might fail.
    2. Flexibility: Using technology to retool or change over the line quickly for the next production batch, or to respond to changing consumer tastes by adopting fast, agile manufacturing practices.
    3. Visibility: Operators, maintenance workers, and plant management all need a variety of information about the machine, the line, or even the processes. If we don’t have this data, we are falling behind.

In a nutshell, smart manufacturing practices that focus on one or more of these key areas help manufacturers boost productivity and address the challenges presented by the megatrends. Hence, it is important to invest in these practices to stay competitive.

One more thing: There is no finish line when it comes to smart manufacturing. It should become a part of your continuous improvement program to evaluate and invest in technology that offers you more visibility, improves efficiency, and adds more flexibility to how you do things.