Miniature Sensors With Monumental Capabilities

The demands of medical and semiconductor automation equipment often exceed the capabilities of standard self-contained optical sensors, driving the requirement for miniature optical sensors. In some cases, application requirements in other industries can also be best solved by these same miniature optical sensors with advanced capabilities. So, what do these optical sensors offer that makes them so much better?

Applications

Let’s begin with some of the applications that require these capabilities. Medical applications include lab-on-a-chip microfluidics, liquid presence or level in drip chambers or pipettes, turbidity, drop detection, and micro or macro bubble detection, to name a few. Semicon applications include wafer presence on end-effectors, wafer mapping, wafer centering, and wafer presence in transfer chambers. Other applications that benefit from these sensors include packaging pharmaceuticals, detecting extremely small parts, and spray detection. In addition, these sensors are frequently used in customer-specific designs because they can be customized for specific applications.

These sensors require a remote amplifier, which is sometimes unpopular with design engineers because it adds cost and extra work during installation. However, the remote amplifier offers real advantages. Because the optical function is separate from the control unit, it can be incorporated into an extremely tiny sensor head. Since the LEDs are mounted in the sensor heads, only a small wired connection runs back to the amplifier. Unlike fiber optics, this wired connection to the emitting LED and receiver allows a very minimal or even zero bending radius, thanks to the cable in use.

Features

The new generation of amplifiers offers tremendous flexibility with advanced features, including:

    • OLED display
    • Intuitive menu structure
    • LEDs for status, communication, and warnings
    • Teaching/Parametrization
    • Single-point, two-point, window, dynamic, and tracking operating modes
    • Multiple teach modes: direct, dynamic, external, automatic, and IO-Link
    • Selectable power modes
    • Selectable outputs
    • Selectable speed settings
    • Auto-sync up to 8 amplifiers
    • Configurable delays and hysteresis
    • Compatible with all existing sensor heads

The sensor heads, or optical heads, come in a wide variety of housings and can be customized to meet specific requirements. They are built with small precision LEDs, photodiodes, phototransistors, and complete laser modules manufactured according to a patented process. Due to the high optical quality, additional lenses or apertures are no longer necessary.

A multitude of special characteristics completely differentiates these sensors from products made by standard optical sensor manufacturers. The range includes extraordinary miniature optical sensors as standard products, optimally adapted customized solutions, and precision optoelectronic components such as LEDs, photodiodes, and laser modules. High optical quality and unique modular designs, combined with the greatest possible manufacturing flexibility, guarantee solutions that are exactly adapted to each user’s problems and needs.

Demystifying Machine Learning

Machine learning can help organizations improve manufacturing operations and increase efficiency, productivity, and safety by analyzing data from connected machines and sensors. For example, its algorithms can predict when equipment is likely to fail, so manufacturers can schedule maintenance before problems occur, reducing downtime and repair costs.

How machine learning works

Machine learning teaches computers to learn from data – to do things without being specifically told how to do them. It is a type of artificial intelligence that enables computers to automatically learn and improve their performance through experience.

Imagine you have a bunch of toy cars and want to teach a computer to sort them into two groups: red cars and blue cars. You could show the computer many pictures of red and blue cars and say, “this is a red car” or “this is a blue car” for each one.

After seeing enough examples, the computer can start to guess which group a car belongs in, even if it’s a car that it hasn’t seen before. The machine is “learning” from the examples you show to make better and better guesses over time. That’s machine learning!
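
To make that concrete, here is a minimal sketch of supervised classification in Python using scikit-learn. The color measurements and labels are invented purely for illustration:

```python
# Minimal supervised-learning sketch mirroring the toy car example:
# feature vectors stand in for "pictures," strings for the spoken labels.
# All values are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Each "car" is reduced to an (R, G, B) color measurement.
X_train = [
    [220, 30, 25],   # red car
    [200, 45, 40],   # red car
    [30, 40, 210],   # blue car
    [45, 60, 190],   # blue car
]
y_train = ["red", "red", "blue", "blue"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# A car the model has never seen before:
print(model.predict([[210, 50, 35]]))  # -> ['red']
```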

Steps to translate it to an industrial use case

As in the toy car example, we must have pictures of each specimen and describe them to the computer. The image, in this case, is made up of data points, and the description is a label. Sensor data can be fed to the machine learning algorithm at different stages of the machine’s operation, such as when it is running optimally, needs inspection, or needs maintenance.

Data taken from vibration, temperature or pressure measures, etc., can be read from different sensors, depending on the type of machine or process to monitor.

In essence, the algorithm finds a pattern for each stage of the machine’s operation. Given enough data points, it can notify the operator about what must be done when the machine starts to veer toward a different stage.
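
As a hedged sketch of what that might look like, here is the same supervised pattern from the toy car example, with sensor readings as features and the known operating stage as the label (all readings and stages invented):

```python
# Sketch: classifying a machine's operating stage from sensor readings.
# Feature order: [vibration (mm/s), temperature (deg C), pressure (bar)].
# The readings and stage labels below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [1.2, 55, 6.1],  # running optimally
    [1.4, 57, 6.0],  # running optimally
    [3.8, 68, 5.4],  # needs inspection
    [4.1, 71, 5.2],  # needs inspection
    [7.5, 85, 4.3],  # needs maintenance
    [8.2, 88, 4.1],  # needs maintenance
]
y_train = ["optimal", "optimal", "inspection",
           "inspection", "maintenance", "maintenance"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# New readings veering away from the "optimal" pattern:
print(model.predict([[4.0, 70, 5.3]]))  # -> ['inspection']
```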

What infrastructure is needed? Can my PLC do it?

The infrastructure needed can vary depending on the algorithm’s complexity and the data volume. Small, simple tasks like anomaly detection can run on edge devices, though not on traditional automation controllers like PLCs. Complex algorithms and significant volumes of data require more extensive infrastructure to run in a reasonable time. The deciding factor is processing power: the closer to real time we can detect the machine’s state, the better the usability.
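
For the simple edge-friendly case, anomaly detection can be as small as an unsupervised model trained only on “normal” readings. One possible sketch, with invented values:

```python
# Sketch: lightweight anomaly detection of the kind that could run on an
# edge device. The model learns only "normal" readings and flags outliers.
from sklearn.ensemble import IsolationForest

# [vibration (mm/s), temperature (deg C)] under normal operation (invented).
normal_readings = [[1.2, 55], [1.3, 56], [1.1, 54], [1.4, 57], [1.2, 56]]
detector = IsolationForest(random_state=0).fit(normal_readings)

# predict() returns 1 for normal readings and -1 for anomalies:
print(detector.predict([[1.2, 55], [7.9, 86]]))  # -> [ 1 -1]
```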

Embedded vision – What It Is and How It Works

Embedded vision is a rapidly growing field that combines computer vision and embedded systems with cameras or other imaging sensors, enabling devices to interpret and understand the visual world around them – as humans do. This broadly applicable technology is expected to revolutionize how we interact with machines and the world around us, and it will likely play a major role in the Internet of Things and the Industry 4.0 revolution.

Embedded vision uses computer vision algorithms and techniques to process visual information on devices with limited computational resources, such as embedded systems or mobile devices. These systems use cameras or other imaging sensors to acquire visual data and perform tasks on that data, such as image or video processing, object detection, and image analysis.

Applications for embedded vision systems

Among the many applications that use embedded vision systems are:

    • Industrial automation and inspection
    • Medical and biomedical imaging
    • Surveillance and security systems
    • Robotics and drones
    • Automotive and transportation systems

Hardware and software for embedded vision systems

Embedded vision systems typically use a combination of software and hardware to perform their tasks. On the hardware side, embedded vision systems often use special-purpose processors, such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs), to perform the heavy lifting of image and video processing. On the software side, they typically use libraries or frameworks that provide pre-built functions for tasks such as image filtering, object detection, and feature extraction. Common software libraries and frameworks for embedded vision include OpenCV, MATLAB, and Halcon.
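
As a small illustration of what such a library provides, the OpenCV snippet below chains image filtering, thresholding, and contour-based object detection. The file name and threshold value are placeholders to be tuned for a real application:

```python
# Sketch of a typical embedded-vision pipeline with OpenCV:
# acquire -> filter -> segment -> detect. "part.png" and the threshold
# of 127 are placeholders, not values from a real application.
import cv2

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 0)            # image filtering
_, mask = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)

# Contours on the binary mask act as a simple object detector.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"detected {len(contours)} object(s)")
```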

It’s also important to note that the field of embedded vision is active and fast moving, with new architectures, chipsets, and software libraries appearing regularly, making this technology more available and accessible to a broader range of applications, devices, and users.

Embedded vision components

The main parts of embedded vision include:

    1. Processor platforms are typically specialized for handling the high computational demands of image and video processing. They may include digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs).
    2. Camera components refer to imaging sensors that acquire visual data. These sensors can include traditional digital cameras and specialized sensors such as stereo cameras, thermal cameras, etc.
    3. Accessories and carrier boards include the various additional hardware and components that interface the camera with the processor and other peripherals. Examples include memory cards, power supplies, and IO connectors.
    4. Housing and mechanics are the physical enclosures of the embedded vision system, including the mechanics that hold the camera, processor, and other components in place, and the housing that protects the system from external factors such as dust and water.
    5. The operating system runs on the processor. It could be a custom firmware or a general-purpose operating system, like Linux or Windows.
    6. Application software runs on the embedded vision system to perform tasks such as image processing, object detection, and feature extraction. This software often uses a combination of high-level programming languages, such as C++ and Python, and lower-level languages, like C.
    7. Feasibility studies evaluate a proposed solution’s technical and economic feasibility, identifying any risks or possible limitations that could arise during development. They are conducted before the development of any embedded vision system.
    8. Integration interfaces refer to the process of integrating the various components of the embedded vision system and interfacing it with other systems or devices. This can include integrating the camera, processor, and other hardware and developing software interfaces to enable communication between the embedded vision system and other systems.

Learn more here about selecting the most efficient and cost-effective vision product for your project or application.

Using Guided Changeover to Reduce Maintenance Costs, Downtime

A guided changeover system can drastically reduce the errors involved with machine operation, especially when added to machines using fully automated changeovers. Processing multiple parts and recipes during a production routine requires a range of machines, and tolerances are important to quantify. Relying only on the human element is detrimental to profits, machine maintenance, and production volumes. Implementing operator assistance with visual guidance will reveal inefficiencies and allow for vast improvements.

Removing human error

Unverified manual adjustments may cause machine fatigue or failure. In a traditional manual changeover system, the frequency of machine maintenance is greater if proper tolerances are not observed at each changeover. Using IO-Link can remove the variable of human error with step-by-step instructions paired with precise sensors in closed-loop feedback. The machine can start up and run only when all parts are in the correct position.
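
One way to picture that closed loop is as a simple interlock: read each position sensor, compare it against the active recipe’s tolerance, and release the machine only when every axis is in range. The sketch below is hypothetical; in a real system the positions would come from IO-Link process data, and the recipe targets and tolerances here are invented:

```python
# Hypothetical guided-changeover interlock. read_position() is a
# stand-in for reading IO-Link process data; here it returns simulated
# measurements. Recipe targets and tolerances are invented.
RECIPE = {                      # axis: (target mm, tolerance mm)
    "guide_rail": (120.0, 0.5),
    "side_belt": (85.0, 0.5),
    "top_stop": (42.0, 0.2),
}

SIMULATED = {"guide_rail": 120.2, "side_belt": 85.0, "top_stop": 41.7}

def read_position(axis: str) -> float:
    return SIMULATED[axis]      # stand-in for a real sensor read

def changeover_complete() -> bool:
    """Allow startup only when every axis is within tolerance."""
    ok = True
    for axis, (target, tol) in RECIPE.items():
        actual = read_position(axis)
        if abs(actual - target) > tol:
            print(f"{axis}: move to {target} mm (currently {actual} mm)")
            ok = False
    return ok

print("machine may start" if changeover_complete() else "startup blocked")
```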

Preventative maintenance and condition monitoring

Preventative maintenance is achievable with the assistance of sensors, technology, and systems. Using condition monitoring on motors, pumps, and other critical components can help prevent unplanned maintenance and notably improve maintenance effectiveness through custom alerts and notifications, supported by a highly useful database and graphing function.

A repeatable maintenance routine based on condition monitoring data and using a system to guide machine changeover will prolong machine life and potentially eliminate downtime altogether.

For more, read this real-world application story, including an automated format change to eliminate human error, reduce waste and decrease downtime.

Automation Insights: Top Blogs From 2022

It’s an understatement to say 2022 had its challenges. But looking back at the supply chain disruptions, inflation, and other trials threatening success in many industries, including manufacturing, there were practical insights we can benefit from as we dive into 2023. Below are the most popular blogs from last year’s Automation Insights site.

    1. Evolution of Pneumatic Cylinder Sensors

Today’s pneumatic cylinders are compact, reliable, and cost-effective prime movers for automated equipment. They’re used in many industrial applications, such as machinery, material handling, assembly, robotics, and medical. One challenge facing OEMs, integrators, and end users is how to detect reliably whether the cylinder is fully extended, retracted, or positioned somewhere in between before allowing machine movement.

Read more.

    2. Series: Condition Monitoring & Predictive Maintenance

By analyzing which symptoms of failure are likely to appear in the predictive domain for a given piece of equipment, you can determine which failure indicators to prioritize in your own condition monitoring and predictive maintenance discussions.

Read the series.

    3. Know Your RFID Frequency Basics

In 2008 I purchased my first toll road RFID transponder, letting me drive through and pay my toll without stopping at a booth. This was my first real-life exposure to RFID, and it was magical. Back then, all I knew was that RFID stood for “radio frequency identification” and that it exchanged data between a transmitter and receiver using radio waves. That’s enough for a highway driver, but you’ll need more information to use RFID in an industrial automation setting. So here are some basics on what makes up an RFID system and the uses of different radio frequencies.

Read more.

    4. IO-Link Event Data: How Sensors Tell You How They’re Doing

I have been working with IO-Link for more than 10 years, so I’ve heard lots of questions about how it works. One line of questions I hear from customers is about the operating condition of sensors. “I wish I knew when the IO-Link device loses output power,” or, “I wish my IO-Link photoelectric sensor would let me know when the lens is dirty.” The good news is that it does give you this information by sending Event Data. That’s a type of data that is usually not a focus of users, although it is available in JSON format from the REST API.

Read more.

    5. Converting Analog Signals to Digital for Improved Performance

We live in an analog world, where we experience temperatures, pressures, sounds, colors, etc., in seemingly infinite values. There are infinite temperature values between 70 and 71 degrees, for example, and an infinite number of pressure values between 50 and 51 psi.

Read more.

We appreciate your dedication to Automation Insights in 2022 and look forward to growth and innovation in 2023.

Simplified Sensing Over a Complex Headache

The constant need for more data and higher accuracy has pushed sensing technologies to the extreme. Advancements in factory automation have created a perfect storm of innovation and new capabilities. This is probably an unpopular opinion, but do we always need all of this?

I started my career in factory automation in the late 90s. This was a time of technology transitions. PLCs had been around for ages but had never been so affordable. Technologies, such as time-of-flight laser measurement, industrial cameras, and inductive coupling, were new and exciting, and they were becoming more affordable, too.

As a controls engineer, I remember using these advanced technologies and systems as a way of keeping my projects future-proof – or so I thought. In reality, sometimes they just made things more complicated.

Let me explain this using an example where tried, true and affordable sensors could have made the project more reliable and future-proof from the start.

Photoelectric sensors have earned their place in the automation hall of fame. I don’t see a time when their use will not be necessary as a reliable way to conduct presence detection.

I was working on a project that required several washing machine cabinet bases to be counted and oriented correctly on a conveyor. I wanted to use an industrial camera because the technology was getting better and better. I paid $7,000 for the camera and accessories. After several days and iterations, the camera system was working perfectly. It continued working for about a week before it was knocked out of alignment by a production worker who was using it as a leaning post. It took another day or so to dial it back in.

Tried, true and affordable win out

The solution I ultimately chose was the easiest. I strategically placed seven basic photoeyes underneath the conveyor to identify which base the system was looking at and whether certain characteristics were present for quality tracking. My investment was around $400, and the solution was extremely well protected from failure. And if a sensor went out, rather than calling an engineer in the middle of the night, a maintenance electrician could simply replace it with a new one.

Another huge benefit of using photoeyes was the avoidance of buyer’s remorse. Camera technology is always evolving. From one day to the next, cameras get better and more capable, but they may also come with proprietary comms or software. Basic photoelectric sensors with a PNP or NPN output can easily be swapped out with almost any brand for decades to come.

Keep it simple

At the end of the day, sometimes it is best to keep the solution simple, clean, and backed by the tried-and-true technologies in factory automation. Next time you dig into a project, take a moment to think about my example. Melt the solution down to the lowest common denominator and build up the complexity from there. You might save more than money; you might save a headache or two.

Securing Your Supply Chain and Beefing Up Traceability

Snake oil is one of the most maligned products in all of history. Synonymous with cure-alls and quackery, it is a useless rip-off, right? Well, no. It’s actually high in the omega-3 fatty acids EPA and DHA.

Snake oil fell from prominence because it was all too easy for charlatans to brew up fake oil and pass it off as the genuine article, with sometimes dangerous outcomes.

Today’s customers are smarter than ever and waking up to ever-evolving knockoffs. We are more aware of fake reviews and fake products. Brands that can prove their products are genuine can command higher prices and forge long-standing customer relationships. This starts with securing your supply chain and beefing up traceability.

Securing your brand

Many roads lead to Rome and no single technology will be the one silver bullet to secure your supply chain. That said, RFID technology is likely to play an important role. RFID allows for multi-read without a line of sight, making it a great choice in both production and warehouse/logistics environments. Perhaps more importantly, RFID tags can be encrypted. This adds protection against would-be cheats. The ability to both read and write provides additional flexibility for tracking and tracing in production.

RFID is not the only traceability solution, and smart companies will use a combination of technologies to secure their brands. We’ve seen holograms on baseball cards and QR codes on underwear. We’ve seen authorized retailer programs … and RFID on coffee cups and medical devices. As you think through the various options, it’s worth keeping in mind the following four questions:

      1. Is the technology secure? Does it support modern cryptographic methods?
      2. Does the solution add value – i.e. improve current processes?
      3. Is the technology future-proof?
      4. Is the technology robust?

Any technology that answers yes to these questions will be well-suited to meet this new market. Brands that stay ahead of the curve will grow and those who fall behind the curve risk ending up in the dustbin – right next to the snake oil.

Navigating the Automotive Plant for Automation Opportunities

When one first looks at an automotive manufacturing plant, the thought of identifying opportunities for automation may be overwhelming to some.

These plants are multi-functional and complex. A typical plant manages several processes, such as:

    • Press, stamping, and die automation
    • Welding, joining, and body in white
    • Painting
    • Final assembly
    • Robot cells
    • Material handling, including AGV, conveyor, and ASRS
    • Engine and powertrain assembly
    • Casting and machining parts
    • EV and EV sub-processes

Navigating the complexity of the automation processes in your plant to promote more automation products will take some time. You will have to approach this task through:

Time. When tackling a large automotive plant, it’s important to understand how to dissect it into smaller parts and spread out your strategies over a full year or two.

Understanding. Probably the most important thing is to understand the processes and flow of the build assembly process in a plant and then to list the strategic products that can be of use in each area.

Prioritizing. Once you have a good understanding of the plant processes and a strategic timeline to present these technologies, the next step is to prioritize your time and the technology toward the highest return on investment. You may learn, for example, that the plant could use a great deal of weld cables and weld sensors, so this would be your starting point for presenting new automation technology.

Knowing who to talk to in the plant. The key to getting the best return on your time and fast approval of your automation technology is knowing the key people in the plant who can influence the use of new automation technology. Typically, you should list and communicate monthly with engineering groups, process improvement groups, maintenance groups, purchasing, and quality departments. Narrowing your focus to specific groups or individuals can help you get technology approved faster. Don’t feel like you need to know everyone in the plant, just the key individuals.

Knowing what subjects to discuss. Don’t just think MRO! Talk about all five technology opportunities for bringing new automation into your plant:

        1. MRO
        2. Large programs and specs
        3. Project upgrades
        4. Training
        5. VMI/vending

Most people concentrate on the MRO business and don’t engage in discussions to uncover the other ways to introduce automation technology into the plant. Concentrating on all five of these opportunities will lead to placing a lot of automation in the plant for a very long time.

So, when you look at your plant, be excited about all the opportunities to present automation throughout it, and watch your technology levels soar toward manufacturing excellence.

Good luck as you begin implementing your expansion of automation technology.

Understanding Image Processing Standards and Their Benefits

In the industrial image processing world, there are standards – GenICam, GigE Vision, and USB3 Vision – that are similar to the USB and Ethernet standards used in consumer products. What do these image processing standards mean, and what are their benefits?

The GenICam standard, which is maintained by the European Machine Vision Association (EMVA), serves as a base for all the image processing standards. This standard abstracts the user access to the features of a camera and defines the Standard Feature Naming Convention (SFNC) that all manufacturers use so that common feature names are used to describe the same functions.

Additionally, manufacturers can add specific “Quality of Implementation” features outside of the SFNC definitions to differentiate their products from ones made by other manufacturers. For example, a camera can offer specific features like frame average, flat field correction, logic gates, etc. GenICam/GigE Vision-based driver and software solutions from other manufacturers can also use these features without any problem.
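
To make the SFNC idea concrete, the sketch below mocks a GenICam-style node map. The mock classes are invented for illustration; the point is that the SFNC feature names (“ExposureTime”, “Gain”) are the standardized part that stays the same across vendors:

```python
# Illustrative mock of a GenICam-style node map. The classes are invented
# for this sketch; with a real GenICam-based SDK, the SFNC feature *names*
# below are what remain identical across manufacturers.
class _Node:
    def __init__(self, value):
        self.value = value

class MockNodeMap:
    def __init__(self):
        self.ExposureTime = _Node(20000.0)  # SFNC name, microseconds
        self.Gain = _Node(0.0)              # SFNC name, dB

node_map = MockNodeMap()
node_map.ExposureTime.value = 10000.0   # same code works on any vendor
node_map.Gain.value = 6.0
print(node_map.ExposureTime.value, node_map.Gain.value)
```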

“On-the-wire” standards

USB3 Vision and GigE Vision are “on-the-wire” interfaces between the driver and the camera. These standards are maintained by the Automated Imaging Association (AIA). You are probably familiar with “on-the-wire” standards and their advantages if you have used plug-and-play devices like USB memory sticks, USB mice, or USB hard disks. They work together without any problem, even if they are made by different manufacturers. It’s the same thing with GenICam/GigE Vision/USB3 Vision-based driver/software solutions. The standards define a transport layer, which controls device detection, configuration (register access), data streaming, and event handling, and connects the interface to GenICam (Figure 1).

USB3 Vision builds on the GigE Vision standard by including accessories like cables. The mechanics are part of the standard and define lockable cable interfaces, as one example. This creates a more robust interface for manufacturing environments.

Are standards a must-have?

Technically, standards aren’t necessary. But they make it possible to use products from multiple manufacturers and make devices more useful in the long term. For a historical comparison, look at USB 2.0 cameras and GigE Vision. USB 2.0 industrial cameras were introduced in 2004 and only worked with proprietary drivers (Figure 2) between the client and Vision Library/SDK and between the driver and camera. Two years later, Gigabit Ethernet cameras were introduced with the GigE Vision image processing standard, which didn’t require proprietary drivers to operate.

In the case of a system crash, users of the USB 2.0 cameras wouldn’t know whether the proprietary driver or the software library was to blame, which made them difficult to support. During the decision phase of selecting sensors and support, the customer had to keep the manufacturer’s product portfolio in mind to meet their specifications. Afterward, the application was implemented and only worked with the proprietary interfaces of that manufacturer. For future projects or adaptations, for example if a new sensor was required, it was necessary for the manufacturer to offer that sensor. Otherwise, it was necessary to change manufacturers, which meant a new implementation of the software as well. In contrast, flexibility is a big advantage with Gigabit Ethernet cameras and GigE Vision: GigE Vision-compliant cameras can be used interchangeably without regard to the manufacturer.

Despite this obvious benefit, USB cameras remain more prevalent in certain image processing fields, such as medicine, where the application tightly defines the camera’s sensor resolution, image format, and image frequency (bandwidth), and the environment dictates cable length and the choice of frame grabber or digital camera solution. With such tightly defined requirements, USB cameras solve the challenges of these applications.

It’s hard to believe, but a few years ago there weren’t any standards in the image processing market. Each manufacturer had its own solution. Those times are gone; the whole market has pulled together, to the benefit of customers. Because of the standards, the interaction between hardware, driver, and software delivers the experience of a uniform whole, and the quality of the market has improved. For the customer, it is easier to make product decisions since they are not locked into one company’s portfolio. With standards-compliant products, the customer can always choose the best components, independent of the company. With GenICam as a base, the image processing market offers the best interface for every application, either with GigE Vision or USB3 Vision.

Edge Gateways To Support Real-Time Condition Monitoring Data

In my previous blog post from early summer, I talked about IO-Link sensors with condition monitoring features that work with PLCs. I covered how condition monitoring variables can be set up as alarms and how simple logic can be set up inside the sensor so it only raises those alarms to the PLC in real time to alert operators when something is wrong. Many companies, however, want to take advantage of IoT sensor data with the long-term goal of analyzing environmental conditions to predict maintenance needs in real time rather than relying on a schedule. Some even want to connect directly to their MES systems to issue daily maintenance orders to maintenance personnel, which requires a separate device, such as an IoT edge gateway.

Edge gateway benefits

The biggest benefit of an IoT edge gateway is the ability to process and store large amounts of data quickly, enabling real-time applications to use that data efficiently.

An IoT edge gateway typically sits at the edge of your network and gathers all the sensor data, either directly from the sensors or from the PLC. Since there will be a large amount of data from all the sensors on the network, part of the edge gateway setup is to filter out the relevant and important information from this vast stream and process it. The edge gateway must also handle the required data volume reliably and with low latency. These factors are closely tied to the gateway’s CPU and memory specifications.
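
One common form of that filtering is a deadband: forward a reading only when it moves meaningfully away from the last value reported. A minimal sketch, with an invented threshold:

```python
# Deadband filtering: one simple way a gateway reduces raw sensor
# traffic. The 0.5 threshold is an invented example value.
def deadband(readings, threshold=0.5):
    last = None
    for value in readings:
        if last is None or abs(value - last) >= threshold:
            last = value
            yield value         # forward only meaningful changes

raw = [20.0, 20.1, 20.2, 21.0, 21.1, 25.3]
print(list(deadband(raw)))      # -> [20.0, 21.0, 25.3]
```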

Beyond the performance of the edge gateway comes the “gateway” aspect, which provides translation to different communications networks, whether local or cloud-based. There are the hardware specs of the gateway, such as whether it uses serial, USB, or Ethernet for that connection, as well as its environmental ratings. Then, more importantly, there is the software side of the edge gateway. There are cloud-based communications standards designed for different applications and for either private or public cloud networks.

Edge gateways support different communications protocols, such as HTTPS, MQTT, RESTful APIs, and C/Python APIs. The gateway portion also helps convert between those protocols and eases interoperability with different platforms, such as AWS, Azure, Ignition, and Wonderware. This provides data transparency so that all the gathered data can be used across many different software platforms.
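
As a sketch of the protocol side, here is what publishing condition data over MQTT might look like with the open-source paho-mqtt client. The broker address, topic, and payload fields are placeholders:

```python
# Sketch: publishing condition-monitoring data over MQTT.
# Broker host, topic, and payload fields are placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                        # paho-mqtt 1.x constructor
client.connect("broker.example.com", 1883)    # placeholder broker

payload = json.dumps({
    "sensor": "pump-07",
    "vibration_mm_s": 1.8,
    "temperature_c": 61.2,
})
client.publish("plant/line1/condition", payload)
client.disconnect()
```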

To reach the IoT end goal, an edge gateway is necessary, and it’s important to choose the correct one.