Demystifying Machine Learning

Machine learning can help organizations improve manufacturing operations and increase efficiency, productivity, and safety by analyzing data from connected machines and sensors. For example, its algorithms can predict when equipment is likely to fail, so manufacturers can schedule maintenance before problems occur, thereby reducing downtime and repair costs.

How machine learning works

Machine learning teaches computers to learn from data – to do things without being specifically told how to do them. It is a type of artificial intelligence that enables computers to learn automatically and improve their performance through experience.

Imagine you have a bunch of toy cars and want to teach a computer to sort them into two groups: red and blue cars. You could show the computer many pictures of red and blue cars and say, “this is a red car” or “this is a blue car” for each one.

After seeing enough examples, the computer can start to guess which group a car belongs in, even if it’s a car it hasn’t seen before. The machine is “learning” from the examples you show it to make better and better guesses over time. That’s machine learning!

Steps to translate this to an industrial use case

As in the toy car example, we must have pictures of each specimen and describe them to the computer. The image, in this case, is made up of data points, and the description is a label. Data collected by sensors can be fed to the machine learning algorithm at different stages of the machine’s operation – when it is running optimally, needs inspection, needs maintenance, and so on.

Vibration, temperature, or pressure measurements, among others, can be read from different sensors, depending on the type of machine or process being monitored.

In essence, the algorithm finds a pattern for each stage of the machine’s operation. Given enough data points, it can notify the operator about what must be done when the machine starts to veer toward a different stage.
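
To make this concrete, here is a minimal sketch of that idea in Python using scikit-learn: sensor readings labeled by machine state are used to train a classifier, which can then guess the state of a new, unseen reading. The CSV file name, column names, and state labels are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch: classify machine state from labeled sensor readings.
# Assumes a hypothetical CSV with vibration, temperature, and pressure columns
# plus a "state" label ("optimal", "inspect", "maintain"); uses scikit-learn.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = pd.read_csv("machine_readings.csv")          # hypothetical file
X = data[["vibration", "temperature", "pressure"]]  # data points (the "picture")
y = data["state"]                                   # label (the "description")

# Hold back some examples to check how well the model guesses unseen readings
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                         # "learning" from the examples

print("accuracy on unseen readings:", accuracy_score(y_test, model.predict(X_test)))
print("predicted state:", model.predict([[4.2, 71.5, 2.9]])[0])  # a new reading
```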

What infrastructure is needed? Can my PLC do it?

The infrastructure needed can vary depending on the algorithm’s complexity and the data volume. Small, simple tasks like anomaly detection can run on edge devices, but not on traditional automation controllers like PLCs. Complex algorithms and significant volumes of data require more extensive infrastructure to finish in a reasonable time. The limiting factor is processing power: the closer to real time we can detect the machine’s state, the more useful the result.
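
As a rough illustration of the kind of small task that can run on an edge device, here is a sketch of a simple anomaly check in plain Python with no heavy libraries: it keeps a short history of readings and flags values that drift far from the recent average. The window size and threshold are assumptions that would need tuning for a real signal.

```python
# Minimal sketch: lightweight anomaly detection suitable for a small edge device.
# Keeps a rolling window of recent sensor values and flags readings that sit far
# from the recent mean. Window size and threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

WINDOW = 100        # number of recent samples to keep
THRESHOLD = 3.0     # flag readings more than 3 standard deviations away

window = deque(maxlen=WINDOW)

def check_reading(value):
    """Return True if the reading looks anomalous relative to recent history."""
    anomalous = False
    if len(window) >= 10:                       # wait for a minimal history
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value - mu) > THRESHOLD * sigma:
            anomalous = True
    window.append(value)
    return anomalous

# Example: feed in vibration samples as they arrive from the sensor
for sample in [0.8, 0.9, 0.85, 0.82, 0.88, 0.87, 0.9, 0.86, 0.84, 0.83, 2.7]:
    if check_reading(sample):
        print("anomaly detected:", sample)
```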

Embedded vision – What It Is and How It Works

Embedded vision is a rapidly growing field that combines computer vision and embedded systems with cameras or other imaging sensors, enabling devices to interpret and understand the visual world around them – as humans do. This broadly applicable technology is expected to revolutionize how we interact with technology and the world around us, and it will likely play a major role in the Internet of Things and the Industry 4.0 revolution.

Embedded vision uses computer vision algorithms and techniques to process visual information on devices with limited computational resources, such as embedded systems or mobile devices. These systems use cameras or other imaging sensors to acquire visual data and perform tasks on that data, such as image or video processing, object detection, and image analysis.

Applications for embedded vision systems

Among the many applications that use embedded vision systems are:

    • Industrial automation and inspection
    • Medical and biomedical imaging
    • Surveillance and security systems
    • Robotics and drones
    • Automotive and transportation systems

Hardware and software for embedded vision systems

Embedded vision systems typically use a combination of software and hardware to perform their tasks. On the hardware side, embedded vision systems often use special-purpose processors, such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs), to perform the heavy lifting of image and video processing. On the software side, they typically use libraries or frameworks that provide pre-built functions for tasks, such as image filtering, object detection, and feature extraction. Some common software libraries and frameworks for embedded vision include OpenCV, MATLAB, Halcon, etc.
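
For a sense of what that software side looks like, here is a minimal sketch of a typical pipeline using OpenCV (assuming OpenCV 4, where findContours returns two values, and a camera at index 0): grab a frame, filter it, detect edges, and extract contours as simple object candidates. The area threshold is an arbitrary illustrative value.

```python
# Minimal sketch of an embedded-vision style pipeline using OpenCV:
# acquire a frame, filter it, and extract simple features (contours).
import cv2

cap = cv2.VideoCapture(0)                 # first attached camera (assumption)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # image filtering
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)                # edge detection

    # Feature extraction: find object outlines and keep the larger ones
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    objects = [c for c in contours if cv2.contourArea(c) > 500]
    print(f"detected {len(objects)} candidate objects")
else:
    print("no frame captured")
```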

It’s also important to note that the field of embedded vision is active and fast moving, with new architectures, chipsets, and software libraries appearing regularly, making this technology more accessible to a broader range of applications, devices, and users.

Embedded vision components

The main parts of embedded vision include:

    1. Processor platforms are typically specialized for handling the high computational demands of image and video processing. They may include digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs).
    2. Camera components refer to imaging sensors that acquire visual data. These sensors can include traditional digital cameras and specialized sensors such as stereo cameras, thermal cameras, etc.
    3. Accessories and carrier boards include the various additional hardware and components that interface the camera with the processor and other peripherals. Examples include memory cards, power supplies, and IO connectors.
    4. Housing and mechanics are the physical enclosures of the embedded vision system, including the mechanics that hold the camera, processor, and other components in place, and the housing that protects the system from external factors such as dust and water.
    5. The operating system runs on the processor. It could be a custom firmware or a general-purpose operating system, like Linux or Windows.
    6. Application software runs on the embedded vision system to perform tasks such as image processing, object detection, and feature extraction. This software often combines high-level programming languages, such as C++ and Python, with lower-level languages such as C.
    7. Feasibility studies evaluate a proposed solution’s technical and economic feasibility, identifying any risks or possible limitations that could arise during the development. They are conducted before the development of any embedded vision systems.
    8. Integration interfaces refer to the process of integrating the various components of the embedded vision system and interfacing it with other systems or devices. This can include integrating the camera, processor, and other hardware and developing software interfaces to enable communication between the embedded vision system and other systems.

Learn more here about selecting the most efficient and cost-effective vision product for your project or application.

Using Guided Changeover to Reduce Maintenance Costs, Downtime

A guided changeover system can drastically reduce the errors involved with machine operation, especially when added to machines using fully automated changeovers. Processing multiple parts and recipes during a production routine requires a range of machines, and tolerances are important to quantify. Relying only on the human element is detrimental to profits, machine maintenance, and production volumes. Implementing operator assistance with visual guidance will reveal inefficiencies and allow for vast improvements.

Removing human error

Unverified manual adjustments may cause machine fatigue or failure. In a traditional manual changeover system, the frequency of machine maintenance is greater if proper tolerances are not observed at each changeover. Using IO-Link can remove the variable of human error with step-by-step instructions paired with precise sensors in closed-loop feedback. The machine can start up and run only when all parts are in the correct position.
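
A minimal sketch of that closed-loop idea follows. The recipe values and the read_position() helper are hypothetical stand-ins for however your controller exposes IO-Link position sensor data; the point is simply that every change point must be verified against its tolerance before the machine is released to start.

```python
# Minimal sketch of a closed-loop changeover check. read_position() is a
# hypothetical stand-in for reading IO-Link position sensors; here it
# returns simulated values for illustration.

RECIPE = {                      # change point: (target position mm, tolerance mm)
    "guide_rail_left":  (120.0, 0.5),
    "guide_rail_right": (120.0, 0.5),
    "filler_height":    (305.0, 1.0),
}

def read_position(change_point):
    """Hypothetical sensor read; simulated readings stand in for real hardware."""
    simulated = {"guide_rail_left": 120.2, "guide_rail_right": 119.8, "filler_height": 306.4}
    return simulated[change_point]

def changeover_ok():
    """Return True only if every change point is within its recipe tolerance."""
    ready = True
    for name, (target, tol) in RECIPE.items():
        actual = read_position(name)
        if abs(actual - target) > tol:
            print(f"{name}: {actual:.1f} mm is outside {target} +/- {tol} mm")
            ready = False
    return ready

# The machine start interlock is released only when every position checks out.
print("release machine start" if changeover_ok() else "hold machine start")
```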

Preventative maintenance and condition monitoring

Preventative maintenance is achievable with the assistance of sensors, technology, and systems. Using condition monitoring for motors, pumps, and other critical components can help reduce the need for maintenance and notably improve its effectiveness with custom alerts and notifications, supported by a database and graphing functions.

A repeatable maintenance routine based on condition monitoring data and using a system to guide machine changeover will prolong machine life and potentially eliminate downtime altogether.

For more, read this real-world application story, including an automated format change to eliminate human error, reduce waste and decrease downtime.

Navigating the Automotive Plant for Automation Opportunities

When first looking at an automotive manufacturing plant, the thought of identifying opportunities for automation may seem overwhelming.

These plants are multi-functional and complex. A typical plant manages several processes, such as:

    • Press, stamping, and die automation
    • Welding, joining, and body in white
    • Painting
    • Final assembly
    • Robot cell
    • Material handling, including AGV, conveyor, and ASRS
    • Engine and powertrain assembly
    • Casting and machining parts
    • EV and EV sub-processes

Navigating the complexity of the automation processes in your plant to promote more automation products will take some time. You will have to approach this task in terms of:

Time. When tackling a large automotive plant, it’s important to understand how to dissect it into smaller parts and spread out your strategies over a full year or two.

Understanding. Probably the most important thing is to understand the processes and flow of the build assembly process in a plant and then to list the strategic products that can be of use in each area.

Prioritizing. Once you have a good understanding of the plant processes and a strategic timeline to present these technologies, the next step is to prioritize your time and the technology toward the highest return on investment. You may learn, for example, that your company could use a large number of weld cables and weld sensors, so this would be your starting point for presenting this new automation technology.

Knowing who to talk to in the plant. The key to getting the best return on your time and fast approval of your automation technology is knowing the key people in the plant who can influence the use of new automation technology. Typically, you should identify and communicate monthly with engineering groups, process improvement groups, maintenance groups, and the purchasing and quality departments. Narrowing your focus to specific groups or individuals can help you get technology approved faster. Don’t feel like you need to know everyone in the plant, just the key individuals.

Knowing what subjects to discuss. Don’t just think MRO! Talk about all five opportunities for bringing new automation technology into your plant:

        1. MRO
        2. Large programs and specs
        3. Project upgrades
        4. Training
        5. VMI/vending

Most people concentrate on the MRO business and don’t engage in discussions to find out these other ways to introduce automation technology in your plant. Concentrating on all five of these opportunities will lead to placing a lot of automation in the plant for a very long time.

So, when you look at your plant, be excited about all the opportunities to introduce automation throughout it and watch your technology soar to new levels of manufacturing excellence.

Good luck as you begin implementing your expansion of automation technology.

Understanding Image Processing Standards and Their Benefits

In the industrial image processing world, there are standards – GenICam, GigE Vision, and USB3 Vision – that are similar to the USB and Ethernet standards used in consumer products. What do these image processing standards mean, and what are their benefits?

The GenICam standard, which is maintained by the European Machine Vision Association (EMVA), serves as a base for all the image processing standards. This standard abstracts the user access to the features of a camera and defines the Standard Feature Naming Convention (SFNC) that all manufacturers use so that common feature names are used to describe the same functions.

Additionally, manufacturers can add specific “Quality of Implementation” features outside of the SFNC definitions to differentiate their products from ones made by other manufacturers. For example, a camera can offer specific features like frame average, flat field correction, logic gates, etc. GenICam/GigE Vision-based driver and software solutions from other manufacturers can also use these features without any problem.
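
Because the SFNC guarantees common feature names, vendor-neutral software can address any compliant camera by those names. The sketch below uses the open-source Python library harvesters as one example; the .cti producer path is a placeholder, and the exact method names differ between library versions, so treat the details as assumptions rather than a definitive recipe.

```python
# Minimal sketch: accessing SFNC-named camera features through GenICam using the
# open-source "harvesters" library. The .cti producer path is a placeholder and
# method names can differ between harvesters versions.
from harvesters.core import Harvester

h = Harvester()
h.add_file("/path/to/vendor_gentl_producer.cti")   # GenTL producer from any vendor
h.update()

ia = h.create_image_acquirer(0)                     # first discovered camera

# SFNC feature names are the same regardless of manufacturer
node_map = ia.remote_device.node_map
node_map.ExposureTime.value = 10000.0               # microseconds
node_map.Gain.value = 2.0

ia.start_acquisition()
with ia.fetch_buffer() as buffer:
    component = buffer.payload.components[0]
    print("frame:", component.width, "x", component.height)
ia.stop_acquisition()

ia.destroy()
h.reset()
```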

“On-the-wire” standards

USB3 Vision and GigE Vision are “on-the-wire” interfaces between the driver and the camera. These standards are maintained by the Automated Imaging Association (AIA). You are probably familiar with “on-the-wire” standards and their advantages if you have used plug-and-play devices like USB memory sticks, USB mice, or USB hard disks. They work together without any problem, even if they are made by different manufacturers. It’s the same thing with GenICam/GigE Vision/USB3 Vision-based driver/software solutions. The standards define a transport layer, which controls device detection, configuration (register access), data streaming, and event handling, and connects the interface to GenICam (Figure 1).

USB3 Vision builds on the GigE Vision standard by including accessories like cables. The mechanics are part of the standard and define lockable cable interfaces, for example. This creates a more robust interface for manufacturing environments.

Are standards a must-have?

Technically, standards aren’t necessary. But they make it possible to use products from multiple manufacturers and make devices more useful in the long term. For a historical comparison, look at USB 2.0 cameras and GigE Vision. USB 2.0 industrial cameras were introduced in 2004 and only worked with proprietary drivers (Figure 2), with proprietary interfaces between the client and the vision library/SDK and between the driver and the camera. Two years later, Gigabit Ethernet cameras were introduced with the GigE Vision image processing standard, which didn’t require proprietary drivers to operate.

In the case of a system crash, users of the USB 2.0 cameras wouldn’t know whether the proprietary driver or the software library was to blame, which made them difficult to support. During the decision phase of selecting sensors and support, the customer had to keep the manufacturer’s product portfolio in mind to meet their specifications. Afterward, the application was implemented and only worked with the proprietary interfaces of that manufacturer. For future projects or adaptations – for example, if a new sensor was required – it was necessary for the manufacturer to offer this sensor. Otherwise, it was necessary to change the manufacturer, which meant that a new implementation of the software was necessary as well. In contrast, flexibility is a big advantage with Gigabit Ethernet cameras and GigE Vision: GigE Vision-compliant cameras can be used interchangeably without regard to the manufacturer.

Despite this obvious benefit, USB cameras are more prevalent in certain image processing fields like medicine, where the application defines the camera’s sensor resolution, image format, and image frequency (bandwidth), and the environment determines factors such as cable length and whether a frame grabber or digital camera solution is used. With such tightly defined requirements, USB cameras solve the challenges of these applications.

It’s hard to believe, but a few years ago there weren’t any standards in the image processing market. Each manufacturer had its own solution. These times are gone – the whole market has pulled together, to the benefit of customers. Because of the standards, the interaction between hardware, driver, and software feels like a single, uniform piece, and the overall quality of the market has improved. For the customer, it is easier to make product decisions since they are not locked into one company’s portfolio. With standards-compliant products, the customer can always choose the best components, independent of the company. With GenICam as a base, the image processing market offers the best interface for every application, either with GigE Vision or USB3 Vision.

Using Guided Format Change to Improve Changeover and Productivity

Long before Covid, we were seeing an increase in the number of packaging SKUs. In 2019, Packaging Digest reported an estimated 42% increase in SKUs in the food and beverage industry.

Since Covid, there has been a further explosion of new packaging sizes, especially in the food and beverage marketplace. Food manufacturers have gotten very creative. Instead of raising prices due to the higher costs of goods, for example, they can reduce the size of product packages while keeping the consumer prices the same.

Many of today’s production machines are not equipped to change over quickly and accurately enough to meet the demands of the marketplace. Manufacturers now face the challenge of “semi-automating” their existing machines, as opposed to procuring new machines or adding expensive motors to existing ones. One solution is to digitalize change points on existing machines.

Companies are looking to reduce the amount of time and the mistakes that occur when doing product changeovers. Adding operator guidance and position measurement can reduce the time and improve the accuracy of those changeover events. Measurements are then tied to the recipe, and the operator becomes the prime mover.

Guided Format Change

There are lots of technologies out there for helping with guided format change, such as automated position measurement, machine position, distance measurement, linear measurement, and digitalized rotary encoders.


As you are likely quite aware, there are often scales, marks, etc., written onto machines that don’t provide the greatest degree of accuracy. Introducing digitalized position and distance sensing can help you reduce time and limit errors during changeovers.

Change Part Identification

The other side of changeover is change part identification. Quite often during this process, parts on the machine must be exchanged. Using the wrong change part can result in mistakes, waste, and delays, and can even damage existing machines.

Technologies, such as RFID, can help ensure the correct change part is chosen and added to the machine. During a recipe change, the operator can then validate that all the correct parts are installed before the startup of the next product run.
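
Here is a minimal sketch of that validation step. The recipe part list, position names, and the read_rfid_tag() helper are hypothetical; in practice the IDs would come from RFID read heads at each change position, and the startup interlock would be handled by the machine control.

```python
# Minimal sketch of change-part validation with RFID: before startup, every tag
# read at the change positions must match the part list in the active recipe.
# read_rfid_tag() is a hypothetical stand-in for your RFID read head interface.

RECIPE_PARTS = {                 # change position -> expected part ID for this recipe
    "infeed_star_wheel": "PART-0451",
    "capping_chuck":     "PART-0783",
    "discharge_guide":   "PART-0212",
}

def read_rfid_tag(position):
    """Hypothetical: return the ID stored on the tag of the part mounted here."""
    simulated = {"infeed_star_wheel": "PART-0451",
                 "capping_chuck":     "PART-0990",   # wrong part installed
                 "discharge_guide":   "PART-0212"}
    return simulated[position]

all_correct = True
for position, expected in RECIPE_PARTS.items():
    found = read_rfid_tag(position)
    if found != expected:
        print(f"{position}: found {found}, recipe requires {expected}")
        all_correct = False

print("release for startup" if all_correct else "hold startup until parts are corrected")
```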

Guided format change is a cost-effective way to reduce changeover time and increase productivity, whether by retrofitting your existing machines or specifying it on new ones.

Security in the World of the Industrial Internet of Things

The Industrial Internet of Things (IIoT) is becoming an indispensable part of the manufacturing industry, leading to real-time monitoring and an increase in overall equipment effectiveness (OEE) and productivity. Since machines are being connected to the intranet, and sometimes to the Internet for remote monitoring, these now-connected devices bring a new set of challenges and security concerns.

What causes security to be so different between OT and IT?

Operational Technology (OT) manufacturing equipment is meant to run 24/7. So, if a bug is found that requires a machine to be shut down for an update, that stop causes a loss in productivity. As a result, manufacturers can’t rely on updating operational equipment as frequently as their Information Technology (IT) counterparts do.

Additionally, the approach of security for OT machines has largely been “security through obscurity.” If, for example, a machine is not connected to the network, then the only way to access the hardware is to access it physically.

Another reason is that OT equipment can have a working lifetime that spans decades, compared to the typical 2-5-year service life of IT equipment. And when you add new technology, the old OT equipment becomes almost impossible to update to the latest security patches without the effort and expense of upgrading the hardware. Since OT equipment is in operation for such a long time, it makes sense that OT security focuses on keeping equipment working continuously as designed, where IT is more focused on keeping data available and protected.

These different purposes make it hard to implement IT standards on OT infrastructure. That being said, according to Gartner’s 80/20 rule of thumb, 80 percent of the security issues faced in the OT environment are the same as those faced by IT, while 20 percent are domain specific, involving critical assets, people, or the environment. With so many security issues in common, and so many practical differences, what is the best approach?

The solution

The difference in operating philosophy and goals between IT and OT systems makes it necessary to consider IIoT security carefully when implementing these systems. Typical blanket IT security systems can’t be applied to OT systems, like PLCs or other control architecture, because these systems do not have built-in security features like firewalls.

We need the benefits of IIoT, but how do we overcome the security concerns?

The best solution practiced by the manufacturing industry is to separate these systems: The control side is left to the existing network infrastructure, and IT-focused work like monitoring is carried out on a newly added infrastructure.

The benefit of this method is that the control side is again secured by the method it was designed for – “security by obscurity” – and the new monitoring infrastructure can take advantage of the faster developments and updates of the IT lifecycle. This way, the operational and information technology sides don’t interfere with each other.

Choosing Between M18 and Flatpack Proxes

Both M18s and flatpacks are inductive proximity sensors that are widely used in mechanical engineering and industrial automation applications. Generally, they are similar in that they produce an electromagnetic field that reacts to a metal target as it approaches the sensor head. And the coil in both sensors is roughly the same size, so they have the same sensing range – between 5 and 8 millimeters. They also both work well in harsh environments, such as welding.

There are, however, some specific differences between the M18 and flatpack sensors that are worth consideration when setting up production.

M18

One benefit of the M18 sensor is that it’s adjustable. It has threads around it that allow you to adjust it up or down one millimeter every time you turn it 360 degrees. The M18 can take up a lot of space in a fixture, however. It has a standard length of around two inches and, once you add a connector, it can be a problem where space is an issue.

Flatpack

A flatpack, on the other hand, has a more compact style and format while offering the same sensing range. The mounting of the flatpack provides a fixed distance, so it offers less adjustability than the M18, but its small size delivers flexibility in installation and allows use in much tighter fixtures and positions.

The flatpack also comes with a ceramic face and a welding cable, especially suited for harsh and demanding applications. You can also get it with a special glass composite protective face, a stainless-steel face, or a steel face with special coatings on it.

Each housing has its place, based on your detection application, of course. But having them both in your portfolio can expand your ability to solve your applications with sensor specificity.

Check out this previous blog for more information on inductive sensors and their unlimited uses in automation.

Predictive Maintenance vs. Predictive Analytics: What’s the Difference?

With more and more customers getting onboard with IIoT applications in their plants, a new era of efficiency is just around the corner. Automation for maintenance is on the rise thanks to a shortage of qualified maintenance techs coinciding with a desire for more efficient maintenance, reduced downtime, and the inroads IT is making on the plant floor. Predictive Maintenance and Predictive Analytics are part of almost every conversation in manufacturing these days, and the terms are often used interchangeably.

This blog is intended to make a clear distinction between these phrases and put into perspective the benefits that maintenance automation brings to plant management and decision-makers, so they can bring focused innovation to their plants and boost efficiencies throughout them.

Before we jump into the meat of the topic, let’s quickly review the earlier stages of the maintenance continuum.

Reactive and Preventative approaches

The Reactive and Preventative approaches are the most commonly used in the maintenance continuum. With a Reactive approach, we basically run the machine or line until a failure occurs. This is the most efficient approach, with the least downtime, while the machine or line runs. Unfortunately, when the machine or line comes to a screeching stop, it presents us with the most costly of downtimes in terms of time wasted and the cost of machine repairs.

The Preventative approach calls for scheduled maintenance on the machine or line to avoid impending machine failures and reduce unplanned downtimes. Unfortunately, the Preventative maintenance strategy does not catch approximately 80% of machine failures. Of course, the Preventative approach is not a complete waste of time and money; regular tune-ups help the operations run smoother compared to the Reactive strategy.

Predictive Maintenance vs. Predictive Analytics

As more companies implement IIoT solutions, data has become exponentially more important to the way we automate machines and processes within a production plant, including maintenance processes. The idea behind Predictive Maintenance (PdM), aka condition-based maintenance, is that by frequently monitoring critical components of the machine, such as motors, pumps, or bearings, we can predict the impending failures of those components over time. Hence, we can prevent the failures by scheduling planned downtime to service machines or components in question. We take action based on predictive conditions or observations. The duration between the monitored condition and the action taken is much shorter here than in the Predictive Analytics approach.
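
In code, the core of condition-based maintenance can be as simple as comparing a monitored value against warning and critical limits and requesting service when a limit is crossed. The sketch below uses vibration as the example; the threshold values are illustrative assumptions, not figures from any standard.

```python
# Minimal sketch of condition-based (predictive) maintenance: watch a monitored
# value, such as bearing vibration, and request planned downtime as soon as it
# crosses a warning threshold. Threshold values are illustrative assumptions.
WARNING = 4.5      # e.g. mm/s RMS vibration
CRITICAL = 7.1

def evaluate(vibration_mm_s):
    if vibration_mm_s >= CRITICAL:
        return "stop and service now"
    if vibration_mm_s >= WARNING:
        return "schedule planned downtime to service this component"
    return "ok"

for reading in [2.1, 2.3, 3.0, 4.8, 5.2]:
    print(reading, "->", evaluate(reading))
```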

Predictive Analytics, the next higher level on the maintenance continuum, refers to collecting the condition-based data over time, marrying it with expert knowledge of the system, and finally applying machine learning or artificial intelligence to predict the event or failure in the future. This can help avoid the failure altogether. Of course, it depends on the data sets we track, for how long, and how good our expert knowledge systems are.
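
The predictive-analytics step goes further: take the condition data collected over time, fit a model to the trend, and estimate when a failure threshold will be reached so the event can be avoided. The sketch below uses a simple linear fit with NumPy; real systems combine richer models with expert knowledge, and all numbers here are illustrative.

```python
# Minimal sketch of the predictive-analytics step: fit a trend to condition data
# collected over time and estimate when the failure threshold will be reached.
# The linear trend, readings, and threshold are illustrative assumptions.
import numpy as np

days = np.array([0, 30, 60, 90, 120, 150], dtype=float)   # inspection history
vibration = np.array([2.0, 2.3, 2.9, 3.4, 4.1, 4.7])      # monitored condition
FAILURE_THRESHOLD = 7.1

slope, intercept = np.polyfit(days, vibration, 1)          # fit a linear trend
if slope > 0:
    days_to_threshold = (FAILURE_THRESHOLD - intercept) / slope
    print(f"threshold predicted to be reached around day {days_to_threshold:.0f}")
else:
    print("no upward trend detected")
```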

So, the difference between Predictive Maintenance and Predictive Analytics, among other things, is the time between condition and action. In short, Predictive Maintenance is a stepping-stone to Predictive Analytics. Once in place, the system monitors and learns from the patterns to provide input on improving the system’s longevity and uptime. Predictive Maintenance or Preventative Maintenance alone does not add value in that respect.

While Preventative Maintenance and Predictive Maintenance promise shorter unplanned downtimes, Predictive Analytics promises avoidance of unplanned downtime and the reduction of planned downtime.

The first step to improving your plant floor OEE is monitoring the condition of the critical assets in the factory and collecting data about failures.

Looking Into & Through Transparent Material With Photoelectric Sensors

Advanced automated manufacturing relies on sensor equipment to ensure each step of the process is done correctly, reliably, and effectively. For many standard applications, inductive, capacitive, or basic photoelectric sensors can do a fine job of monitoring and maintaining the automated manufacturing process. However, when transparent materials are the target, you need a different type of sensor, and you may even need to think differently about how you will use it.

What are transparent materials?

When I think of transparent materials, water, glass, plexiglass, polymers, soaps, cooling agents, and packaging all come to mind. Because transparent material absorbs very little of the emitted red LED light, standard photoelectric sensors struggle with this type of application. If light can make its way back to the receiver, how can you tell whether the beam was broken or not? By measuring the amount of light returned, instead of just whether it is there or not, we can detect a transparent material and learn how transparent it is.

Imagine being able to determine proper mixes or thicknesses of liquid based on a transparency scale associated with a value of returned light. Another application that I believe a transparent material photoelectric sensor would be ideal for is measuring the wall thickness of a clear bottle. Imagine the wall thickness being crucial to the integrity of the bottle. Again, we would measure the amount of light allowed back to the receiver instead of using an expensive measurement laser or, even worse, a time-draining manual caliper.
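
A rough sketch of that measuring idea follows: instead of a binary beam-broken decision, the logic classifies the target by the fraction of emitted light that returns to the receiver. The raw values, thresholds, and categories are illustrative assumptions; how a real sensor reports its returned-light value depends on the model and interface.

```python
# Minimal sketch: classify a transparent target by how much emitted light makes
# it back to the receiver, rather than a simple beam-broken/not-broken decision.
# The returned-light values and thresholds are illustrative assumptions.
EMPTY_BEAM = 1000       # returned-light value with nothing in the beam

def classify(returned_light):
    ratio = returned_light / EMPTY_BEAM
    if ratio > 0.95:
        return "no target in beam"
    if ratio > 0.60:
        return "thin / highly transparent target"
    if ratio > 0.20:
        return "thicker or less transparent target"
    return "opaque target"

for value in [990, 820, 450, 90]:
    print(value, "->", classify(value))
```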

Transparent material sensor vs. standard photoelectric sensor

So how does a transparent material sensor differ from a standard photoelectric sensor? Usually, the type of light is key. UV light is absorbed much more strongly than other wavelengths, like the red or blue LED light you find in standard photoelectric sensors. To add another level, you polarize that UV light to better control the light returning to the receiver. Polarized UV light with a polarized reflector is the best combination. This can be done on a large or micro scale based on the sensor head size and build.

Uses for transparent material sensors include packaging trays, level tubes, medical tests, adhesive extrusion, and bottle fill levels, just to name a few. Transparent materials are everywhere, and the technology has matured. Make sure you are looking into specialized sensor technologies and working through best setup practices to ensure reliable detection of transparent materials.