Embedded Vision – What It Is and How It Works

Embedded vision is a rapidly growing field that combines computer vision and embedded systems with cameras or other imaging sensors, enabling devices to interpret and understand the visual world around them, much as humans do. With broad applications, this technology is expected to change how we interact with devices and the world around us, and it will likely play a major role in the Internet of Things and the Industry 4.0 revolution.

Embedded vision uses computer vision algorithms and techniques to process visual information on devices with limited computational resources, such as embedded systems or mobile devices. These systems use cameras or other imaging sensors to acquire visual data and perform tasks on that data, such as image or video processing, object detection, and image analysis.

Applications for embedded vision systems

Among the many applications that use embedded vision systems are:

    • Industrial automation and inspection
    • Medical and biomedical imaging
    • Surveillance and security systems
    • Robotics and drones
    • Automotive and transportation systems

Hardware and software for embedded vision systems

Embedded vision systems typically use a combination of software and hardware to perform their tasks. On the hardware side, embedded vision systems often use special-purpose processors, such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs), to perform the heavy lifting of image and video processing. On the software side, they typically use libraries or frameworks that provide pre-built functions for tasks such as image filtering, object detection, and feature extraction. Common software libraries and frameworks for embedded vision include OpenCV, MATLAB, and Halcon.
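As a concrete illustration, here is a minimal sketch of that kind of pre-built functionality using OpenCV's Python bindings. The file name and thresholds are placeholders; a real pipeline on an embedded target would be tuned to the sensor and the task at hand.

```python
# Minimal sketch: basic image filtering and contour detection with OpenCV.
# "part.png" is a placeholder file name; threshold values are illustrative only.
import cv2

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("part.png not found")

# Image filtering: smooth the image to suppress sensor noise.
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Feature extraction: detect edges, then find object contours.
edges = cv2.Canny(blurred, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Simple "object detection": report blobs above a minimum area.
for contour in contours:
    area = cv2.contourArea(contour)
    if area > 500:
        x, y, w, h = cv2.boundingRect(contour)
        print(f"Object at ({x}, {y}), size {w}x{h}, area {area:.0f}")
```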

It’s also important to note that the field of embedded vision is active and fast moving, with new architectures, chipsets, and software libraries appearing regularly, making the technology more available and accessible to a broader range of applications, devices, and users.

Embedded vision components

The main parts of embedded vision include:

    1. Processor platforms are typically specialized for handling the high computational demands of image and video processing. They may include digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs).
    2. Camera components refer to imaging sensors that acquire visual data. These sensors can include traditional digital cameras and specialized sensors such as stereo cameras, thermal cameras, etc.
    3. Accessories and carrier boards include the various additional hardware and components that interface the camera with the processor and other peripherals. Examples include memory cards, power supplies, and IO connectors.
    4. Housing and mechanics are the physical enclosures of the embedded vision system, including the mechanics that hold the camera, processor, and other components in place, and the housing that protects the system from external factors such as dust and water.
    5. The operating system runs on the processor. It could be a custom firmware or a general-purpose operating system, like Linux or Windows.
    6. Application software is the code that runs on the embedded vision system to perform tasks such as image processing, object detection, and feature extraction. It is often written in a combination of high-level languages, such as C++ and Python, and lower-level languages such as C.
    7. Feasibility studies, conducted before development of an embedded vision system begins, evaluate a proposed solution’s technical and economic feasibility and identify any risks or limitations that could arise during development.
    8. Integration interfaces refer to the process of integrating the various components of the embedded vision system and interfacing it with other systems or devices. This can include integrating the camera, processor, and other hardware and developing software interfaces to enable communication between the embedded vision system and other systems.

Learn more here about selecting the most efficient and cost-effective vision product for your project or application.

Using Guided Changeover to Reduce Maintenance Costs, Downtime

A guided changeover system can drastically reduce the errors involved with machine operation, especially when added to machines using fully automated changeovers. Processing multiple parts and recipes during a production routine requires a range of machines, and tolerances are important to quantify. Relying only on the human element is detrimental to profits, machine maintenance, and production volumes. Implementing operator assistance with visual guidance will reveal inefficiencies and allow for vast improvements.

Removing human error

Unverified manual adjustments may cause machine fatigue or failure. In a traditional manual changeover system, the frequency of machine maintenance is greater if proper tolerances are not observed at each changeover. Using IO-Link can remove the variable of human error with step-by-step instructions paired with precise sensors in closed-loop feedback. The machine can start up and run only when all parts are in the correct position.
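As an illustration of that closed-loop idea, here is a hedged sketch of a changeover interlock: read each monitored position, compare it with the recipe, and only report "ready" when everything is within tolerance. The recipe values, the simulated readings, and the read_position() helper are hypothetical placeholders, not a specific IO-Link API.

```python
# Illustrative sketch of a guided-changeover interlock: the machine may start
# only when every monitored position is within the recipe's tolerance.

RECIPE = {            # target position (mm) and allowed tolerance per axis
    "guide_rail": {"target": 120.0, "tolerance": 0.5},
    "side_guide": {"target":  85.0, "tolerance": 0.5},
    "stop_gate":  {"target":  42.0, "tolerance": 0.2},
}

# Simulated readings for illustration; a real system would read these from an
# IO-Link master or other fieldbus interface.
SIMULATED_READINGS = {"guide_rail": 120.2, "side_guide": 86.1, "stop_gate": 42.1}

def read_position(sensor_name: str) -> float:
    return SIMULATED_READINGS[sensor_name]

def changeover_ok(recipe: dict) -> bool:
    ready = True
    for name, spec in recipe.items():
        error = abs(read_position(name) - spec["target"])
        if error > spec["tolerance"]:
            print(f"{name}: off by {error:.2f} mm, adjust toward {spec['target']} mm")
            ready = False
    return ready

print("start permitted" if changeover_ok(RECIPE) else "start blocked until corrected")
```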

Preventative maintenance and condition monitoring

Preventative maintenance is achievable with the assistance of sensors, technology, and systems. Using condition monitoring on motors, pumps, and other critical components can help prevent unplanned maintenance and notably improve maintenance effectiveness through custom alerts and notifications, backed by a database and graphing functions.

A repeatable maintenance routine based on condition monitoring data and using a system to guide machine changeover will prolong machine life and potentially eliminate downtime altogether.
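For a sense of what such alerts can look like in software, here is a minimal sketch of a threshold check on condition monitoring readings. The variable names and limits are illustrative assumptions; real limits come from baseline measurements or vendor guidance.

```python
# Illustrative threshold check on condition monitoring data (values and limits
# are made up for the example).
ALERT_LIMITS = {"vibration_mm_s": 4.5, "temperature_c": 70.0}

def check_asset(readings: dict) -> list[str]:
    """Return an alert message for each reading that exceeds its limit."""
    alerts = []
    for key, limit in ALERT_LIMITS.items():
        value = readings.get(key)
        if value is not None and value > limit:
            alerts.append(f"{key} = {value} exceeds limit {limit}")
    return alerts

print(check_asset({"vibration_mm_s": 5.1, "temperature_c": 63.0}))
# ['vibration_mm_s = 5.1 exceeds limit 4.5']
```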

For more, read this real-world application story, including an automated format change to eliminate human error, reduce waste and decrease downtime.

Automation Insights: Top Blogs From 2022

It’s an understatement to say 2022 had its challenges. But looking back at the supply chain disruptions, inflation, and other trials threatening success in many industries, including manufacturing, there were practical insights we can benefit from as we dive into 2023. Below are the most popular blogs from last year’s Automation Insights site.

    1. Evolution of Pneumatic Cylinder Sensors

Today’s pneumatic cylinders are compact, reliable, and cost-effective prime movers for automated equipment. They’re used in many industrial applications, such as machinery, material handling, assembly, robotics, and medical. One challenge facing OEMs, integrators, and end users is how to detect reliably whether the cylinder is fully extended, retracted, or positioned somewhere in between before allowing machine movement.

Read more.

    2. Series: Condition Monitoring & Predictive Maintenance

By analyzing which symptoms of failure are likely to appear in the predictive domain for a given piece of equipment, you can determine which failure indicators to prioritize in your own condition monitoring and predictive maintenance discussions.

Read the series.

    3. Know Your RFID Frequency Basics

In 2008 I purchased my first toll road RFID transponder, letting me drive through and pay my toll without stopping at a booth. This was my first real-life exposure to RFID, and it was magical. Back then, all I knew was that RFID stood for “radio frequency identification” and that it exchanged data between a transmitter and receiver using radio waves. That’s enough for a highway driver, but you’ll need more information to use RFID in an industrial automation setting. So here are some basics on what makes up an RFID system and the uses of different radio frequencies.

Read more.

    4. IO-Link Event Data: How Sensors Tell You How They’re Doing

I have been working with IO-Link for more than 10 years, so I’ve heard lots of questions about how it works. One line of questions I hear from customers is about the operating condition of sensors. “I wish I knew when the IO-Link device loses output power,” or, “I wish my IO-Link photoelectric sensor would let me know when the lens is dirty.” The good news is that it does give you this information by sending Event Data. That’s a type of data that is usually not a focus of users, although it is available in JSON format from the REST API.

Read more.
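As a rough sketch of how that Event Data might be pulled from an IO-Link master's REST API in JSON form, here is a hypothetical example. The IP address, URL path, and field names are assumptions and will differ by vendor, so treat this as an outline rather than a working endpoint.

```python
# Hypothetical sketch: polling an IO-Link master's REST API for event data.
# The URL and JSON field names are placeholders; consult your master's API docs.
import requests

MASTER = "http://192.168.1.50"          # example IO-Link master address
resp = requests.get(f"{MASTER}/iolink/v1/devices/port1/events", timeout=2)
resp.raise_for_status()

for event in resp.json().get("events", []):
    # e.g. an "appears" event with a vendor-specific code for a dirty lens
    print(event.get("severity"), event.get("code"), event.get("message"))
```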

    5. Converting Analog Signals to Digital for Improved Performance

We live in an analog world, where we experience temperatures, pressures, sounds, colors, etc., in seemingly infinite values. There are infinite temperature values between 70 and 71 degrees, for example, and an infinite number of pressure values between 50 and 51 psi.

Read more.
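A small worked example shows what digitizing one of those infinite analog values looks like: assuming a 12-bit converter over a 0-10 V input range, every reading collapses to one of 4,096 discrete counts.

```python
# Worked example: mapping an analog value to discrete counts with a 12-bit ADC.
# Assumes a 0-10 V input range; real scaling depends on the device.
FULL_SCALE_V = 10.0
BITS = 12
COUNTS = 2 ** BITS             # 4096 discrete steps
LSB_V = FULL_SCALE_V / COUNTS  # ~2.44 mV per count

def to_counts(voltage: float) -> int:
    return min(COUNTS - 1, max(0, round(voltage / LSB_V)))

print(to_counts(7.37))   # 3019 counts
print(LSB_V * 1000)      # resolution in millivolts: ~2.44
```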

We appreciate your dedication to Automation Insights in 2022 and look forward to growth and innovation in 2023.

Simplified Sensing Over a Complex Headache

The constant need for more data and higher accuracy has pushed sensing technologies to the extreme. Advancements in factory automation have created a perfect storm of innovation and new capabilities. This is probably an unpopular opinion, but do we always need all of this?

I started my career in factory automation in the late 90s. This was a time of technology transitions. PLCs had been around for ages but had never been so affordable. Technologies, such as time-of-flight laser measurement, industrial cameras, and inductive coupling, were new and exciting, and they were becoming more affordable, too.

As a controls engineer, I remember using these advanced technologies and systems as a way of keeping my projects future-proof – or so I thought. In reality, sometimes they just made things more complicated.

Let me explain this using an example where tried, true and affordable sensors could have made the project more reliable and future-proof from the start.

Photoelectric sensors have earned their place in the automation hall of fame. I don’t see a time when they won’t be needed as a reliable way to perform presence detection.

I was working on a project that required several washing machine cabinet bases to be tracked, counted, and oriented correctly on a conveyor. I wanted to use an industrial camera because the technology was getting better and better. I paid $7,000 for the camera and accessories. After several days and iterations, the camera system was working perfectly. It continued working for about a week before it was knocked out of alignment by a production worker who was using it as a leaning post. It took another day or so to dial it back in.

Tried, true and affordable win out

The solution I ultimately chose was the easiest. I strategically placed seven basic photoeyes underneath the conveyor to identify which base the system was looking at and whether certain characteristics were present for quality tracking. My investment was around $400, and it was well protected from failure. And, if a sensor went out, rather than calling an engineer in the middle of the night, a maintenance electrician could simply replace it with a new one.
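To make the idea concrete, here is a hedged sketch of how a handful of discrete photoeye states can identify a base and its orientation. The seven-bit patterns and labels are invented for illustration; in practice each pattern would be taught from a known-good part sitting over the photoeyes.

```python
# Illustrative sketch: identifying a part from discrete photoeye states.
# The bit patterns below are made up for the example.
KNOWN_BASES = {
    (1, 1, 0, 0, 1, 0, 1): "24-inch base, correct orientation",
    (1, 0, 1, 0, 1, 0, 1): "27-inch base, correct orientation",
    (1, 0, 1, 0, 1, 1, 0): "27-inch base, rotated 180 degrees",
}

def identify(photoeyes: tuple) -> str:
    return KNOWN_BASES.get(photoeyes, "unknown base or misaligned part")

print(identify((1, 0, 1, 0, 1, 0, 1)))  # 27-inch base, correct orientation
```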

Another huge benefit of using photoeyes was the avoidance of buyer’s remorse. Camera technology is always evolving. From one day to the next, cameras get better and more capable, but they may also come with proprietary communications or software. Basic photoelectric sensors with a PNP or NPN output can easily be swapped out with almost any brand for decades to come.

Keep it simple

At the end of the day, sometimes it is best to keep the solution simple, clean, and backed by the tried-and-true technologies in factory automation. Next time you dig into a project, take a moment to think about my example. Boil the solution down to the lowest common denominator and build up the complexity from there. You might save more than money; you might save a headache or two.

Securing Your Supply Chain and Beefing Up Traceability

 

Snake oil is one of the most maligned products in all of history. Synonymous with cure-alls and quackery, it is a useless rip-off, right? Well, no, it’s actually high in the omega-3 fatty acids EPA and DHA.

Snake oil fell from prominence because it was all too easy for charlatans to brew up fake oil and pass it off as the genuine article, with sometimes dangerous outcomes.

Today’s customers are smarter than ever and waking up to ever-evolving knockoffs. We are more aware of fake reviews and fake products. Brands that can prove their products are genuine can command higher prices and forge long-standing customer relationships. This starts with securing your supply chain and beefing up traceability.

Securing your brand

Many roads lead to Rome and no single technology will be the one silver bullet to secure your supply chain. That said, RFID technology is likely to play an important role. RFID allows for multi-read without a line of sight, making it a great choice in both production and warehouse/logistics environments. Perhaps more importantly, RFID tags can be encrypted. This adds protection against would-be cheats. The ability to both read and write provides additional flexibility for tracking and tracing in production.

RFID is not the only traceability solution, and smart companies will use a combination of technologies to secure their brands. We’ve seen holograms on baseball cards and QR codes on underwear. We’ve seen authorized retailer programs … and RFID on coffee cups and medical devices. As you think through the various options, it’s worth keeping the following four questions in mind:

      1. Is the technology secure? Does it support modern cryptographic methods?
      2. Does the solution add value – i.e. improve current processes?
      3. Is the technology future-proof?
      4. Is the technology robust?

Any technology that answers yes to these questions will be well-suited to meet this new market. Brands that stay ahead of the curve will grow and those who fall behind the curve risk ending up in the dustbin – right next to the snake oil.

Navigating the Automotive Plant for Automation Opportunities

When you first look at an automotive manufacturing plant, the thought of identifying opportunities for automation may be overwhelming.

These plants are multi-functional and complex. A typical plant manages several processes, such as:

    • Press, stamping, and die automation
    • Welding, joining, and body in white
    • Painting
    • Final assembly
    • Robot cell
    • Material handling, including AGV, conveyor, and ASRS
    • Engine and powertrain assembly
    • Casting and machining parts
    • EV and EV sub-processes

Navigating the complexity of the automation processes in your plant to promote more automation products will take some time. You will have to break this task down as follows:

Time. When tackling a large automotive plant, it’s important to understand how to dissect it into smaller parts and spread out your strategies over a full year or two.

Understanding. Probably the most important thing is to understand the processes and flow of the build assembly process in a plant and then to list the strategic products that can be of use in each area.

Prioritizing. Once you have a good understanding of the plant processes and a strategic timeline to present these technologies, the next step is to prioritize your time and the technology toward the highest return on investment. You may learn, for example, that your company could use a great many weld cables and weld sensors, so this would be your starting point for presenting this new automation technology.

Knowing who to talk to in the plant. The key to getting the best return on your time and fast approval of your automation technology is knowing the key people in the plant who can influence the use of new automation technology. Typically, you should know, list, and communicate monthly with engineering groups, process improvement groups, maintenance groups, and purchasing and quality departments. Narrowing your focus to specific groups or individuals can help you get technology approved faster. Don’t feel like you need to know everyone in the plant, just the key individuals.

Knowing what subjects to discuss. Don’t just think MRO! Talk about the five opportunities for bringing new automation technology into your plant:

        1. MRO
        2. Large programs and specs
        3. Project upgrades
        4. Training
        5. VMI/vending

Most people concentrate on the MRO business and don’t engage in discussions to uncover these other ways to introduce automation technology into the plant. Concentrating on all five of these opportunities will lead to placing a lot of automation in the plant for a very long time.

So, when you look at your plant, be excited about all the opportunities to present automation throughout it, and watch your technology soar to new levels of manufacturing excellence.

Good luck as you begin implementing your expansion of automation technology.

Understanding Image Processing Standards and Their Benefits

In the industrial image processing world, there are standards – GenICam, GigE Vision, and USB3 Vision – that are similar to the USB and Ethernet standards used in consumer products. What do these image processing standards mean, and what are their benefits?

The GenICam standard, which is maintained by the European Machine Vision Association (EMVA), serves as a base for all the image processing standards. This standard abstracts the user access to the features of a camera and defines the Standard Feature Naming Convention (SFNC) that all manufacturers use so that common feature names are used to describe the same functions.

Additionally, manufacturers can add specific “Quality of Implementation” features outside of the SFNC definitions to differentiate their products from ones made by other manufacturers. For example, a camera can offer specific features like frame average, flat field correction, logic gates, etc. GenICam/GigE Vision-based driver and software solutions from other manufacturers can also use these features without any problem.
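To show what the shared naming convention means in practice, here is a heavily hedged sketch of GenICam-style feature access. The node_map object is a stand-in for whatever GenTL producer or SDK you use; only the SFNC feature names themselves (Width, Height, ExposureTime, Gain) are the standardized part.

```python
# Hedged sketch of GenICam-style feature access. The node_map object is a
# placeholder for a real GenTL/SDK node map; the point is that SFNC feature
# names are the same across manufacturers.
def configure(node_map):
    node_map.Width.value = 1280            # SFNC: image width in pixels
    node_map.Height.value = 1024           # SFNC: image height in pixels
    node_map.ExposureTime.value = 5000.0   # SFNC: exposure time in microseconds
    node_map.Gain.value = 6.0              # SFNC: gain; units depend on the device
    # A vendor-specific "Quality of Implementation" feature outside the SFNC
    # would be accessed the same way, but under a manufacturer-chosen name.
```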

“On-the-wire” standards

USB3 Vision and GigE Vision are “on-the-wire” interfaces between the driver and the camera. These standards are maintained by the Automated Imaging Association (AIA). You are probably familiar with “on-the-wire” standards and their advantages if you have used plug-and-play devices like USB memory sticks, USB mice, or USB hard disks. They work together without any problem, even if they are made by different manufacturers. It’s the same with GenICam/GigE Vision/USB3 Vision-based driver/software solutions. The standards define a transport layer, which controls device detection, configuration (register access), data streaming, and event handling, and connects the interface to GenICam (Figure 1).

USB3 Vision builds on the GigE Vision standard by including accessories like cables. The mechanics are part of the standard and define lockable cable interfaces, for example. This creates a more robust interface for manufacturing environments.

Are standards a must-have?

Technically, standards aren’t necessary. But they make it possible to use products from multiple manufacturers and make devices more useful in the long term. For a historical comparison, look at USB 2.0 cameras and GigE Vision. USB 2.0 industrial cameras were introduced in 2004 and only worked with proprietary drivers (Figure 2) between the client and Vision Library/SDK and between the driver and camera. Two years later, Gigabit Ethernet cameras were introduced with the GigE Vision image processing standard, which didn’t require proprietary drivers to operate.

In the case of a system crash, users of the USB 2.0 cameras wouldn’t know whether the proprietary driver or the software library was to blame, which made them difficult to support. During the decision phase of selecting sensors and support, the customer had to keep the manufacturer’s product portfolio in mind to meet their specifications. Afterward, the application was implemented and only worked with the proprietary interfaces of that manufacturer. For future projects or adaptations – for example, if a new sensor was required – the manufacturer had to offer that sensor. Otherwise, it was necessary to change manufacturers, which meant a new implementation of the software as well. In contrast, flexibility is a big advantage with Gigabit Ethernet cameras and GigE Vision: GigE Vision-compliant cameras can be used interchangeably without regard to the manufacturer.

Despite this obvious benefit, USB cameras remain more prevalent in certain image processing fields like medicine, because those applications tightly define the camera’s sensor resolution, image format, and image frequency (bandwidth), as well as the environment in terms of cable length and frame grabber or digital camera solution. With such tightly defined requirements, USB cameras solve the challenges of these applications.

It’s hard to believe, but a few years ago there weren’t any standards in the image processing market. Each manufacturer had its own solution. Those times are gone – the whole market has pulled together, to the benefit of customers. Because of the standards, the interaction between hardware, driver, and software feels like a single, uniform system, and the quality of the market has improved. For the customer, it is easier to make product decisions since they are not locked into one company’s portfolio. With standards-compliant products, the customer can always choose the best components, independent of the company. With GenICam as a base, the image processing market offers the best interface for every application, whether with GigE Vision or USB3 Vision.

Edge Gateways To Support Real-Time Condition Monitoring Data

In my previous blog post from early summer, I talked about IO-Link sensors with condition monitoring features that work with PLCs. I covered how condition monitoring variables can be set up as alarms and how simple logic can be set up inside the sensor so it only raises those alarms to the PLC in real time to alert operators when something is wrong. Many companies, however, want to take advantage of IoT sensor data with the long-term goal of analyzing environmental conditions to predict maintenance needs in real time rather than relying on a schedule. Some even want to connect directly to their MES systems to inform maintenance personnel of daily maintenance orders, which requires a separate device, such as an IoT edge gateway.

Edge gateway benefits

The biggest benefit of an IoT edge gateway is the ability to process and store large amounts of data quickly, enabling real-time applications to use that data efficiently.

An IoT edge gateway typically sits at the end or edge of your network and gathers all the sensor data, either directly from the sensors or from the PLC. Since there will be a large amount of data from all the sensors on the network, part of the edge gateway setup is to filter out the relevant and important information and process this vast amount of data. The edge gateway must also handle the required data volume reliably and with low latency. These important factors are often tied to the gateway’s CPU and memory specifications.

After the performance of the edge gateway comes the ‘gateway’ aspect, which provides translation to different communication networks, whether local or cloud-based. There are the hardware specs of the gateway – whether it uses serial, USB, or Ethernet for that connection – as well as its environmental ratings. Then, more importantly, there is the software side of the edge gateway. There are cloud-based communication standards designed for different applications and for either private or public cloud networks.

Edge gateways support different communication protocols, such as HTTPS, MQTT, RESTful APIs, and C/Python APIs. The gateway also helps convert between those protocols and eases interoperability with different platforms, such as AWS, Azure, Ignition, and Wonderware. This provides data transparency so that all the data gathered can be used across many different software platforms.
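As a rough illustration of edge-side filtering plus one of those protocols, here is a minimal MQTT sketch using the paho-mqtt Python package. The broker address, topic, and payload layout are assumptions; a real deployment would follow the conventions of the target platform (AWS, Azure, Ignition, and so on).

```python
# Illustrative sketch: an edge gateway filtering sensor readings and publishing
# a compact summary over MQTT. Broker address, topic, and payload layout are
# placeholders for this example.
import json
import statistics
import paho.mqtt.publish as publish

BROKER = "broker.example.local"
TOPIC = "plant1/line3/pump7/condition"

def summarize(samples: list[float]) -> dict:
    # Edge-side filtering: forward a compact summary instead of every raw sample.
    return {"mean": statistics.mean(samples), "max": max(samples), "n": len(samples)}

payload = json.dumps(summarize([4.1, 4.3, 4.0, 4.8]))
publish.single(TOPIC, payload, hostname=BROKER, qos=1)
```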

To get to the IoT end goal, an edge gateway is necessary and it’s important to choose the correct one.

Why Invest in Smart Manufacturing Practices?

We all hear talk about smart manufacturing, the smart factory, machine learning, IIoT, IT/OT convergence, and so on, and many manufacturers have already embarked on their smart manufacturing journeys. Let’s take a pause and really think about it… Is it really important, or is it a fad? If it is important, then why?

In my role traveling across the U.S. meeting various manufacturers and machine builders, I often hear about their needs to collect data and have certain types of interfaces. But they don’t know what good that data is going to do. Well, let’s get down to the basics and understand this hunger for data and smart manufacturing.

Manufacturing goals

Since the dawn of industrialization, the industry has been focused on efficiency – always addressing how to produce more, better, and quicker. The goals of manufacturing have always revolved around these four things:

    1. Reduce total manufacturing and supply chain costs
    2. Reduce scrap rate and improve quality
    3. Improve/increase asset utilization and machine availability
    4. Reduce unplanned downtime

Manufacturing megatrends

While striving for these goals, we have made improvements that have tremendously helped us as a society to improve our lifestyle. But we are now in a different world altogether. The megatrends that are affecting manufacturing today require manufacturers to be even more focused on these goals to stay competitive and add to their bottom lines.

The megatrends affecting the whole manufacturing industry include:

    • Globalization: The competition for a manufacturer is no longer local. There is always somebody somewhere making products that are cheaper, better or more available to meet demand.
    • Changing consumer behavior: I am old enough to say that, when growing up, there were only a handful of brands and only certain types of products that made it over doorsteps. These days, we have variety in almost every product we consume. And, our taste is constantly changing.
    • Lack of skilled labor: Almost every manufacturer that I talk to expresses that keeping and finding good skilled people has been very difficult. The baby boomers are retiring and creating huge skills gaps in the workplaces.
    • Aging equipment: According to one study, almost $65B worth of equipment in the U.S. is outdated but still in production.
    • Changing regulations: In many industries, changing regulations require manufacturers to track and trace their products.

Technology has always been the catalyst for achieving new heights in efficiency. Given the megatrends affecting the manufacturing sector, the need for data is dire. Manufacturers must make decisions in real time, and having relevant and useful data is key to success in this new economy.

Smart manufacturing practices

What we call “smart manufacturing practices” are practices that use technology to change how we do things today and improve them many times over. They revolve around three key areas:

    1. Efficiency: If a line is down, the machine can point directly to where the problem is and tell you how to fix it. This reduces downtime. Even better is using data and patterns about the system to predict when the machine might fail.
    2. Flexibility: Using technology to retool or change over the line quickly for the next batch of production, and responding to changing consumer tastes by adopting fast and agile manufacturing practices.
    3. Visibility: Operators, maintenance workers, and plant management all need a variety of information about the machine, the line, or even the processes. If we don’t have this data, we are falling behind.

In a nutshell, smart manufacturing practices that focus on one or more of these key areas help manufacturers boost productivity and address the challenges presented by the megatrends. Hence, it is important to invest in these practices to stay competitive.

One more thing: There is no finish line when it comes to smart manufacturing. It should become a part of your continuous improvement program to evaluate and invest in technology that offers you more visibility, improves efficiency, and adds more flexibility to how you do things.

Machine Failures and Condition Monitoring – Selecting Sensors

In previous blogs, we discussed the different types of machine failures and their implications for different maintenance approaches, the cost-benefit tradeoffs of these approaches, and the progression of machine failures and the indicators that emerge at various failure phases. We will now connect the different failure indicators to the sensors that can detect them.

The Potential – Functional Failure (P-F) curve gives a rough picture of when various indicators may emerge during the progression of a failure:

Each indicator can be detected by one or more types of sensors. Selection of the “best” sensor will depend on the machine/asset being monitored, other attributes being sensed, budget/cost-benefit tradeoff, and the maintenance approach. In some cases, a single-purpose, dedicated condition-monitoring sensor may be the right choice. In other cases, a multi-function sensor (“Smart Automation and Monitoring System sensor”) which can handle both condition monitoring and standard sensing tasks may be an elegant and cost-effective solution.

The table below gives some guidance to possible single- and multi-function sensors which can address the various indicators:

* Condition monitoring sensors are specialized sensors that can often detect multiple indicators including vibration, temperature, humidity, and ambient pressure.

# Smart Automation and Monitoring System sensors add condition monitoring sensing, such as vibration and temperature, to their standard sensing functions, such as photoelectric, inductive, or capacitive sensing

There is a wide range of sensors that can provide the information needed for condition monitoring. The table above can provide some guidance, but selecting the best fit requires an evaluation of the application, the costs and benefits, and the fit with the maintenance strategy.