Embedded Vision – What It Is and How It Works

Embedded vision is a rapidly growing field that combines computer vision and embedded systems equipped with cameras or other imaging sensors, enabling devices to interpret and understand the visual world around them, much as humans do. With its broad range of applications, this technology is expected to revolutionize how we interact with the world around us and will likely play a major role in the Internet of Things and Industry 4.0.

Embedded vision applies computer vision algorithms and techniques to process visual information on devices with limited computational resources, such as embedded systems or mobile devices. These systems use cameras or other imaging sensors to acquire visual data and then perform tasks on that data, such as image or video processing, object detection, and image analysis.
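
As a concrete illustration, here is a minimal sketch of that acquire-process-analyze loop, written in Python with the open-source OpenCV library (the camera index 0 and the edge-pixel threshold are assumptions for the example, not values from any particular product):

    import cv2  # OpenCV, a widely used computer vision library

    # Open the first attached camera (index 0 is an assumption; on an
    # embedded board this might be a CSI or USB camera).
    camera = cv2.VideoCapture(0)

    while camera.isOpened():
        ok, frame = camera.read()                       # 1. acquire visual data
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # 2. preprocess
        edges = cv2.Canny(gray, 100, 200)               # 3. analyze (edge detection)
        # 4. act on the result, e.g. flag frames with significant structure
        if cv2.countNonZero(edges) > 10_000:
            print("significant structure detected in frame")

    camera.release()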

Applications for embedded vision systems

Among the many applications that use embedded vision systems are:

    • Industrial automation and inspection
    • Medical and biomedical imaging
    • Surveillance and security systems
    • Robotics and drones
    • Automotive and transportation systems

Hardware and software for embedded vision systems

Embedded vision systems typically use a combination of hardware and software to perform their tasks. On the hardware side, they often use special-purpose processors, such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs), to perform the heavy lifting of image and video processing. On the software side, they typically rely on libraries or frameworks that provide pre-built functions for tasks such as image filtering, object detection, and feature extraction. Common software libraries and frameworks for embedded vision include OpenCV, MATLAB, and HALCON.
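
To show what "pre-built functions" means in practice, the sketch below performs filtering and feature extraction with two OpenCV calls in Python (the filename and parameter values are illustrative assumptions):

    import cv2

    # Load an image from disk ("part.png" is a placeholder filename).
    image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

    # Pre-built functions: noise reduction followed by feature extraction.
    smoothed = cv2.GaussianBlur(image, (5, 5), 0)  # image filtering
    corners = cv2.goodFeaturesToTrack(
        smoothed, maxCorners=50, qualityLevel=0.01, minDistance=10)

    count = 0 if corners is None else len(corners)
    print(f"found {count} corner features")

Each of these calls replaces what would otherwise be dozens of lines of hand-written loop code on an embedded target.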

It’s also important to note that embedded vision is an active, fast-moving field, with new architectures, chipsets, and software libraries appearing regularly to make the technology accessible to a broader range of applications, devices, and users.

Embedded vision components

The main components of an embedded vision system include:

    1. Processor platforms are typically specialized for handling the high computational demands of image and video processing. They may include digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs).
    2. Camera components refer to the imaging sensors that acquire visual data. These range from traditional digital cameras to specialized sensors such as stereo cameras and thermal cameras.
    3. Accessories and carrier boards include the additional hardware that interfaces the camera with the processor and other peripherals. Examples include memory cards, power supplies, and I/O connectors.
    4. Housing and mechanics are the physical enclosures of the embedded vision system, including the mechanics that hold the camera, processor, and other components in place, and the housing that protects the system from external factors such as dust and water.
    5. The operating system runs on the processor. It could be custom firmware or a general-purpose operating system, such as Linux or Windows.
    6. Application software runs on the embedded vision system to perform tasks such as image processing, object detection, and feature extraction. It is often written in a combination of high-level programming languages, such as C++ and Python, and lower-level languages, such as C (a minimal object-detection sketch follows this list).
    7. Feasibility studies are conducted before development of an embedded vision system begins. They evaluate a proposed solution’s technical and economic feasibility and identify any risks or limitations that could arise during development.
    8. Integration and interfaces refer to the process of integrating the various components of the embedded vision system and interfacing it with other systems or devices. This can include integrating the camera, processor, and other hardware, as well as developing software interfaces that enable communication between the embedded vision system and other systems.
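
To make the application-software item concrete, here is a minimal object-detection sketch in Python using the Haar cascade face detector bundled with the opencv-python package (the input filename is a placeholder):

    import cv2

    # Load OpenCV's bundled Haar cascade for frontal faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("inspection.jpg")  # placeholder input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Detect objects (faces, in this cascade) at multiple scales.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("annotated.jpg", image)
    print(f"detected {len(faces)} object(s)")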

Learn more about selecting the most efficient and cost-effective vision product for your project or application.

Error-Free Assembly of Medical Components

An SUV and a medical device used in a lab don’t look much alike, but when it comes to manufacturing them, they have a lot in common. For both, factory automation is used to increase production volume while ensuring that production steps are completed precisely. Read on to learn about some of the ways sensors are used in life science manufacturing.

Sensors with switching output

Automation equipment producers are creative builders of specialized machines, as each project differs in some way from previous ones. In automated processes in the lab and healthcare sectors, where the objects being processed or assembled are small, the manufacturing equipment must be miniaturized as well. Weight reduction also plays an important role, since objects with a lower mass can be moved quickly with less force. Lightweight sensors on automated grippers, for example, allow faster actuator movements.

Conveyor system using photoelectric sensors for object detection

Photoelectric sensors are common in automated production because they can detect objects from a distance. Miniaturized photoelectric sensors are more easily placed in a production process that works with small parts, and photoelectric sensors can detect objects made of many different types of material.
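
To illustrate how a sensor with a switching output is read in software, here is a minimal polling sketch in Python, assuming the sensor’s output is wired to GPIO pin 17 of a Raspberry Pi and read with the RPi.GPIO library (the board, the pin number, and the polling interval are all assumptions for the example):

    import time
    import RPi.GPIO as GPIO  # GPIO access library for the Raspberry Pi

    SENSOR_PIN = 17  # assumed input pin wired to the sensor's switching output

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SENSOR_PIN, GPIO.IN)

    try:
        while True:
            # A switching output simply drives the line high or low
            # depending on whether an object is detected.
            if GPIO.input(SENSOR_PIN):
                print("object detected")
            time.sleep(0.01)  # poll every 10 ms
    finally:
        GPIO.cleanup()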

A common challenge for lab equipment is detecting clear liquids in clear vessels. Specialized photoelectric sensors are designed to meet this challenge.

Specialized photoelectric sensors for clear water detection

Image Processing

In recent years, camera systems have been used more frequently in the production of lab equipment. They are fast enough for high-speed production processes, and their interfaces to machine learning systems support the use of artificial intelligence.
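
One common interface between a camera system and machine learning is a trained network exported to a standard format. Here is a minimal classification sketch using OpenCV’s dnn module in Python (the model filename, input size, and preprocessing are assumptions that depend on the actual model):

    import cv2
    import numpy as np

    # Load a trained model exported to ONNX ("classifier.onnx" is a placeholder).
    net = cv2.dnn.readNetFromONNX("classifier.onnx")

    image = cv2.imread("sample.jpg")  # placeholder camera frame
    # Assumed preprocessing: resize to 224x224 and scale pixels to [0, 1].
    blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0, size=(224, 224))
    net.setInput(blob)
    scores = net.forward()

    print("predicted class index:", int(np.argmax(scores)))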

Identification

In any production setting, products, components and materials must be identified and tracked. Both optical identification and RFID technology are suitable for this purpose.

Sample analysis with industrial camera

Optical identification systems use a scanner to read one-dimensional barcodes or two-dimensional Data Matrix or QR codes and transmit the object information to a central database, which then identifies the object. The identification cost per object is quite low when a printed label or laser marking on the object is used.
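
As a small example of the optical route, OpenCV includes a QR code detector. The sketch below, in Python, decodes a code and looks it up in a dictionary standing in for the central database (the filename and the database contents are placeholders):

    import cv2

    detector = cv2.QRCodeDetector()
    image = cv2.imread("label.png")  # placeholder image of a printed label

    # detectAndDecode returns the decoded text (empty if nothing was found).
    data, points, _ = detector.detectAndDecode(image)

    # Stand-in for the central database that maps codes to objects.
    database = {"SN-000123": "reagent tray, lot 42"}

    if data:
        print(f"code {data!r} identifies: {database.get(data, 'unknown object')}")
    else:
        print("no code found")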

When data must be stored directly on or with the object itself, often because the data needs to be changed or added to during the production process, RFID (radio-frequency identification) is the best choice. Data-storage tags come in many sizes, hold different amounts of data, and offer other features to meet specific needs. This decentralized data storage is an advantage in fast production processes where real-time data storage is needed.
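
To illustrate the read-modify-write pattern that makes decentralized storage useful, here is a sketch in Python against a hypothetical driver interface (RfidHead, read_tag, and write_tag are invented names backed by simulated tag memory; a real installation would use the vendor’s library for its read/write head):

    # Hypothetical driver interface; real read/write heads use vendor libraries.
    class RfidHead:
        def __init__(self):
            self._tags = {}  # simulated tag memory for the example

        def read_tag(self, tag_id: str) -> dict:
            return dict(self._tags.get(tag_id, {}))

        def write_tag(self, tag_id: str, data: dict) -> None:
            self._tags[tag_id] = dict(data)

    head = RfidHead()

    # A pallet passes the read/write head: read its state, update it in place.
    tag = head.read_tag("pallet-07")
    tag["station_3"] = "assembly complete"  # data added during production
    head.write_tag("pallet-07", tag)

    print(head.read_tag("pallet-07"))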

Data on an RFID tag attached to a pallet are read and written by an RFID read/write head and transferred via a bus module

There are numerous parallels between automation in the life science sector and general factory automation. While these manufacturing environments both have their own challenges, the primary automation task is the same: find the best sensor for your application requirements. Being able to choose from many types of sensors, with different sizes and characteristics, can make that job a lot easier. For more information about the life sciences industries, visit https://www.balluff.com/en-us/industries/life-science.