Does Your Stamping Department Need a Checkup? Try a Die-Protection Risk Assessment

If you have ever walked through a stamping department at a metal forming facility, you have heard the rhythmic sound of the press stamping out parts, thump, thump. The stamping department is the heart of the manufacturing facility, and the noise you hear is the heartbeat of the plant. If it stops, the whole plant comes to a halt. With increasing demands for higher production rates, less downtime, and fewer bad parts, stamping departments are under ever-increasing pressure to optimize their operations through die-protection and error-proofing programs.

The die-protection risk assessment team

The first step in implementing or optimizing a die-protection program is to perform a die-protection risk assessment. This is much like the risk assessments conducted for safety applications, except they are done for each die set. To do this, build a team of people from various positions in the press department, such as tool makers, operators, and set-up teams.

Once this team is formed, it can identify any incidents that could occur during the stamping operations for each die set and determine the likelihood and severity of the possible harm. With this information, the team can identify which events carry the highest risk and severity and decide what additional measures to implement to prevent them. Even if some die-protection sensors are already in place, an audit can determine whether more should be added and verify that the existing ones are appropriate and effective.

The top 4 die processes to check

The majority of quality and die-protection problems occur in one of these four areas: material feed, material progression, part-out detection, and slug-ejection detection. It's important to monitor these areas carefully with various sensor technologies.

Material feed

Material feed is perhaps the most critical area to monitor. You need to ensure the material is in the press, in the correct location, and feeding properly before cycling the press. The material could be feeding as a steel blank, or it could come off a roll of steel. Several errors can prevent the material from advancing to the next stage or out of the press: the feed can slip, the stock material feeding in can buckle, or scrap can fail to drop and block the strip from advancing, to name a few. Inductive proximity sensors, which detect iron-based metals at short distances, are commonly used to check material feeds.

Material progression

Material progression is the next area to monitor. When using a progressive die, you will want to monitor the stripper to make sure it is functioning and the material is moving through the die properly. With a transfer die, you want to make sure the sheet of material is nesting correctly before cycling the press. Inductive proximity sensors are the most common sensors used in these applications as well.

Here is an example of using two inductive proximity sensors to determine if the part is feeding properly or if there is a short or long feed. In this application, both proximity sensors must detect the edge of the metal. If the alignment is off by just a few millimeters, one sensor won’t detect the metal. You can use this information to prevent the press from cycling to the next step.

Figure: short feed, long feed, and perfect alignment
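
To make the logic concrete, here is a minimal Python sketch of the two-sensor feed check, not PLC code and not tied to any particular controller: the sensor readings and the press interlock they would drive are hypothetical placeholders, and which missing signal means "short" versus "long" depends on how the sensors are arranged on your die.

```python
def feed_is_aligned(sensor_a_detects: bool, sensor_b_detects: bool) -> bool:
    """Both proximity sensors must see the strip edge for a good feed.

    Only one sensor detecting metal suggests a short or long feed
    (the strip edge stopped before, or ran past, the target position).
    """
    return sensor_a_detects and sensor_b_detects


def classify_feed(sensor_a_detects: bool, sensor_b_detects: bool) -> str:
    # Hypothetical convention: sensor A sits upstream of the target edge
    # position, sensor B sits downstream of it.
    if sensor_a_detects and sensor_b_detects:
        return "feed OK - allow press to cycle"
    if sensor_a_detects and not sensor_b_detects:
        return "short feed - block the next stroke"
    if not sensor_a_detects and sensor_b_detects:
        return "long feed - block the next stroke"
    return "no material detected - block the next stroke"


if __name__ == "__main__":
    # Simulated readings; in practice these would come from the press I/O.
    print(classify_feed(sensor_a_detects=True, sensor_b_detects=True))
    print(classify_feed(sensor_a_detects=True, sensor_b_detects=False))
```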

Part-out detection

The third critical area that stamping departments typically monitor is part-out detection, which makes sure the finished part has come out of the stamping area after the cycle is complete. Cycling the press and closing the tooling on a formed part that failed to eject can result in a number of undesirable events, like blowing out an entire die section or sending metal shards flying into the room. Optical sensors are typically used to check for part-out, though the type of photoelectric sensor needed depends on the situation. If the part consistently comes out of the press at the same position every time, a through-beam photo-eye would be a good choice. If the part falls at different angles and locations, you might choose a non-safety-rated light grid.

Slug-ejection detection

The last event to monitor is slug ejection. A slug is a piece of scrap metal punched out of the material. For example, if you needed to punch some holes in metal, the slug would be the center part that is knocked out. You need to verify that the scrap has exited the press before the next cycle. Sometimes the scrap will stick together and fail to exit the die with each stroke. Failure to make sure the scrap material leaves the die could affect product quality or cause significant damage to the press, die, or both. Various sensor types can ensure proper scrap ejection and prevent crashes. The picture below shows a die with inductive ring sensors mounted in it to detect slugs as they fall out of the die.
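
One common way to error-proof slug ejection is to count the slugs the ring sensors detect and compare that count to the number of punches per stroke. Here is a hedged Python sketch of that comparison; the expected slug count and the interlock decision are illustrative assumptions, not a specific controller implementation.

```python
EXPECTED_SLUGS_PER_STROKE = 4  # hypothetical: the die punches four holes


def slug_ejection_ok(slug_pulses_counted: int,
                     expected: int = EXPECTED_SLUGS_PER_STROKE) -> bool:
    """Return True only if every punched slug was seen falling out of the die.

    A count below the expected value means at least one slug is still stuck
    in the die, so the press should not be allowed to cycle again.
    """
    return slug_pulses_counted >= expected


if __name__ == "__main__":
    # Simulated ring-sensor counts for two strokes.
    for count in (4, 3):
        status = "OK to cycle" if slug_ejection_ok(count) else "slug stuck - stop press"
        print(f"stroke saw {count} slugs: {status}")
```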

Just like it is important to get regular checkups at the doctor, performing regular die-protection assessments can help you make continuous improvements that can increase production rates and reduce downtime. Material feed, material progression, part-out and slug-out detection are the first steps to optimize, but you can expand your assessments to include areas like auxiliary equipment. You can also consider smart factory solutions like intelligent sensors, condition monitoring, and diagnostics over networks to give you more data for preventative maintenance or more advanced error-proofing. The key to a successful program is to assemble the right team, start with the critical areas listed above, and learn about new technologies and concepts that are becoming available to help you plan ways to improve your stamping processes.

Maximize the Benefits of Open-Source Code in Manufacturing Software

The rise of many players in manufacturing automation, along with factories' growing adoption of Industrial Internet of Things (IIoT) and automation solutions, presents a suitable environment for open-source software. This software is a value-adding solution for manufacturers, regardless of their operational technology and management requirements, because of the customization, resiliency, scalability, accessibility, cost-effectiveness, and quality it allows.

Customization

Software developers who use an open-source model provide software with a core codebase that establishes specific features and allows users to access it and make changes as necessary. The process is much like being able to complete an author's writing prompt or change the end of a story. Unlike a closed system that locks users in, open source allows them to adapt and modify the code to meet a particular need or application.

This add-on coding system provides endless customization. It enables communities (i.e., users) to add or remove features that are beneficial during an integration phase, such as features for user testing or for finding the best solution for a machine.

Customization is also valuable for data visualization; users can develop dashboards and visuals that best describe their operations. Suppose a sensor provides real-time condition monitoring data for a particular machine. In that case, it's possible to customize the code supporting the software that gathers and processes the data for specific parameters or to calculate specific values.
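
As a simple illustration of that kind of customization, the sketch below (plain Python with made-up field names and limits) reduces a block of raw vibration samples to the two values one particular dashboard might need: an RMS level and a flag against a site-specific limit. Because the processing code is open, swapping in a different calculation or threshold is a small, local change.

```python
import math
from typing import Iterable


def rms(samples: Iterable[float]) -> float:
    """Root-mean-square of a block of raw vibration samples."""
    values = list(samples)
    return math.sqrt(sum(v * v for v in values) / len(values))


def summarize(samples: Iterable[float], limit: float = 2.5) -> dict:
    """Reduce raw samples to the values one specific dashboard cares about.

    The limit of 2.5 (mm/s, for example) is a site-specific assumption; an
    open codebase lets each plant substitute its own calculation or threshold.
    """
    level = rms(samples)
    return {"vibration_rms": round(level, 3), "over_limit": level > limit}


if __name__ == "__main__":
    raw = [0.2, -0.4, 0.5, -0.3, 0.6, -0.5]  # simulated sensor block
    print(summarize(raw))
```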

Resiliency

Additionally, open-source code is resilient to change because it can be modified quickly. The ability to quickly add or remove features and adapt to different cyber environments or specific applications also makes it versatile. Just as exposure to pathogens can strengthen an immune response to those pathogens, open-source code is made stronger by exposure to different environments and applications, readying it to face cybersecurity threats. Implementing open code is not inherently riskier (cybersecurity-wise) than closed code, thanks to the testing and enhancements made by so many coders and programmers. However, it is up to the implementer to apply the same rules that apply to closed-source software: be aware of the code's source and avoid code from non-reputable sources that could have modified it with malicious intent. Overall, the code is resilient and adapts readily to new environments.

Scalability

The add-on and customization aspects of open source also allow the code to be highly scalable. This scaling happens along two dimensions: the adoption timeline and the application. Both are important to guarantee user acceptance and to ensure the solution meets operational and application requirements. Regarding the adoption timeline, scalability allows the software and code to be modified to meet users' expectations. Open-source code enables the implementation of features for user testing and feedback. The ultimate solution will include multiple iterations to meet the users' needs and fulfill operational expectations.

On the other hand, the code is scalable based on the application(s), such as working on different machines, multiples of the same machine with different purposes, or adding or dropping features for specific uses. Say, for example, there are three of the same machine (A, B, and C), but they are in different environments. Machine A is in an environment that is 28°F, B is at room temperature, and C is exposed to constant wash-down. In this case, the condition monitoring software defines the acceptable parameters for each scenario, avoiding false alarms from erroneous triggers. In this example, the base code is adapted to include specific features based on the application.
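
A minimal sketch of what that application-based scaling could look like in code follows: one shared evaluation routine with per-machine parameter sets layered on top. The machine profiles, parameter names, and limits are invented for illustration.

```python
# Hypothetical per-application parameter sets for the same base code.
MACHINE_PROFILES = {
    "A": {"temp_low_c": -5.0, "temp_high_c": 10.0, "humidity_high": 80.0},  # cold room (~28 F)
    "B": {"temp_low_c": 15.0, "temp_high_c": 30.0, "humidity_high": 60.0},  # room temperature
    "C": {"temp_low_c": 10.0, "temp_high_c": 40.0, "humidity_high": 99.0},  # constant wash-down
}


def check_conditions(machine: str, temp_c: float, humidity: float) -> list[str]:
    """Evaluate one reading against the profile for that specific machine."""
    profile = MACHINE_PROFILES[machine]
    alarms = []
    if not (profile["temp_low_c"] <= temp_c <= profile["temp_high_c"]):
        alarms.append(f"machine {machine}: temperature {temp_c} C out of range")
    if humidity > profile["humidity_high"]:
        alarms.append(f"machine {machine}: humidity {humidity}% too high")
    return alarms


if __name__ == "__main__":
    # The same 35 C / 90% reading is acceptable for machine C but alarms on A and B.
    for m in ("A", "B", "C"):
        print(m, check_conditions(m, temp_c=35.0, humidity=90.0) or "OK")
```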

Accessibility

In general, cost-effective and high-quality open-source code is available online. There are also additional resources, such as free coding tutorials, that don't require any licenses. Moreover, when programmers update open code, they must make the new version available again, ensuring that the code stays accessible and up to date.

Cost-effectiveness and quality

Regarding cost-effectiveness, using community open-source code significantly reduces the cost of developing, integrating, and testing software built in-house. It also reduces the implementation time and makes for better production operations. Essentially, it is high-quality, reliable code created by trusted sources for multiple coders and users.

“The application drives the technology” mantra is at the heart of open-source software development—a model where source code is available for community members to use, modify, and share. IIoT enablers and providers in the manufacturing industry own a particular solution that is then available for manufacturers to adapt to their specific operational requirements. With the increasing adoption of data-collecting technologies, it is in manufacturers’ best interest to seek software providers who grant them the flexibility to adjust software solutions to meet their specific needs. Automation is a catalyst for data-driven operation and maintenance.

Control Meets IIoT, Providing Insights into a New World

In manufacturing and automation control, the programmable logic controller (PLC) is an essential tool. And since the PLC is integrated into the machine already, it’s understandable that you might see the PLC as all that you need to do anything in automation on the manufacturing floor.

Condition monitoring in machine automation

For example, process or condition monitoring is emerging as an important automation feature that can help ensure that machines are running smoothly. This can be done by monitoring motor or mechanical vibration, temperature, or pressure. You can also add functionality for machine or line configuration and setup by adding sensors that verify fixture locations at changeovers.

One way to do this is to wire these sensors to the PLC, modify its code, and use it as an all-in-one device. After all, it's on the machine already. But there's a definite downside to using a PLC this way. Its processing power is limited, and there are limits to the number of additional processes and functions it can run. Why risk complications that could impact the reliability of your control systems? There are alternatives.

External monitoring and support processes

Consider using more flexible platforms, such as an edge gateway, Linux, and IO-Link. These external resources open a whole new world of alternatives that provide better reliability and more options for today and the future. They also make it easier to access and integrate condition monitoring and configuration data into enterprise IT/OT (information technology/operational technology) systems, which PLCs are not well suited to interface with, if they can be integrated at all.

Here are some practical examples of this type of augmented or add-on/retrofit functionality:

      • Motor or pump vibration condition monitoring
      • Support-process related pressure, vibration and temperature monitoring
      • Monitoring of product or process flow
      • Portable battery based/cloud condition monitoring
      • Mold and Die cloud-based cycle/usage monitoring
      • Product changeover, operator guidance system
      • Automatic inventory monitoring warehouse system

Using external systems for these additional functions means you can readily take advantage of the ever-widening availability of more powerful computing systems and the simple connectivity and networking of smart sensors and transducers. Augmenting and improving your control systems with external monitoring and support processes is one of the notable benefits of employing Industrial Internet of Things (IIoT) and Industry 4.0 tools.
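
As one deliberately simplified example of such an external monitoring process, the Python sketch below reads a vibration value and publishes it to an MQTT broker that IT/OT systems can subscribe to, leaving the PLC untouched. It assumes the paho-mqtt package is available; the broker address, topic, and read_vibration() stub are placeholders for your own environment.

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

BROKER = "broker.example.local"         # placeholder broker address
TOPIC = "plant/line1/pump3/vibration"   # placeholder topic


def read_vibration() -> float:
    """Stand-in for reading a smart sensor via IO-Link or an edge gateway."""
    return round(random.uniform(0.5, 3.0), 2)


def main() -> None:
    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
    client.connect(BROKER, 1883, keepalive=60)
    client.loop_start()
    try:
        while True:
            payload = {"vibration_mm_s": read_vibration(), "ts": time.time()}
            client.publish(TOPIC, json.dumps(payload), qos=1)
            time.sleep(5)  # publish every 5 seconds
    finally:
        client.loop_stop()
        client.disconnect()


if __name__ == "__main__":
    main()
```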

The ease with which you can integrate these systems into IT/OT systems, even including cloud-based access, can dramatically change what is available for process information-gathering and monitoring, and it lets you augment processes without touching or affecting the underlying control system of new or existing machines or lines. In many cases, external systems can even be added at lower price points than a PLC modification, which means they can be more easily justified on ROI and functionality.

The Right Mix of Products for Recipe-Driven Machine Change Over

The filling of medical vials requires flexible automation equipment that can adapt to different vial sizes, colors, and capping types. People are often deployed to make those equipment changes, also known as a recipe change. But by nature, people are inconsistent, and that inconsistency causes errors and delays during changeover.

Here's a simple recipe to deliver consistency through operator-guided/verified recipe change. The following ingredients provide a solid recipe-driven changeover:

Incoming Components: Barcode

Fixed mount and hand-held barcode scanners at the point-of-loading ensure correct parts are loaded.

Change Parts: RFID

Any machine part that must be replaced during a changeover can have a simple RFID tag installed. A read head reads the tag to ensure it's the correct part.

Feed Systems: Position Measurement

Some feed systems require only millimeters of adjustment. Position sensors ensure the feed system is set to the correct recipe and is ready to run.

Conveyors Size Change: Rotary Position Indicator

Guide rails and larger sections are adjusted with the use of hand cranks. Digital position indicators show the intended position based on the recipes. The operators adjust to the desired position and then acknowledgment is sent to the control system.

Vial Detection: Array Sensor

Sensor arrays can capture more information, even with variations among vials. In addition to detecting vial presence, they verify the size of the vial and the stopper/cap. No physical changes are required; the recipe dictates the sensor values required for the vial type.
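
Conceptually, the recipe boils down to a lookup of expected sensor values per vial type. The short Python sketch below illustrates that idea; the recipe names, expected heights, and tolerance are invented for illustration and are not tied to any particular sensor array.

```python
# Hypothetical recipes: expected array-sensor readings per vial type.
RECIPES = {
    "2ml_clear":  {"vial_height_mm": 35.0, "cap_height_mm": 8.0},
    "10ml_amber": {"vial_height_mm": 58.0, "cap_height_mm": 10.0},
}

TOLERANCE_MM = 1.5  # assumed acceptable deviation


def vial_matches_recipe(recipe_name: str, measured_vial_mm: float,
                        measured_cap_mm: float) -> bool:
    """Compare measured heights from the sensor array to the active recipe."""
    recipe = RECIPES[recipe_name]
    return (abs(measured_vial_mm - recipe["vial_height_mm"]) <= TOLERANCE_MM
            and abs(measured_cap_mm - recipe["cap_height_mm"]) <= TOLERANCE_MM)


if __name__ == "__main__":
    # A 10 ml vial measured while the 2 ml recipe is active should fail.
    print(vial_matches_recipe("2ml_clear", measured_vial_mm=35.4, measured_cap_mm=8.2))
    print(vial_matches_recipe("2ml_clear", measured_vial_mm=57.8, measured_cap_mm=10.1))
```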

Final Inspection: Vision

For label placement and defect detection, vision is the go-to product. The recipe will call up the label parameters to be verified.

Traceability: Vision

Often used in conjunction with final inspection, traceability requires capturing the barcode data from the final vials. There are often multiple 1D and 2D barcodes that must be read. A powerful vision system with a larger field of view is ideal for the changing recipes.

All of these ingredients are best when tied together with IO-Link. This ensures easy implementation with class-leading products. With all these ingredients, it has never been easier to implement operator-guided/verified size change.

Are machine diagnostics overburdening our PLCs?

In today's world, we depend on the PLC to be our eyes and ears on the health of our automation machines. We depend on it to know when there has been an equipment failure or when preventive maintenance is needed. To gain this level of diagnostics, the PLC must do more work; that is, more rungs of code are needed to monitor the diagnostics supplied by the sensors, actuators, motors, drives, etc.

In terms of handling diagnostics on a machine, I see two philosophies. The first is to put the bare-bones minimum in the PLC. With less PLC code, the scan times are faster and the PLC runs more efficiently, but this approach comes with a high probability of longer downtime when something goes wrong because of the lack of granular diagnostics. The second option is to add lots of diagnostic features, which means a lot of code; this can lessen downtime but may throttle throughput, since the scan time of the PLC increases.

So how can you gain a higher level of diagnostics on the machine and lessen the burden on the PLC?

While we usually can't have our cake and eat it too, with Industry 4.0 and IIoT concepts you can have the best of both of these scenarios. There are many viewpoints on what these terms and ideas mean, but let's just look at what these two ideas have made available to the market to lessen the burden on our PLCs.

Data Generating Devices Using IO-Link

The technology of IO-Link has created an explosion of data generating devices. The level of diversity of devices, from I/O, analog, temperature, pressure, flow, etc., provides more visibility to a machine than anything we have seen so far. Utilizing these devices on a machine can greatly increase visibility of the processes. Many IO-Link masters communicate over an Ethernet-based protocol, so the availability of the IO-Link device data via JSON, OPC UA, MQTT, UDP, TCP/IP, etc., provides the diagnostics on the Ethernet “wire” where more than just the PLC can access it.
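
To show what that data on the "wire" can look like outside the PLC, here is a hedged Python sketch that subscribes to IO-Link process data republished as JSON over MQTT and flags any temperature above a limit. The broker, topic, payload fields, and limit are assumptions; the same data could just as well be consumed via OPC UA, a JSON/REST endpoint, UDP, or TCP/IP.

```python
import json

import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

BROKER = "broker.example.local"              # placeholder broker address
TOPIC = "iolink/master1/port3/processdata"   # placeholder topic
TEMP_LIMIT_C = 70.0                          # assumed alarm limit


def on_message(client, userdata, msg):
    """Handle one JSON payload republished from an IO-Link master."""
    data = json.loads(msg.payload)
    temp = data.get("temperature_c")
    if temp is not None and temp > TEMP_LIMIT_C:
        print(f"ALARM: {msg.topic} reports {temp} C (limit {TEMP_LIMIT_C} C)")


def main() -> None:
    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
    client.on_message = on_message
    client.connect(BROKER, 1883, keepalive=60)
    client.subscribe(TOPIC, qos=1)
    client.loop_forever()  # blocks and dispatches incoming messages


if __name__ == "__main__":
    main()
```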

Linux-Based Controllers

After using IO-Link to get the diagnostics on the Ethernet “wire,” we need to use some level of controller to collect it and analyze it. It isn’t unusual to hear that a Raspberry Pi is being used in industrial automation, but Linux-based “sandbox” controllers (with higher temperature, vibration, etc., standards than a Pi) are available today. These controllers can be loaded with Codesys, Python, Node-Red, etc., to provide a programming platform to utilize the diagnostics.

Visualization of Data

With IO-Link devices providing higher-level diagnostic data and the Linux-based controllers collecting and analyzing it, how do you visualize it? We usually see expensive HMIs on plant floors displaying the diagnostic health of a machine, but by utilizing Linux-based controllers, we can now show the diagnostic data on a simple display. Most often the only cost is the display itself, because some programming platforms include a level of visualization. For example, Node-RED has a dashboard view that can easily be displayed on a simple monitor. If data is collected in a server, other visualization software, such as Grafana, can be used.

To conclude, let's not overburden the PLC with diagnostics; let's utilize IIoT and Industry 4.0 philosophies to gain visibility into our industrial automation machines. IO-Link devices can provide the data, Linux-based controllers can collect and analyze the data, and simple displays can be used to visualize it. With this approach, we can keep PLC scan times fast while gaining a higher level of visibility into our machine's process and more uptime.

Adding a higher level of visibility to older automation machines

It’s never too late to add more visibility to an automation machine.

In the past, when it came to IO-Link opportunities, if the PLC on the machine was an SLC 500, a PLC-5, or, worse yet, a controller older than I am, there wasn't much to talk about. In most of these cases, the PLC could not handle another network communication card, the PLC memory was maxed out, or it used an older network like DeviceNet, Profibus, or ASi that was already at capacity. Or it was just so worn out that it was being held together with hope and prayer. But today we can utilize IIoT and Industry 4.0 concepts to add more visibility to older machines.

IIoT and Industry 4.0 have created a volume of products that can be utilized locally at a machine, rather than fitting the typical image of Big Data. There are three main elements we can use to add a level of visibility: devices to generate data, low-cost controllers to collect and analyze the data, and visualization of the data.

Data Generating Devices

In today's world, we have many devices that can generate data outside of direct communication with the PLC. For example, in an EtherNet/IP environment, we can put intelligent devices directly on the EtherNet/IP network, or we can add devices indirectly by using technologies like IO-Link, which can be more cost-effective and provide the same level of data. These devices can add monitoring of temperature, flow, pressure, and positioning data that can reduce downtime and scrap. With these devices connected to an Ethernet-based protocol, data can be extracted from them without the old PLC's involvement. Utilizing JSON, OPC UA, MQTT, UDP, and TCP/IP, the data can be made available to a secondary controller.

Linux-Based Controllers

An inexpensive Raspberry Pi could be used as the secondary controller, but Linux-based open controllers with industrial specifications for temperature, vibration, etc., are available on the market. These lower-cost controllers can then be utilized to collect and analyze the data on the Ethernet protocol. With a Linux-based "sandbox" system, many programming software packages can be loaded, such as Node-RED, Codesys, and Python, to create the needed logic.
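
To give a sense of the "needed logic" such a secondary controller might run, here is a standard-library-only Python sketch: it keeps a rolling window of recent readings and flags any new value that strays far from that baseline. The data source and the three-sigma rule are assumptions, not a prescription.

```python
import statistics
from collections import deque


class DriftDetector:
    """Flag readings that deviate strongly from a rolling baseline."""

    def __init__(self, window: int = 50, sigmas: float = 3.0):
        self.history = deque(maxlen=window)
        self.sigmas = sigmas

    def update(self, value: float) -> bool:
        """Add a reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) > self.sigmas * stdev:
                anomalous = True
        self.history.append(value)
        return anomalous


if __name__ == "__main__":
    detector = DriftDetector()
    readings = [20.1, 20.3, 19.9, 20.0, 20.2] * 4 + [26.5]  # simulated temperatures
    for r in readings:
        if detector.update(r):
            print(f"possible fault developing: reading {r} is far from baseline")
```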

Visualization of Data

Now that the data is being produced, collected, and analyzed, the next step is to view the information to add the extra layer of visibility to the process of an older machine. Some of the programming software that can be loaded onto the Linux-based systems includes a form of visualization, such as a dashboard (Node-RED) or an HMI-style interface (Codesys). This can be displayed on a low-cost monitor on the floor near the machine.

By utilizing the products behind the "big" concepts of IIoT and Industry 4.0, you can add a layer of diagnostic visualization to older machines that allows for easier maintenance, reduced scrap, and predictive maintenance.

Increase Efficiencies and Add Value with Data

Industry 4.0 and the Industrial Internet of Things (IIoT) are very popular terms these days.  But they are more than just buzzwords; incorporating these concepts into your facility adds instant value.

Industry 4.0 and IIoT provide you with much needed data. Having information easily available regarding how well your machines are performing allows for process improvements and increased efficiencies. The need for increased efficiency is driving the industry to improve manufacturing processes, reduce downtime, increase productivity and eliminate waste.  Increased efficiency is necessary to stay competitive in today’s manufacturing market.  With technology continuing to advance and be more economical, it is more feasible than ever to implement increased efficiencies in the industry.

Industry 4.0 and IIoT are the technology concepts behind smart manufacturing, or the smart factory. IIoT is at the core of this, as it provides access to data directly from devices on the factory floor. By implementing a controls architecture with IO-Link and adopting predictive maintenance practices based on condition monitoring parameters from the devices on the machine, you are putting Industry 4.0 and IIoT into practice.

Condition monitoring is the process of monitoring the condition of a machine through its parameters. In other words, it means tracking a parameter that reflects the condition of the machine or a device on it, such as vibration, temperature, pressure, rate, or humidity, in order to identify a significant change in condition that indicates the possible development of a fault. Condition monitoring is the primary aspect of predictive maintenance.

IO-Link is a point-to-point communication technology for devices that provides diagnostic information without interfering with the process data. There are hundreds of IO-Link smart devices that provide condition monitoring parameters for the health of the device and the health of the machine. By utilizing IO-Link's diagnostic capabilities, you can gather large amounts of data directly from devices on the factory floor, giving you more control over the machine's efficiency. Smart factory concepts are available today, with IO-Link as the backbone of the smart machine and smart factory.

Dive into big data with confidence knowing you can gather the information you need with the smart factory concepts available today.

Make 2020 the Year of Smart Manufacturing


As we near the end of 2019, it is time to start thinking of New Year’s resolutions. Mostly, these are personal — a promise to eat better, to work out, or save money. But the clean slate of a fresh year on the calendar is also a good time to reevaluate business practices and look at how we can improve on the work floor. And as we enter a new decade, one of the areas every manufacturer needs to be considering is smart manufacturing.

Smart manufacturing uses real-time data and technology to help you meet the changing demands and conditions in the factory and supply chain to meet customer needs. This accurate, yet seemingly vague, definition means that the implementation of smart manufacturing into the workplace can help you meet an array of issues that negatively impact efficiency and the bottom line.

Implementation of smart manufacturing can:

  • Reduce manufacturing costs
  • Permit higher machine availability
  • Boost overall equipment effectiveness
  • Improve asset utilization
  • Allow for traceability of products and parts
  • Enhance supply chain
  • Ease new technology integration
  • Improve product quality
  • Reduce scrap rates
  • Minimize die crashes
  • Decrease unplanned downtime

These are big claims, but all achievable with the modernization of our systems, which is long overdue for most. According to the latest polls, 4 out of 10 manufacturers have little to no visibility into the real-time status of their manufacturing processes and an even higher percentage are utilizing at least some equipment that is far past its intended lifespan.

Half of manufacturers become aware of system issues only after a breakdown occurs. This is unacceptable in 2020. Much like we expect our personal vehicles to alert us to upcoming issues (think of your service engine light or oil-life indicator), we need insight into the operation and performance of our manufacturing equipment.

Of course, joining the next industrial revolution comes at a cost, but if we put a dollar value on downtime and evaluate the cost benefit of the expected outcomes, it is hard to argue with the figures.

While we don’t need the start of a new year to make major changes, the flipping of the calendar page can give us the push we need to evaluate where we are and where we want to be. So, what are you waiting for?

Define your vision – Determine what you want to accomplish, and be clear and concise in articulating it.

Set an objective for 2020 – You don't have to change everything at once; growth can come gradually. What can you accomplish in the coming year?

Identify tactics and projects – Break down your vision into bite-size goals and projects. Prioritize realistic goals and set deadlines.

Link to KPIs – Make sure your smart manufacturing goals tie to key performance indicators. Having measurable results demonstrates just how effective the changes are and how they are improving business overall.

Assign responsibility – Designate owners to each step of the process. Make it someone’s responsibility to implement, track and report on the efforts. If it is everyone’s job, then it is no one’s job.

Tracking and Traceability in Mobility: A Step Towards IIoT

In today’s highly competitive automotive environment, it is becoming increasingly important for companies to drive out operating costs in order to ensure their plants maintain a healthy operating profit.

Improved operational efficiency in manufacturing is the goal of numerous measures. For example, in Tier 1 automotive parts manufacturing it is commonplace to have equipment designed to run numerous assemblies through one piece of capital equipment (flexible manufacturing). To accommodate multiple assemblies, different tooling is designed to be placed in this capital equipment. This reduces the required plant-floor real estate and the costs normally required for dedicated, single-purpose manufacturing equipment. However, this flexibility introduces new risks, such as running the machine with incorrect tooling, which can cause increased scrap levels, incorrect assembly of parts, and/or destruction or damage of expensive tooling, as well as expedited freight, outsourcing costs, increased manpower, sorting and rework costs, and more.

Having operators manually enter recipes or tooling change information introduces Human Error Probability (HEP). “The typical failure rates in businesses using common work practices range from 10 to 30 errors per hundred opportunities. The best performance possible in well managed workplaces using normal quality management methods are failure rates of 5 to 10 in every hundred opportunities.” (Sondalini)

Knowing the frequency of product change-over rates, you can quickly calculate the costs of these potential errors. One means of addressing this issue is to create Smart Tooling whereby RFID tags are affixed on the tooling and read/write antennas are mounted on the machinery and integrated into the control architecture of the capital equipment. The door to a scalable solution has now been opened in which each tool is assigned a unique ID or “license plate” identifying that specific tooling. Through proper integration of the capital equipment, the plant can now identify what tooling is in place at which OP station and may only run if the correct tooling is confirmed in place. In addition, one can then move toward predictive maintenance by placing process data onto the tag itself such as run time, parts produced, and tooling rework data. Collection and monitoring of this data moves the plant towards IIoT and predictive maintenance capabilities to inform key personnel when tooling is near end of life or re-work requirement thus contributing to improved OEE (Overall Equipment Effectiveness) rates.
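
A rough Python sketch of the data and the check involved: a tag record carrying the tool's "license plate" plus usage counters, and a gate function that only releases the station when the expected tool is confirmed and not at end of life. The field names, limits, and values are illustrative assumptions, not any specific RFID vendor's API.

```python
from dataclasses import dataclass


@dataclass
class ToolTag:
    """Data an RFID tag on the tooling might carry (illustrative fields)."""
    tool_id: str          # unique "license plate" for this tool
    run_time_h: float     # accumulated run time
    parts_produced: int
    rework_count: int


def ok_to_run(expected_tool_id: str, tag: ToolTag,
              life_limit_parts: int = 500_000) -> tuple[bool, str]:
    """Release the station only if the correct tool is loaded and not worn out."""
    if tag.tool_id != expected_tool_id:
        return False, f"wrong tooling loaded: {tag.tool_id} (expected {expected_tool_id})"
    if tag.parts_produced >= life_limit_parts:
        return False, f"tool {tag.tool_id} at end of life: schedule rework"
    return True, "correct tooling confirmed"


if __name__ == "__main__":
    # Simulated read from the antenna at an OP station.
    tag = ToolTag(tool_id="T-0042", run_time_h=812.5, parts_produced=498_750, rework_count=3)
    print(ok_to_run("T-0042", tag))
    print(ok_to_run("T-0099", tag))
```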


For more information on RFID, visit www.balluff.com.

*Source: Mike Sondalini, Managing Director, Lifetime Reliability Solutions, “Unearth the answers and solve the causes of human error in your company by understanding the hidden truths in human error rate tables”

Diversity in factory automation

This blog was originally posted on the Innovating Automation Blog.

Biodiversity is beneficial not only in biological ecosystems, but in industrial factory automation as well. Diversity helps to limit the effects of unpredictable events.

Typically, in factory automation a control unit collects data from sensors, analyzes this data and, according to its programmed instructions, triggers actuators to perform a defined operation. In most cases, a single-channel structure consisting of sensor, logic, and output perfectly fulfills the application requirements. Yet in some cases two-channel structures are preferred to increase the reliability of the control concept.

Clamping control at machine tool spindles


To monitor the clamping positions of tools in machine tool spindles, several options are possible: sensors with binary output (e.g., PNP normally open) or sensors with continuous output (e.g., 0-10 V or IO-Link) may be installed. The clamping process in many spindles is controlled with hydraulic actuators. This means the clamping force can be monitored by using pressure sensors that measure the applied hydraulic pressure in the clamping cylinder.

The combined use of both position and pressure sensors monitors the clamping status better than using only one sensor principle. Typically, there are three clamping situations: 1) unclamped, 2) clamped without an object, and 3) clamped with an object. In tooling spindles, the clamped position is usually achieved by springs that force the mechanics to hold and clamp the object when no pressure is applied. A pneumatic or hydraulic actuator allows the worker to unclamp the object by providing force to overcome the spring load. Without hydraulic or pneumatic pressure, the clamped position should be detected by the position sensor. When enough pressure has built up, the unclamped position should be reached after a short delay. Otherwise, something must be wrong.
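
Expressed as a small Python sketch, the plausibility check amounts to comparing what the pressure sensor implies against what the position sensor actually reports once the mechanics have settled; the pressure threshold and settle delay below are assumed values for illustration.

```python
UNCLAMP_PRESSURE_BAR = 40.0   # assumed pressure needed to overcome the springs
SETTLE_TIME_S = 0.2           # assumed delay before the states must match


def expected_state(pressure_bar: float) -> str:
    """Springs clamp by default; sufficient pressure should unclamp."""
    return "unclamped" if pressure_bar >= UNCLAMP_PRESSURE_BAR else "clamped"


def plausibility_ok(pressure_bar: float, position_state: str,
                    seconds_since_pressure_change: float) -> bool:
    """Cross-check the two sensor principles once the mechanics have settled.

    A mismatch after the settle time means one of the sensors, the hydraulics,
    or the clamping mechanics is faulty, and the spindle should not start.
    """
    if seconds_since_pressure_change < SETTLE_TIME_S:
        return True  # still in transition; don't judge yet
    return position_state == expected_state(pressure_bar)


if __name__ == "__main__":
    print(plausibility_ok(pressure_bar=60.0, position_state="unclamped",
                          seconds_since_pressure_change=0.5))   # consistent
    print(plausibility_ok(pressure_bar=0.0, position_state="unclamped",
                          seconds_since_pressure_change=0.5))   # mismatch -> False
```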

The advantage of diversity

By using two different sensor principles (in this case pressure sensing and position sensing) the risk of so-called common cause failures is reduced. The probability of concurrent effects of environmental impact on the different sensors is diminished, thereby increasing the detection rate of failures. The machine control can immediately react if the signals of pressure and position sensors do not match, simplifying monitoring of the clamping process.