Pressure Measurement Errors

Pressure measurement errors are discrepancies between the actual value of the measured pressure and the value indicated by the pressure sensor.

These errors can be categorized into four main types:

  1. Adjustable errors
  2. Systematic errors
  3. Random errors
  4. Temperature errors

Each main type also includes its most closely related error types, and for every error type we have tried to explain the items below in words that are as easy as possible to understand.

We’ve discussed most of these error types in previous posts, covering questions such as:

  1. What is error A?
  2. What is the difference between error A and error B?
  3. What is the relationship between error A and error B?
  4. How does such an error affect pressure sensor performance?
  5. How can it be mitigated or reduced?
  6. What new technology can we expect in the future?

In today’s post, we summarize all of the above and compile a checklist of the most common errors, so you can find the full picture here and easily look up the items most useful to you.

All we hope is to assist you in the process of pressure measurement.

If you are still experiencing a problem or have an inquiry, feel free to contact us.

Adjustable Errors

Adjustable errors in pressure sensors refer to the inaccuracies or deviations in a sensor’s readings that can be corrected or adjusted through calibration or compensation techniques.

Systematic Errors

Systematic errors in pressure sensors represent consistent and predictable inaccuracies in measurement that occur due to some identifiable factors. They can often be compensated for or eliminated with proper calibration.

Random Errors

Random errors in pressure sensors are those that occur unpredictably and without a consistent pattern. Unlike systematic errors, they cannot be corrected through calibration because they are not repeatable.

Temperature Errors

Temperature errors in pressure sensors refer to inaccuracies in the sensor’s readings that occur due to changes in ambient temperature.

ESS3 Series Silicon Piezoresistive Pressure Sensor

Parameters                      Typ.    Max.    Unit
Nonlinearity                    0.2     0.5     %FS
Hysteresis                      0.05    0.1     %FS
Repeatability                   0.05    0.1     %FS
Zero Output                     ±1      ±2      mV DC
FS Output                       100     —       mV DC
Input/Output Impedance          2.6     3.8     —
Zero Temperature Drift*         ±0.15   ±0.8    %FS @25℃
Sensitivity Temperature Drift*  ±0.2    ±0.7    %FS @25℃
Long-term Stability             0.1     —       %FS/year

*For the 0~10 kPa and 0~20 kPa ranges, the typical zero temperature drift is 0.4 %FS @25℃ and the maximum is 1.6 %FS @25℃.
*For the 0~10 kPa and 0~20 kPa ranges, the typical sensitivity temperature drift is 0.4 %FS @25℃ and the maximum is 1.6 %FS @25℃.

After reviewing the technical data listed above, you probably have a rough idea of the pressure sensor you are looking for, and you know that no single sensor can lead on every specification at once.

The most important thing is balance: you weigh the specification that matters most in your application against the ones that are less critical for now.

If you still cannot make a decision, please feel free to contact us or drop us a line anytime.

ESS3 Series Piezoresistive Pressure Sensor package - v2.0

Pressure Sensor Noise and EMI

Introduction

Electrical noise, including electromagnetic interference (EMI), can have several negative effects on the performance of a pressure sensor.

It can introduce signal distortion, reduce sensitivity, cause offset drift, result in signal loss, and induce crosstalk.

These effects occur because electrical noise disrupts the proper functioning of the pressure sensor’s electrical circuits.

So it is necessary to understand electrical noise and EMI, and to find ways to mitigate their impact on pressure sensors.

What is electrical noise

Electrical noise is essentially unwanted or random signals that can interfere with the desired signal (the accurate pressure reading).

Three main types of electrical noise typically affect pressure sensor performance:

  1. Thermal Noise
  2. Shot Noise
  3. Flicker Noise

Thermal Noise

This is caused by the random motion of charge carriers (essentially electrons) inside the electrical components, especially resistors.

It is also known as Johnson-Nyquist noise. The magnitude of thermal noise is directly proportional to the square root of the absolute temperature.

For example,

A typical resistance of 120 Ohms at room temperature (300 Kelvin) can produce thermal noise of about 1.8 microvolts RMS.
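That figure can be reproduced from the Johnson-Nyquist formula. Note that thermal noise also depends on measurement bandwidth, which the text does not state; the ~1.6 MHz value below is an assumption chosen so the result matches the quoted ~1.8 µV:

```python
import math

def thermal_noise_vrms(resistance_ohm, temp_k, bandwidth_hz):
    """Johnson-Nyquist noise: V_rms = sqrt(4 * k * T * R * B)."""
    k_boltzmann = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4 * k_boltzmann * temp_k * resistance_ohm * bandwidth_hz)

# 120-ohm resistance at room temperature (300 K); the ~1.6 MHz
# bandwidth is an assumption, not a value from the text.
v_noise = thermal_noise_vrms(120, 300, 1.6e6)
print(f"{v_noise * 1e6:.2f} uV RMS")  # about 1.8 uV
```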

Shot Noise

This arises from the fact that current is made up of discrete charges (electrons). The arrival times of these electrons are random, which results in fluctuations in the current, known as shot noise.

For example,

A current of 1 milliampere through a forward-biased diode could produce shot noise of about 1.3 microvolts RMS.
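A rough reconstruction of that figure, assuming an ~8 MHz measurement bandwidth and a diode small-signal resistance of about 26 Ω at 1 mA (both are assumptions; the text states neither):

```python
import math

def shot_noise_irms(current_a, bandwidth_hz):
    """Shot noise current: I_rms = sqrt(2 * q * I * B)."""
    q_electron = 1.602176634e-19  # elementary charge, C
    return math.sqrt(2 * q_electron * current_a * bandwidth_hz)

# 1 mA through a forward-biased diode; the 8 MHz bandwidth and the
# ~26-ohm dynamic resistance (thermal voltage / bias current at room
# temperature) are assumptions chosen to reproduce the ~1.3 uV figure.
i_noise = shot_noise_irms(1e-3, 8e6)
r_dynamic = 0.0259 / 1e-3  # ~26 ohm
v_noise = i_noise * r_dynamic
print(f"{v_noise * 1e6:.2f} uV RMS")  # about 1.3 uV
```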

Flicker Noise (1/f Noise)

This is a type of noise that increases in magnitude at lower frequencies. It’s often associated with imperfections in the materials and manufacturing processes of electronic components.

What is EMI

Electromagnetic interference (EMI), also referred to as radio-frequency interference (RFI) when in the radio frequency spectrum, is a disturbance generated by an external source that affects an electrical circuit.

This disturbance can degrade the performance of the circuit or even stop it from functioning.

EMI can significantly impact the performance of a pressure sensor, leading to inaccuracies, reduced sensitivity, or even complete failure.

Therefore, it is crucial to consider EMI mitigation strategies when designing or using pressure sensor systems in environments with high electromagnetic interference.

Relationship between electrical noise and EMI

Electromagnetic interference (EMI) and electrical noise are related concepts but have distinct differences.

Electromagnetic interference (EMI) refers to the unwanted electromagnetic signals or disturbances that can interfere with the proper functioning of electronic devices or systems.

EMI can be generated by various sources, such as power lines, radio frequency transmitters, motors, or other electronic equipment. It can manifest as electromagnetic radiation or conducted disturbances that can affect nearby electronic components or systems.

On the other hand, electrical noise refers to the random fluctuations or disturbances in an electrical signal that can occur due to various factors.

Electrical noise can be caused by internal or external sources, including thermal noise, shot noise, crosstalk, ground loops, or poor electrical connections. It can manifest as unwanted voltage or current variations that can degrade the quality of an electrical signal.

The relationship between EMI and electrical noise is that EMI can be one of the sources of electrical noise.

When EMI interferes with electronic devices or systems, it can introduce unwanted signals or disturbances into the electrical circuits, leading to electrical noise. In this context, EMI acts as an external source of electrical noise.

However, it’s important to note that not all electrical noise is caused by EMI. Electrical noise can also arise from internal factors within electronic components, such as thermal effects or semiconductor imperfections. Additionally, electrical noise can occur in isolated systems without any external EMI sources.

So, it is clear that EMI refers to unwanted electromagnetic disturbances that can interfere with electronic devices or systems, while electrical noise refers to random fluctuations or disturbances in an electrical signal. EMI can be one of the sources of electrical noise, but electrical noise can also arise from internal factors within electronic components.

How noise and EMI affect pressure sensor performance

Noise and electromagnetic interference (EMI) can significantly affect the performance of a pressure sensor, leading to inaccurate readings and potential system failures.

Noise effects

As mentioned above, there are three main types of electrical noise that can impact pressure sensors: thermal noise, shot noise, and flicker noise.

Let’s consider a scenario where the combined root-mean-square (RMS) noise is 5 microvolts in a pressure sensor with a sensitivity of 1 millivolt per Pascal (Pa).

This would mean that the smallest detectable change in pressure is approximately 0.005 Pa.

If the level of noise increases, the smallest detectable change in pressure also increases, reducing the sensor’s ability to accurately measure small changes in pressure.
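Using the numbers above, the noise floor in pressure units is simply the RMS noise divided by the sensitivity (a sketch of the arithmetic, not a vendor formula):

```python
def min_detectable_pressure(noise_vrms, sensitivity_v_per_pa):
    """Noise floor in pressure units: the smallest change that
    stands out from the sensor's own electrical noise."""
    return noise_vrms / sensitivity_v_per_pa

# 5 uV RMS combined noise, 1 mV/Pa sensitivity (values from the text)
resolution = min_detectable_pressure(5e-6, 1e-3)
print(f"{resolution:.3f} Pa")  # 0.005 Pa
```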

Electromagnetic Interference effects

EMI can induce unwanted currents or voltages in the sensor circuitry, leading to erroneous readings.

For instance, if the EMI causes an unwanted voltage fluctuation of 5 millivolts in a sensor with a sensitivity of 1 millivolt per Pa, this could result in an error of 5 Pa in the pressure reading.

In severe cases, EMI can even cause the sensor to fail completely.

The combined effects of noise and EMI can lead to significant errors in pressure readings.

For example, if a process requires a pressure to be maintained at 100 Pa with a tolerance of ±1 Pa, the presence of noise and EMI could lead to readings that vary from 95 to 105 Pa, which is outside the required tolerance.

How to mitigate the effects of electrical noise and EMI on pressure sensors

Mitigating the effects of electrical noise and Electromagnetic Interference (EMI) in pressure sensors involves careful design, component selection, and installation techniques. Here’s how to address each:

1. Shielding

Shielding involves using a conductive enclosure around the sensor to block out external electromagnetic fields.

For instance, a copper shield with a thickness of about 0.09 mm can provide an attenuation of up to 60 dB for frequencies from 30 MHz to 1 GHz.
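As a sanity check on that figure, decibels of attenuation convert to a field-amplitude ratio as 10^(dB/20):

```python
def attenuation_factor(db):
    """Convert a shielding attenuation in dB to a field-amplitude ratio."""
    return 10 ** (db / 20)

# 60 dB of shielding reduces the interfering field amplitude 1000-fold
print(attenuation_factor(60))  # 1000.0
```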

2. Filtering

Filters allow signals of certain frequencies to pass through while blocking others. If the pressure sensor operates at frequencies between 10 Hz and 100 Hz, a band-pass filter designed for these frequencies can help filter out noise occurring outside this frequency range.

3. Grounding

Grounding helps to provide a path for the unwanted noise signal to discharge safely. It’s important to ground the sensor and shield at one point to avoid creating ground loops, which could inadvertently introduce more noise.

4. Component Selection

Choose components with low susceptibility to noise and EMI.

For example, consider using operational amplifiers (op-amps) or voltage regulators with built-in noise reduction features.

5. Cable Practices

Use shielded cables to connect the pressure sensor, and avoid running these cables parallel to power lines or near other sources of EMI. The cable shield should also be grounded at one end to provide an additional path for unwanted signals to be discharged.

6. Distance

Keep the sensor as far away as possible from known sources of EMI, such as motors, power lines, and radio transmitters.

7. Signal Processing

Techniques like signal averaging, where multiple sensor readings are taken and averaged, can help to reduce random noise.

For instance, if each individual reading has a noise of up to ±1 Pa but the average of 100 readings is taken, this could reduce the noise level by a factor of 10, resulting in a noise level of only ±0.1 Pa.
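The 1/sqrt(N) averaging effect can be simulated; the Gaussian-noise model and all values here are illustrative assumptions, not measured data:

```python
import random
import statistics

random.seed(42)

TRUE_PRESSURE = 100.0   # Pa (hypothetical process pressure)
NOISE_SIGMA = 1.0       # Pa of random noise per reading (assumption)

def read_sensor():
    """One noisy reading: the true value plus Gaussian noise."""
    return random.gauss(TRUE_PRESSURE, NOISE_SIGMA)

# Averaging N readings shrinks random noise by a factor of sqrt(N):
# 100 readings -> noise drops roughly 10x, as in the example above.
averages = [statistics.fmean(read_sensor() for _ in range(100))
            for _ in range(500)]
print(f"single-reading sigma ~ {NOISE_SIGMA}")
print(f"averaged sigma ~ {statistics.stdev(averages):.2f}")  # roughly 0.1
```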

Through the above methods, it is possible to significantly mitigate the effects of electrical noise and EMI on the performance of pressure sensors, ensuring reliable operation and accurate readings.

Wrap up

Electrical noise, including electromagnetic interference (EMI), can have several negative effects on the performance of a pressure sensor.

It can introduce signal distortion, reduce sensitivity, cause offset drift, result in signal loss, and induce crosstalk.

These effects occur because electrical noise disrupts the proper functioning of the pressure sensor’s electrical circuits.

To mitigate the impact of electrical noise and EMI on pressure sensor performance, various measures can be taken.

These include filtering, grounding, shielding, signal conditioning, and using high-quality components. These measures aim to minimize the interference caused by electrical noise and EMI, ensuring accurate and reliable pressure measurement.

Pressure sensor Shock and Vibration

Introduction

Shock and vibration can significantly influence the performance of pressure sensors, potentially leading to inaccurate readings or even premature sensor failure.

If a pressure sensor is subject to a shock beyond its rated capacity, it can lead to physical damage to the sensor elements or the electronic components. For example, a piezoresistive sensor, which works by changing resistance under pressure, might experience a permanent change in its base resistance after a severe shock, leading to an offset in all subsequent measurements.

On the other hand, when a sensor is exposed to vibration levels beyond its rated capacity, it can lead to drift in the sensor’s readings, increased noise, and reduced sensitivity. In extreme cases, continuous high-level vibration can lead to mechanical failure of the sensor components.

What is the difference between shock and vibration

Shock and vibration are different types of mechanical disturbances that can affect various devices and structures, including pressure sensors. Though they might seem similar, they are distinct in their nature, duration, and the ways they can impact a system.

Shock

Shock is a sudden, short duration disturbance or impulse that is usually caused by a rapid acceleration or deceleration.

It’s like a quick, strong jolt.

Shocks are typically caused by events like impacts, drops, or collisions.

The intensity of a shock is often described in terms of “g” force, where 1g equals the acceleration due to gravity (9.81 m/s²).

For example, a shock rating of 50g means the device can withstand a sudden force 50 times the gravitational acceleration without sustaining damage.

The duration of a shock event is typically very short, often in the range of microseconds to milliseconds.

Vibration

On the other hand, vibration is a continuous, oscillatory motion that happens back and forth around a reference point.

It’s like a steady, rhythmic shaking.

Vibration can be caused by rotating machinery, structural resonance, wind buffeting, and many other sources.

The intensity of vibration is described in terms of frequency (how many times the vibration occurs in a second, measured in Hertz, Hz) and amplitude (the size of the vibration, often described in “g” force).

For example, a vibration rating of 5g at 10-500 Hz means the device can handle continuous shaking with an amplitude of 5g over that frequency range without performance degradation.

While both shock and vibration can impact the performance of a pressure sensor, they affect the pressure sensor in different ways and thus, require different mitigation strategies.

How shock and vibration affect pressure sensor performance

Shock and vibration can significantly impact the performance of pressure sensors, potentially leading to inaccurate readings or even premature sensor failure.

Firstly, shock refers to a sudden, high-impact force that the sensor might experience, often as a result of being dropped, hit, or when mounted on machinery that undergoes sudden movements.

A pressure sensor’s shock rating, usually given in ‘g’ units (1g = 9.81 m/s²), specifies the magnitude of shock it can withstand without damage or performance degradation.

For instance, a sensor with a shock rating of 100g can theoretically withstand an instantaneous acceleration or deceleration of 100 times the acceleration due to gravity without expected performance loss.

If a pressure sensor is subject to a shock beyond its rated capacity, it can lead to physical damage to the sensor elements or the electronic components.

For example, a piezoresistive sensor, which works by changing resistance under pressure, might experience a permanent change in its base resistance after a severe shock, leading to an offset in all subsequent measurements.

Vibration, on the other hand, involves repeated or continuous oscillatory motion. It’s commonly encountered in industrial environments where the sensor might be mounted on machinery that vibrates during operation.

Vibration can cause mechanical stress and fatigue in the sensor components, leading to gradual deterioration over time.

The vibration rating of a sensor, typically given in terms of frequency (Hz) and amplitude (g), indicates the level of continuous vibration it can endure.

For example, a sensor with a vibration rating of 20g at 50-2000 Hz means it can withstand continuous vibration with an amplitude of 20g within that frequency range.

When a sensor is exposed to vibration levels beyond its rated capacity, it can lead to drift in the sensor’s readings, increased noise, and reduced sensitivity.

In extreme cases, continuous high-level vibration can lead to mechanical failure of the sensor components.

How to mitigate the effects of shock and vibration on pressure sensors

To minimize the impacts of shock and vibration, measures such as the use of damping materials in the sensor mounting, a ruggedized sensor design, strategic sensor placement, and digital filtering techniques can be employed.

These methods are designed to ensure that the pressure sensor can deliver reliable and accurate performance, even under challenging conditions.

1. Use of Damping Materials

Incorporating damping materials in the sensor mounting design can absorb and dissipate some of the energy from shock or vibration.

Damping materials for pressure sensor production can reduce the impact of shock and vibration on the sensor.

These materials can absorb and dissipate energy, minimizing the transfer of shock and vibration from the environment to the sensor itself.

These materials are often incorporated into the sensor’s mounting design or housing. They can be made from a variety of substances, such as certain types of rubber or viscoelastic compounds, which have properties that allow them to absorb energy and return slowly to their original shape after being deformed.

2. Ruggedized Design

A pressure sensor can be designed with ruggedized components and a sturdy enclosure to withstand higher levels of shock and vibration. This could involve using thick-walled stainless steel for the sensor housing and high-strength alloys for the sensing elements.

A “ruggedized design” in the view of pressure sensor production also refers to the design and construction techniques used to make the sensor more resilient to harsh conditions, such as high levels of shock, vibration, temperature extremes, and harsh environmental conditions.

Here are a few elements often found in ruggedized design:

  • Robust Housing
  • Sealed Enclosures
  • Enhanced Sensor Elements
  • Secure Connections
  • Thick Circuitry
  • Overload Protection

3. Location

The sensor should be installed in a location that minimizes exposure to shock and vibration.

For example, on a vibrating machine, the sensor should not be placed at locations with high amplitude vibrations.

4. Signal Processing

Techniques such as digital filtering can be used to reduce the noise caused by vibration in the sensor’s readings.

For instance, a low-pass filter could be used to remove high-frequency noise that is beyond the frequency range of the expected pressure signal.
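A minimal sketch of such a filter, assuming a simple first-order (exponential) low-pass rather than any particular product's implementation; the signal values are hypothetical:

```python
import math

def low_pass(samples, cutoff_hz, sample_rate_hz):
    """First-order (exponential) low-pass filter: attenuates noise
    above cutoff_hz while passing the slower pressure signal."""
    dt = 1.0 / sample_rate_hz
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)  # exponential smoothing step
        out.append(y)
    return out

# Hypothetical example: a steady 100 Pa signal with 200 Hz vibration
# noise; a 10 Hz cutoff at 1 kHz sampling smooths it out.
noisy = [100 + 5 * math.sin(2 * math.pi * 200 * n / 1000) for n in range(1000)]
smooth = low_pass(noisy, cutoff_hz=10, sample_rate_hz=1000)
print(f"last filtered value: {smooth[-1]:.1f} Pa")
```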

5. Shock and Vibration Testing

Sensors should be thoroughly tested for shock and vibration resistance. Such testing often includes subjecting the sensor to specified levels of shock and vibration and measuring its performance. The data from these tests can be used to make design adjustments to improve resistance.

6. Redundancy

In critical applications, using multiple sensors and averaging their readings can help compensate for any inaccuracies caused by shock or vibration. This approach assumes that any errors introduced by these factors will be random and not systematically bias the readings of all sensors in the same direction.
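One possible fusion rule under that assumption is the median, sketched below with hypothetical readings; unlike a plain average, it rejects a single shock-corrupted sensor outright rather than merely diluting it:

```python
import statistics

def fused_reading(readings):
    """Median of redundant sensors: a single outlier reading
    (e.g. from a shocked sensor) does not move the result."""
    return statistics.median(readings)

# Hypothetical values: three sensors around 100 Pa, one briefly
# thrown off by a mechanical shock.
print(fused_reading([100.2, 99.8, 100.1]))   # normal: 100.1
print(fused_reading([100.2, 99.8, 137.5]))   # one shocked sensor: 100.2
```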

7. Calibration

Regular calibration of the sensor can also help to correct for any drift or offset in the sensor’s readings caused by shock or vibration.

By utilizing these strategies, it’s possible to significantly reduce the negative effects of shock and vibration on pressure sensor performance, leading to more accurate and reliable measurements.

Wrap up

Shock and vibration can significantly affect the performance of pressure sensors. Shock refers to a sudden high-impact force that can cause physical damage to the sensor or create a permanent offset in its readings. Vibration, on the other hand, is a continuous oscillatory motion that can lead to increased noise in the sensor’s readings, reduced sensitivity, and mechanical failure over time.

To mitigate these effects, a variety of strategies can be employed. Damping materials can be used to absorb shock and vibration energy, and a ruggedized sensor design can increase the sensor’s resistance to these forces. Strategic sensor placement can minimize exposure to shock and vibration, and digital filtering techniques can reduce noise in the sensor’s readings. Regular shock and vibration testing can help identify potential issues early, and redundancy and regular calibration can help ensure accurate and reliable measurements even in the presence of shock and vibration.

Key Takeaways

 

  1. Shock and vibration can cause physical damage, measurement offset, measurement noise, and reduced sensitivity in pressure sensors.
  2. Damping materials, ruggedized designs, and strategic sensor placement can help protect sensors from shock and vibration.
  3. Digital filtering techniques can be used to reduce measurement noise caused by vibration.
  4. Regular shock and vibration testing, along with the use of redundancy and regular calibration, can help ensure accurate and reliable sensor performance even in challenging conditions.

Offset Error | Zero-Point Error | Span Error

Offset error and Zero-point error

Offset error and zero-point error are two types of systematic errors that can occur in pressure sensors. They are closely related, as they both involve discrepancies between the sensor’s output and the actual value that should be measured.

Offset Error occurs when the sensor’s output deviates from the expected value across all measurement points. It can be thought of as a consistent “shift” in the sensor’s output.

For example, if a sensor is supposed to output 0 mV at 0 psi, but instead outputs 5 mV, it has an offset error of 5 mV.

This error persists across the pressure range – that is, at 10 psi, if it should read 100 mV but reads 105 mV instead, the offset error is still 5 mV.

Zero-Point Error, also known as zero-offset error, is a specific type of offset error. It refers to the discrepancy between the sensor’s output and the actual value when the input pressure is zero.

In other words, it’s the sensor’s output when it should be reading zero.

For instance, if a pressure sensor outputs 1 mV when no pressure is applied (0 psi), it has a zero-point error of 1 mV.

The relationship between offset error and zero-point error is such that zero-point error is a form of offset error.

However, while zero-point error only describes the deviation at zero input, offset error refers to the consistent deviation across all measurement points.

These errors can be corrected through a process called calibration.

By applying known pressures and recording the sensor’s output, a calibration curve can be created. Any consistent deviation from this curve can be identified as an offset error and corrected for in future measurements.

Similarly, by checking the sensor’s output at zero pressure, any zero-point error can be identified and corrected.

Take an example to understand Offset error & Zero-point error

Let’s consider a simple real-world example to understand the difference between offset error and zero-point error.

Let’s use a kitchen scale as a metaphor for a pressure sensor.

Offset Error:

Imagine you have a kitchen scale that you use to weigh ingredients.

  • Let’s say when you put a 500g block of cheese on it, the scale reads 505g.
  • Then, when you put a 1kg (1000g) loaf of bread on it, the scale reads 1.005kg.
  • The scale is consistently adding an extra 5g to whatever it’s supposed to measure.
  • This is like an offset error in a pressure sensor, where the sensor reading is consistently “off” by a certain amount across the entire range of measurements.

An easy way to understand offset error:

  • Put nothing on: it reads 5 g (0 g + 5 g)
  • Put 1000 g on: it reads 1005 g (1000 g + 5 g)
  • Put 10000 g on: it reads 10005 g (10000 g + 5 g)
  • That is offset error

Zero-Point Error:

  • Now, let’s assume that the same kitchen scale, when it’s empty and should read 0g, actually reads 3g.
  • That is, even when there’s nothing on the scale, it’s indicating a weight of 3g.
  • This is known as a zero-point error, where the sensor reading deviates from the actual value when the input (in this case, weight) is zero.

An easy way to understand zero-point error:

  • Put nothing on: it reads 3 g (0 g + 3 g)
  • That is zero-point error

So, in summary, an offset error is like the scale that always adds 5g to whatever it’s weighing (whether it’s cheese, bread, or anything else), while zero-point error is like the scale indicating 3g when there’s nothing on it.

In other words, zero-point error is a form of offset error: zero-point error describes only the deviation at zero input, while offset error refers to the consistent deviation across all measurement points.

Offset error and Span error

Offset error and span error are two types of systematic errors that can affect pressure sensors, and they relate to different aspects of a sensor’s performance.

Offset Error is a constant error that is present across all measurements. It can be thought of as a ‘shift’ in the sensor’s output.

For example,

If a sensor is supposed to output 0 mV at 0 psi, but instead outputs 2 mV, it has an offset error of 2 mV. This error is consistent, meaning if the pressure is 10 psi and the output should be 100 mV, but the sensor reads 102 mV, the offset error is still 2 mV.

Span Error, on the other hand, is related to the ‘scale’ of the sensor’s output.

It’s an error that affects the gain of the sensor’s output, which is the ratio of the output change to the input change.

For instance, if a sensor is supposed to output 100 mV for a 10 psi change in pressure (a gain of 10 mV/psi), but instead outputs 110 mV (a gain of 11 mV/psi), it has a span error.

In terms of their relationship, offset error and span error are independent of each other. A sensor can have an offset error without having a span error, and vice versa.

For instance, consider a sensor with a perfect gain of 10 mV/psi but an offset error of 2 mV.

At 0 psi, it outputs 2 mV (offset error), and at 10 psi, it outputs 102 mV – the correct output of 100 mV plus the 2 mV offset.

This sensor has no span error because the gain is correct, but it does have an offset error.

In contrast, consider a sensor with a perfect zero point (0 mV at 0 psi) but a span error. At 0 psi, it outputs 0 mV, but at 10 psi, it outputs 110 mV instead of the expected 100 mV. This sensor has no offset error, but it does have a span error.

To ensure accurate pressure measurements, it’s important to identify and correct both offset and span errors. This is typically done through calibration, where known pressures are applied, and the sensor’s output is adjusted to match the expected values.
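A two-point calibration along these lines can be sketched as follows, using the offset-error numbers from the text; the function names are illustrative, not from any library:

```python
def calibrate_two_point(raw_zero_mv, raw_fs_mv, ideal_zero_mv, ideal_fs_mv):
    """Derive gain and offset corrections from readings taken at zero
    pressure and at full scale, and return a correction function.
    This removes both offset error and span error in one step."""
    gain = (ideal_fs_mv - ideal_zero_mv) / (raw_fs_mv - raw_zero_mv)
    def correct(raw_mv):
        return ideal_zero_mv + gain * (raw_mv - raw_zero_mv)
    return correct

# Sensor from the text: a constant 2 mV offset (102 mV at 10 psi
# instead of 100 mV); ideal output is 0 mV at 0 psi, 100 mV at 10 psi.
correct = calibrate_two_point(raw_zero_mv=2, raw_fs_mv=102,
                              ideal_zero_mv=0, ideal_fs_mv=100)
print(correct(2))    # 0.0 mV  -> 0 psi
print(correct(52))   # 50.0 mV -> 5 psi
print(correct(102))  # 100.0 mV -> 10 psi
```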

Take an example to understand Offset error & Span error

Let’s consider a simple analogy of a weighing scale to understand the concepts of offset error and span error.

Imagine a fruit vendor using a scale to measure the weight of apples. When there are no apples on the scale, it should read 0 kilograms. However, due to some error in the scale, it reads 1 kilogram. This is similar to an offset error in a pressure sensor, where the sensor gives a non-zero reading when the pressure is zero.

Now, suppose the vendor puts a 2-kilogram bag of apples on the scale. Because of the earlier offset error, the scale reads 3 kilograms (2 kg of apples + 1 kg offset).

An easy way to understand offset error:

  • Put nothing on, it reads 1 kg (0kg+1kg)
  • Put a 3-kg bag, it reads 4 kg (3kg+1kg)
  • Put a 30-kg bag, it reads 31 kg (30kg+1kg)
  • Put a 300-kg bag, it reads 301 kg (300kg+1kg)

No matter how many apples are weighed, the scale always reads 1 kilogram too much because of the offset error.

Next, let’s consider span error.

Let’s say the vendor has another scale that correctly reads 0 kilograms when it’s empty. However, when the vendor puts the 2-kilogram bag of apples on it, the scale reads 4 kilograms.

An easy way to understand span error:

  • Put nothing on, it reads 0 kg
  • Put a 2-kg bag, it reads 4 kg (2*2kg)
  • Put a 20-kg bag, it reads 40 kg (2*20kg)
  • Put a 200-kg bag, it reads 400 kg (2*200kg)

Here, the error is not just a constant shift; the scale is actually doubling the weight of the apples. This is an example of span error, where the sensor’s output is a certain proportion off from the actual value across the whole range of measurement.

Offset error is like a scale that always reads 1 kilogram too much, no matter the weight of the apples.

Span error is like a scale that doubles the actual weight of the apples.

Both errors can affect the accuracy of measurements.

Temperature Hysteresis

What is Temperature Hysteresis in a Pressure Sensor?

Temperature hysteresis is a specific type of error that can affect pressure sensors.

It refers to the phenomenon where the sensor’s response to temperature changes depends not only on the current temperature but also on the history of temperatures the sensor has been exposed to.

For example,

  • Let’s consider a pressure sensor that has been operating at 20°C and is then exposed to a temperature of 30°C.
  • The sensor’s output might change by a certain amount due to this temperature increase.
  • If the sensor is then cooled back down to 20°C, you might expect its output to return to the original value.

However, because of temperature hysteresis, the sensor’s output at 20°C after being heated and then cooled may not be the same as its output when it was first at 20°C.

This can be depicted with a graph where the X-axis represents the temperature and the Y-axis represents the sensor’s output. Without hysteresis, the graph would be a simple curve, but with hysteresis, the curve forms a loop.

Temperature Hysteresis Pressure Sensor-eastsensor

In terms of technical specifications, temperature hysteresis, like pressure hysteresis, is typically expressed as a percentage of full scale (%FS).

For instance, a datasheet might specify the temperature hysteresis as ±0.2% FS.

This means that if the full-scale output of the sensor is 100 psi, the output, after going through a cycle of temperature changes and returning to the original temperature, could be off by as much as ±0.2 psi.
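The %FS arithmetic above can be sketched in a few lines (a hypothetical helper, not part of any datasheet):

```python
def hysteresis_band(full_scale, hysteresis_pct_fs):
    """Worst-case output shift after a temperature cycle, given a
    temperature-hysteresis spec in percent of full scale."""
    return full_scale * hysteresis_pct_fs / 100.0

# Spec from the text: +/-0.2 %FS on a 100 psi full-scale sensor
print(hysteresis_band(100, 0.2))  # 0.2 psi either way
```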

To minimize the effects of temperature hysteresis, manufacturers use materials with low hysteresis characteristics and apply special manufacturing processes.

In some cases, temperature hysteresis can be compensated for in software, especially in smart sensors where an onboard microcontroller can apply the necessary corrections to the output signal.

What is the difference between Temperature Hysteresis and Pressure Hysteresis, and how are they related?

Temperature hysteresis and pressure hysteresis are related, but they refer to different phenomena in the context of pressure sensors.

Temperature Hysteresis:

This refers to the change in the sensor’s output at a particular pressure due to changes in temperature, even when the pressure returns to its initial value.

For example,

If a sensor’s output is 0.47437 mV/V at 12 bar @25°C;

it might read 0.47061 mV/V at 12 bar @-40°C;

When heated back to 25°C, the reading might now be 0.47407 mV/V at 12 bar @25°C.

The data above was derived from a real test of the ESS01 MCS Pressure Sensor.

Pressure Sensor Hysteresis -2-ESS01

Pressure Sensor Hysteresis -ESS01

Click to download the datasheet: Datasheet of ESS01 Pressure Sensor
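As a quick sketch, the temperature hysteresis in that ESS01 test data can be quantified in a few lines of Python. Since the sensor’s full-scale span is not restated here, the percentage below is expressed relative to the 12 bar reading rather than %FS:

```python
# Quantify temperature hysteresis from the ESS01 test readings above.
# Values are in mV/V at 12 bar; the percentage is relative to the
# 12 bar reading because the full-scale span is not given here.
initial_25c = 0.47437   # mV/V at 12 bar, 25°C, before cooling
returned_25c = 0.47407  # mV/V at 12 bar, back at 25°C after -40°C

hysteresis = abs(initial_25c - returned_25c)     # 0.00030 mV/V
hysteresis_pct = 100 * hysteresis / initial_25c  # roughly 0.06 %

print(f"Temperature hysteresis: {hysteresis:.5f} mV/V ({hysteresis_pct:.3f} %)")
```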

Temperature hysteresis can be more pronounced in certain sensor materials and designs.

For example,

Silicon-based pressure sensors may exhibit different levels of temperature hysteresis compared to those based on other materials like stainless steel or titanium.

The effects of temperature hysteresis can be particularly noticeable in applications where the sensor is exposed to wide temperature swings or rapid temperature changes.

For example, in outdoor applications, pressure sensors may be exposed to significant daily and seasonal temperature variations.

In such cases, temperature hysteresis could cause significant shifts in the sensor’s output, even if the actual pressure remains constant.

Pressure Hysteresis:

This is the variation in the sensor’s output at a particular pressure, depending on whether the pressure is increasing or decreasing.

Let’s say a sensor reads 50 psi as the pressure increases from 40 psi.

If the pressure is then decreased from 60 psi to 50 psi, the sensor might now read 49.5 psi. This difference is due to pressure hysteresis.

In terms of technical specifications, both types of hysteresis are often expressed as a percentage of full scale (%FS).

For example, a sensor might have a temperature hysteresis of ±0.2% FS and a pressure hysteresis of ±0.1% FS.

This means that the sensor’s reading could vary by up to ±0.2% of the full-scale reading due to temperature changes and by up to ±0.1% due to pressure changes.

Pressure Sensor Hysteresis Linearity Repeatability

Pressure hysteresis can impact the measurement accuracy and repeatability of a pressure sensor in cyclic or pulsating pressure applications. Imagine a scenario where a pressure sensor is used to monitor the pressure in a system that regularly cycles between high and low pressures.

If the sensor exhibits significant pressure hysteresis, the sensor’s readings at a particular pressure point could vary depending on whether the pressure is on an upswing or a downswing.

Minimizing Hysteresis

The strategies to minimize these two types of hysteresis are similar and include careful material selection, advanced design techniques, precise manufacturing processes, and calibration and compensation techniques.

In addition, modern digital pressure sensors can incorporate real-time compensation algorithms to further reduce the impact of hysteresis on the sensor’s output.

However, they are typically addressed separately in the sensor design and specification because they are caused by different physical phenomena.

For details on how to reduce both kinds of hysteresis, see the last part of this post.

Take two easy-to-understand examples

1. Temperature Hysteresis Example

Suppose we have a pressure sensor with a full-scale range of 100 psi and a specified temperature hysteresis of ±0.5% FS.

We start at room temperature (20°C), and the sensor reads a pressure of 60 psi.

If the temperature rises to 40°C, the sensor reading might change, let’s say to 61 psi.

This change is expected due to the temperature coefficient of the sensor.

However, if we cool the sensor back down to 20°C, we might expect the sensor to read 60 psi again, but due to temperature hysteresis, it might read 60.3 psi.

This is because of the ±0.5% FS temperature hysteresis, which means the reading could vary by as much as 0.5 psi (0.005 * 100 psi) after a cycle of temperature changes.

2. Pressure Hysteresis Example

Now, let’s consider a pressure sensor with a full-scale range of 100 psi and a specified pressure hysteresis of ±0.2% FS.

Let’s suppose the sensor reads 50 psi as the pressure is increasing from 40 psi.

If the pressure then decreases back down to 50 psi from a higher pressure (say 60 psi), we might expect the sensor to read 50 psi again.

However, due to pressure hysteresis, the sensor might now read 49.8 psi.

This is because of the ±0.2% FS pressure hysteresis, which means the reading could vary by as much as 0.2 psi (0.002 * 100 psi) after a cycle of pressure changes.

Pressure Sensor Hysteresis-eastsensor

These examples illustrate the importance of understanding and compensating for hysteresis in pressure sensors, particularly in applications where high accuracy is required.
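The two worked examples above can be checked with a few lines of Python; the readings and %FS specs are the hypothetical figures from the examples, and a tiny tolerance is added to the comparison to absorb floating-point rounding:

```python
# Check both worked examples against their %FS hysteresis specs.
full_scale = 100.0  # psi, both example sensors

examples = {
    # name: (spec in %FS, reading before cycle, reading after cycle)
    "temperature hysteresis": (0.5, 60.0, 60.3),
    "pressure hysteresis":    (0.2, 50.0, 49.8),
}

results = {}
for name, (spec_pct, before, after) in examples.items():
    allowed = full_scale * spec_pct / 100        # psi
    observed = abs(after - before)               # psi
    # small tolerance so float rounding doesn't flip the verdict
    results[name] = observed <= allowed + 1e-9
    status = "within spec" if results[name] else "out of spec"
    print(f"{name}: observed {observed:.1f} psi vs allowed "
          f"±{allowed:.1f} psi -> {status}")
```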

Solutions to reduce both Temperature and Pressure Hysteresis

Reducing both temperature and pressure hysteresis in pressure sensors involves multiple strategies, each of which contributes to the overall accuracy and reliability of the sensor.

Material Selection:

Certain materials, such as specific grades of stainless steel or certain types of piezoresistive materials, exhibit lower hysteresis characteristics compared to others. These materials are often used in high-accuracy pressure sensors to minimize hysteresis.

Design Techniques:

The design of the pressure sensing element can significantly influence hysteresis. For instance, employing specific diaphragm designs or stress relief structures can help reduce both temperature and pressure hysteresis.

Manufacturing Processes:

Advanced manufacturing techniques, such as certain curing processes or annealing, can reduce the propensity of the sensor materials to exhibit hysteresis.

High-precision machining and assembly processes can also help ensure that the sensing elements perform as expected across a wide range of temperatures and pressures.

Calibration:

Calibration across the sensor’s full temperature and pressure range can help account for hysteresis effects. During calibration, the sensor’s output is recorded at various known temperatures and pressures, and these data are used to establish a correction curve or table.

Compensation Techniques:

Modern pressure sensors often incorporate microcontrollers that can apply real-time corrections to the sensor’s output based on the calibration data. These digital compensation techniques can significantly reduce the effects of both temperature and pressure hysteresis.
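One common form of such digital compensation is a calibration lookup table with interpolation. The sketch below assumes hypothetical raw ADC counts recorded at known pressures during calibration; a real smart sensor would also index the table by temperature and hysteresis branch:

```python
import bisect

# Hypothetical calibration table: raw ADC counts recorded at known
# pressures during calibration (rising-pressure branch).
cal_raw = [1000, 9000, 17000, 25000, 33000]  # raw counts (assumed)
cal_psi = [0.0, 25.0, 50.0, 75.0, 100.0]     # known applied pressures

def compensate(raw: int) -> float:
    """Map a raw reading to pressure by linear interpolation
    between the two nearest calibration points."""
    i = bisect.bisect_right(cal_raw, raw)
    i = min(max(i, 1), len(cal_raw) - 1)
    x0, x1 = cal_raw[i - 1], cal_raw[i]
    y0, y1 = cal_psi[i - 1], cal_psi[i]
    return y0 + (raw - x0) * (y1 - y0) / (x1 - x0)

print(compensate(13000))  # midway between 9000 and 17000 -> 37.5 psi
```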

Sensor Fusion:

In some cases, the use of multiple sensors, possibly of different types or technologies, can help to compensate for the hysteresis of individual sensors. The outputs from the different sensors are combined, often in a weighted manner, to produce a final pressure reading.
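A minimal sketch of that weighted combination, assuming two hypothetical sensors whose weights are taken as the inverse of their specified hysteresis error bands (lower error, more weight):

```python
# Weighted sensor fusion: combine two hypothetical readings,
# weighting each by the inverse of its hysteresis error band.
readings = [50.4, 50.1]     # psi, from sensor A and sensor B
error_bands = [1.0, 0.25]   # ±psi hysteresis spec for each sensor

weights = [1 / e for e in error_bands]
fused = sum(w * r for w, r in zip(weights, readings)) / sum(weights)
print(f"Fused reading: {fused:.2f} psi")  # pulled toward sensor B
```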

Note:

While these strategies can significantly reduce the effects of both temperature and pressure hysteresis, it’s important to note that completely eliminating hysteresis is currently beyond the reach of modern technology due to the inherent physical properties of the materials used in sensor construction.

However, with careful design, material selection, manufacturing, and calibration, it’s possible to create pressure sensors that offer high accuracy and reliability, even under changing temperature and pressure conditions.

Zero Point Error of Pressure Sensor

Zero-point error is important for pressure sensors because it refers to the error or deviation from the true zero pressure reading. This error can occur due to various factors such as sensor drift, temperature changes, or manufacturing tolerances.

If the pressure sensor has a non-zero output when there is no applied pressure, it can lead to incorrect measurements and inaccurate data. This error can be especially critical in applications where precise pressure measurements are crucial, such as in industrial processes, medical devices, or aerospace systems.

What is the Zero-point Error of the pressure sensor?

The zero-point error of a pressure sensor refers to how accurately it measures zero pressure. Ideally, at zero pressure, the sensor output should read exactly zero. However, in reality, most pressure sensors will output a slight measurement error even when no pressure is applied.

This zero-point error can result from many factors, like residual stress in the sensor materials, hysteresis effects, nonlinearity in the sensor, and temperature changes. The zero-point error spec provides an indication of a sensor’s accuracy at measuring very low pressures near zero.

A lower zero-point error specification means the sensor reads closer to zero when no pressure is present, indicating better accuracy for measuring very small pressures. The zero-point error is often listed as ±% of the sensor’s full scale output (FSO).

For example:

Zero-point error: ±1% FSO

This means the sensor output could read anywhere from -1% to +1% of its maximum rated output at zero pressure. So for a sensor with a 100 psi full-scale range, the zero-point error would be ±1 psi.
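That conversion from a ±%FSO spec to an error band is simple enough to write as a small helper; the 100 psi range and the spec values are the figures from the example:

```python
def zero_point_band(full_scale: float, error_pct: float) -> tuple:
    """Return the (low, high) band a sensor may read at zero
    pressure, given its zero-point error as +/-% of full scale."""
    err = full_scale * error_pct / 100
    return (-err, +err)

print(zero_point_band(100, 1.0))   # ±1% FSO   -> (-1.0, 1.0) psi
print(zero_point_band(100, 0.25))  # ±0.25% FSO -> (-0.25, 0.25) psi
```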

A pressure sensor with a lower zero-point error specification, like ±0.25% FSO will be more accurate at measuring pressures very close to zero, whereas a sensor with ±2% FSO zero-point error will exhibit a larger measurement error near zero pressure.

What is the difference between zero-point error and span error?

Zero-point and span errors are key specifications that indicate a pressure sensor’s accuracy. They refer to:

Zero-point error: The error in the sensor’s output when no pressure is applied. Measures the sensor’s accuracy at or near zero pressure.

Pressure Sensor Zero point error-2

Span error: The error in the sensor’s output at its full scale pressure rating. Measures the sensor’s accuracy at its maximum measurable pressure.

Pressure Sensor Span error

Both errors reflect the deviation from the sensor’s ideal input-output (pressure-to-signal) transfer curve.

An ideal sensor’s zero-point error would be 0%, indicating the output is exactly 0 at zero pressure. And the span error would also be 0%, meaning the output matches the full-scale pressure rating exactly.

However, in real sensors, there are always some errors. The zero-point and span errors indicate:

  • How linear the sensor’s response is across its pressure range
  • How much the sensor’s output deviates from the ideal transfer curve
  • The overall accuracy of the sensor

So a pressure sensor with lower zero-point and span error specifications, like ±0.1% for both, will provide more accurate and linear measurements compared to a sensor with ±1% errors.

The errors tend to behave differently based on the sensor design and manufacturing variation. So, specifying both the zero-point and span errors gives a more complete picture of a sensor’s performance and accuracy.

How zero-point error affects pressure sensor accuracy

A pressure sensor’s zero-point error specification indicates how accurately it can measure pressures near zero. A lower zero-point error means the sensor is more accurate at detecting very small pressure changes.

For example, let’s consider two pressure sensors with these specifications:

  • Sensor A:
    • Zero-point error: ±1% full scale
    • Range: 0 to 100 psi
  • Sensor B:
    • Zero-point error: ±0.1% full scale
    • Range: 0 to 100 psi

Both sensors measure pressures from 0 to 100 psi. But Sensor B has a lower zero-point error of ±0.1%, while Sensor A’s is ±1%.

This means:

  • At zero pressure (0 psi), Sensor A’s reading could be off by up to ±1 psi (1% of 100 psi). So its measured value could range from -1 psi to +1 psi.
  • In contrast, at 0 psi, Sensor B’s reading would only be off by ±0.1 psi (0.1% of 100 psi). Its measured value would be closer to the actual 0 psi value.

So for measuring very small pressure changes near zero psi, Sensor B with the lower zero-point error will provide more accurate readings. It can detect smaller variations in pressure.

How to reduce zero-point error?

Improve design and materials –

Using stiffer materials, more symmetrical sensor designs, and optimized dimensions can minimize residual stress and hysteresis effects that cause zero-point error.

Using stiffer materials that are less prone to residual stress, hysteresis and drift, along with symmetrical sensor designs that distribute stresses and deflect more uniformly, can all reduce the various sources of zero-point errors in pressure sensors. This, in turn, enables the manufacture of sensors with lower zero-point error specifications and higher accuracy near zero pressure.

However, it requires engineering and materials expertise.

Increase the precision of manufacturing –

Tighter tolerances, more precise machining, and higher-accuracy assembly can reduce manufacturing variations that introduce zero-point error.

Improved dimensional tolerances can ensure that the sensor’s structures and components are manufactured within tighter acceptable limits. This means less variability in key parameters like thickness, length, spacing, alignment, etc.

Variations in dimensions beyond the tolerances can introduce uncertainties that contribute to zero-point error. So tighter tolerances translate to less dimensional variations and therefore lower zero-point error.

Perform offset trimming –

Mechanically or electrically adjusting the sensor’s output offset to minimize error at zero pressure. This is done during sensor calibration and testing. The offset is “trimmed” until the output at zero pressure is as close to the ideal value as possible.

Here are the steps:

  1. Place the sensor at zero pressure – This ensures no actual pressure is applied to the sensor that would cause a non-zero output.
  2. Measure the sensor’s initial output – This gives you a baseline reading of the sensor’s offset output at zero pressure, before any adjustments are made.
  3. Make an initial mechanical or electrical adjustment – This depends on the sensor type:
    • For diaphragm sensors, bend the diaphragm slightly to shift the output closer to zero.
    • For strain gauge sensors, adjust a potentiometer or trim resistor to alter the output voltage or current.
    • For silicon pressure sensors, tune a reference voltage to change the offset.
  4. Measure the sensor’s new output at zero pressure.
  5. Compare the new output to the ideal zero value (e.g. 0 V, 4 mA).
  6. If needed, make further small adjustments in the same direction as before, then remeasure the output.
  7. Repeat steps 4 through 6 until the sensor’s output at zero pressure matches the ideal value within the specified tolerances.
  8. Record the final offset adjustment made and the resulting zero-point error for calibration data.
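The adjust-measure-repeat loop above can be simulated in a few lines of Python; everything here (the initial offset, step size, tolerance, and the `read_output` hook) is hypothetical and stands in for real hardware:

```python
# Simulation sketch of the iterative offset-trimming loop above.
TRUE_OFFSET = -0.042   # V: hypothetical raw output at zero pressure
TOLERANCE = 0.001      # V: acceptable residual zero-point error
STEP = 0.002           # V: size of each trim adjustment

trim = 0.0

def read_output() -> float:
    """Sensor output at zero pressure after the current trim."""
    return TRUE_OFFSET + trim

# Adjust in the direction that moves the output toward zero,
# remeasure, and repeat until within tolerance.
for _ in range(1000):
    error = read_output()
    if abs(error) <= TOLERANCE:
        break
    trim += STEP if error < 0 else -STEP

print(f"Final trim {trim:+.3f} V, residual {read_output():+.4f} V")
```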

Perform digital compensation –

Using a microcontroller or compensation circuit to mathematically calculate and remove sources of zero-point error based on sensor data and models of error sources. This is an example of “self-compensation”.

By characterizing sources of zero-point error like offset, temperature sensitivity, and nonlinearity – and then applying compensation algorithms to mathematically remove these errors from the sensor output in real-time – the microcontroller is able to reduce the overall zero-point error of the sensor and improve its performance.

Use temperature compensation

Characterizing and compensating for the sensor’s change in zero-point error with temperature can also reduce errors from this source.

Temperature fluctuations cause the sensor’s physical dimensions and internal stresses to change. These changes introduce an apparent pressure that shows up as zero-point error, so temperature swings can significantly affect a pressure sensor’s zero-point error.

For example, as temperature increases, sensor materials may expand introducing additional stresses that contribute to an offset output even at zero pressure. Therefore, characterizing how the sensor’s zero-point error changes with temperature is critical.

During testing, the sensor’s zero-point error is measured at different temperatures:

  • At 20°C, the zero-point error may be ±0.2% full scale
  • At 50°C, it may increase to ±0.4% full scale
  • At -10°C, it may decrease to ±0.1% full scale

ESS501V Testing Data

img: Testing data of ESS501V at 20.1°C, 30.1°C and 39.9°C

From this data, a temperature compensation model is developed to describe the sensor’s change in zero-point error over its full temperature range.

Then during operation, the sensor’s temperature is continuously monitored. The temperature compensation model is used to calculate what the sensor’s zero-point error should be at that temperature. So a corresponding adjustment is applied to the sensor output to compensate for and effectively “remove” that temperature-induced zero-point error.

The result is that the zero-point error is significantly reduced over the sensor’s full operating temperature range compared to without compensation.
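A minimal sketch of such a compensation model, assuming hypothetical signed zero offsets (in %FS) measured at three characterization temperatures; a real sensor would use its own measured values and a denser table:

```python
# Temperature compensation of zero-point error via a small
# characterization table with linear interpolation.
cal_temps = [-10.0, 20.0, 50.0]     # deg C, characterization points
cal_zero_offsets = [0.1, 0.2, 0.4]  # %FS, signed offsets at 0 psi (assumed)

def zero_offset_at(temp_c: float) -> float:
    """Linearly interpolate the modeled zero offset for this temperature."""
    if temp_c <= cal_temps[0]:
        return cal_zero_offsets[0]
    for (t0, z0), (t1, z1) in zip(
            zip(cal_temps, cal_zero_offsets),
            zip(cal_temps[1:], cal_zero_offsets[1:])):
        if temp_c <= t1:
            return z0 + (temp_c - t0) * (z1 - z0) / (t1 - t0)
    return cal_zero_offsets[-1]

def compensate(raw_pct_fs: float, temp_c: float) -> float:
    """Subtract the modeled zero offset from a raw reading (%FS)."""
    return raw_pct_fs - zero_offset_at(temp_c)

# At 35 deg C the model predicts a 0.3 %FS offset, which is removed.
print(compensate(0.3, 35.0))
```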

Take an example to clarify zero-point error

Here is an example to clarify zero-point error:

Let’s assume that you have a pressure sensor with the following specifications:

  • Range: 0 to 100 psi
  • Full scale output: 5 V
  • Linearity: ±1% full scale
  • Zero-point error: ±2% full scale

This means:

  • The sensor’s output should increase linearly from 0 V at 0 psi up to 5 V at 100 psi.
  • The output will be within ±1% of the ideal linear output over the full pressure range.
  • At 0 psi, the output could read anywhere from -2% to +2% of full scale, which is -100 mV to +100 mV.

So at zero pressure, the sensor’s actual output could range from -0.1 V to +0.1 V.

This ±0.1 V variation is the sensor’s zero-point error, indicating a lack of accuracy near zero pressure.

Now consider another sensor with:

  • Zero-point error: ±0.5% full scale

This means:

  • At 0 psi, the output will only vary between -0.5% to +0.5% of full scale, which is -25 mV to +25 mV.
  • So the sensor’s actual zero pressure output will range from -0.025 V to +0.025 V.

This ±0.025 V variation is a significantly lower zero-point error, indicating higher measurement accuracy for very low pressures close to zero.

Obviously, a lower zero-point error specification like ±0.5% full scale instead of ±2% will:

  • Allow the sensor output to vary less at zero pressure
  • Result in measurements that are closer to the ideal (error-free) output at 0 psi
  • Indicate better resolution and accuracy for detecting very small pressure changes near zero pressure
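The voltage-band arithmetic from this example can be written as a small helper; the 0–5 V output and the two spec values come straight from the figures above:

```python
# Convert a zero-point error spec (%FS) into the output-voltage band
# at zero pressure for the 0-5 V example sensor above.
full_scale_v = 5.0  # V, full-scale output

def zero_band_mv(error_pct: float) -> tuple:
    """Return the (low, high) output band in mV at zero pressure."""
    err_mv = full_scale_v * error_pct * 10  # %FS of 5 V, in mV
    return (-err_mv, +err_mv)

print(zero_band_mv(2.0))  # ±2% FS   -> (-100.0, 100.0) mV
print(zero_band_mv(0.5))  # ±0.5% FS -> (-25.0, 25.0) mV
```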