
Sensor Reliability: Challenges and Improvements



One of the major challenges and priorities in sensor technology is reliability. Reliability, however, is not a single measurable quantity; it is the result of a number of considerations, among them:
  • The complexity of today’s sensor systems and networks
  • Sensor “consensus,” required in sensor fusion
  • Integration of diverse sensors and electronics into ever-smaller devices
  • The addition of advanced sensor intelligence
  • The harsh and extreme environments into which sensors are placed
From the above it’s easy to see that reliability is a moving target. To add to the difficulty, sensors exist to produce measurements. What if the sensor, or in the case of sensor fusion, multiple sensors, capture information that is converted to digital data for further evaluation, and that data is faulty? On occasion, at least some sensor data is going to be imprecise or inconsistent. Depending on signal strength and transmission accuracy, another dimension of failure may be added: the data may not arrive at all. Sensor data is also susceptible to errors and interference such as noise. For all of these reasons, engineers must be able to quantify the imprecision, and hence the true reliability, of the data they gather.
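As a simple illustration of quantifying imprecision, the Python sketch below (my own, not from any product or paper discussed here) estimates the mean, the standard uncertainty, and the transmission-loss rate of a series of repeated readings, with dropped samples assumed to arrive as None:

# Minimal sketch: quantifying the imprecision of repeated sensor readings.
# Assumes dropped transmissions arrive as None; values are in sensor units.
import math

def summarize(readings):
    received = [r for r in readings if r is not None]
    n = len(received)
    loss = 1 - n / len(readings)          # fraction lost in transmission
    mean = sum(received) / n
    var = sum((r - mean) ** 2 for r in received) / (n - 1)
    sem = math.sqrt(var / n)              # standard error of the mean
    return mean, sem, loss

mean, sem, loss = summarize([20.1, 19.8, None, 20.3, 20.0, None, 19.9])
print(f"mean={mean:.2f}, uncertainty=±{sem:.2f}, packet loss={loss:.0%}")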

While it would seem that working with multiple sensors compounds the problem, the addition of data from multiple sensors actually helps in the long run. When measured data from several information sources is combined with intelligence from expert sources, more reliable and accurate results are obtained.
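One common way such a combination is performed, shown here only as an illustrative sketch, is inverse-variance weighting: each sensor’s reading is weighted by how noisy it is, and the fused estimate carries a smaller variance than any single input. The figures below are invented for illustration:

# Minimal sketch: inverse-variance weighted fusion of independent sensors.
# Each sensor reports (value, variance); lower variance earns more weight.
def fuse(estimates):
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total             # fused value and fused variance

value, variance = fuse([(20.1, 0.25), (19.7, 0.04), (20.4, 1.0)])
print(f"fused={value:.2f} ± {variance ** 0.5:.2f}")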

There are several methods of evaluating reliability. Each of the components that make up or affect reliability can be evaluated individually, as well as considered together in a total error band figure. The components involved include sensitivity, range, precision, resolution, accuracy, offset, linearity, dynamic linearity, hysteresis, and response time. Accuracy, for example, can be assessed from manufacturer specifications, confidence and reliability calculation techniques, and operating time.
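How the individual components are folded into a total error band varies by vendor; one common convention, when the error sources are independent, is root-sum-square (RSS) combination. A minimal sketch under that assumption, with invented example figures:

# Minimal sketch: combining independent error components into a total
# error band via root-sum-square (one common convention; vendors vary).
import math

def total_error_band(components):
    return math.sqrt(sum(e ** 2 for e in components.values()))

errors_pct_fs = {                 # example figures in % of full scale
    "offset": 0.10,
    "linearity": 0.05,
    "hysteresis": 0.03,
    "temperature_drift": 0.08,
}
print(f"total error band: {total_error_band(errors_pct_fs):.3f} % FS")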

Evaluating sensor reliability also draws on probabilistic and statistical data that increase estimation reliability. Evidence theory, better known as the Dempster-Shafer theory of belief functions, is often used; briefly put, it allows one to combine evidence from different sources and arrive at a degree of belief that takes into account all of the available evidence.
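To make that concrete, the sketch below implements Dempster’s rule of combination for two mass functions over the same frame of discernment: mass assigned to conflicting hypotheses is discarded and the remainder renormalized. The two sensors’ masses are illustrative only:

# Minimal sketch: Dempster's rule of combination for two belief-mass
# functions. Focal elements are frozensets over a frame of discernment.
def combine(m1, m2):
    fused, conflict = {}, 0.0
    for b, p in m1.items():
        for c, q in m2.items():
            a = b & c
            if a:
                fused[a] = fused.get(a, 0.0) + p * q
            else:
                conflict += p * q         # mass on an empty intersection
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Two sensors weigh the hypotheses "fault" (F) vs. "no fault" (N);
# mass on the full frame {F, N} expresses each sensor's uncertainty.
F, N, FN = frozenset("F"), frozenset("N"), frozenset("FN")
sensor1 = {F: 0.6, N: 0.1, FN: 0.3}      # illustrative masses
sensor2 = {F: 0.5, N: 0.2, FN: 0.3}
print(combine(sensor1, sensor2))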

New findings on how "smart sensors" function give researchers the ability to improve their reliability. One method, proposed by J. Narayan of North Carolina State University, focuses on how sensors work and the materials used. In the Journal of Applied Physics,[1] Narayan recently described how vanadium oxide sensors work when attached to silicon chips. Normally, sensors are hardwired to a computer. But by making the sensor part of the computer chip itself, you have a smart sensor that can sense, manipulate, and respond to information. Understanding how sensors react and work together provides the basis for improved reliability, even in diverse temperature, environmental, and pressure conditions.

Another consideration is that when a single vendor with a narrow total error band guarantee provides the sensors in a design, the chances of arriving at a more reliable solution improve. The single subsystem source becomes completely responsible for quality and reliability, as well as for helping you through design challenges.

Where reliability is critically important

There are many applications that require reliability. Let’s look at one of the many applications in which the absence of reliability can literally mean life and death. At Washington University’s School of Medicine in St. Louis, Missouri, researchers reported on an in-depth clinical trial that assessed the feasibility of wireless sensor networks for patient monitoring in general hospital units. In this paper[2], the authors provide a detailed analysis of the sensor system’s reliability. To quantify the reliability of the clinical monitoring system, they introduced the metrics listed below (a sketch of computing them from a packet log follows the list):
  • Network reliability is the fraction of packets delivered to the base station.
  • Sensing reliability is the fraction of packets delivered to the base station that had valid patient pulse and oxygenation readings. The pulse oximeter indicates the validity of each reading and uses an error code to represent invalid readings. The system sends both the valid readings and the error codes to the base station for reliability analysis.
  • Time-to-failure is the time interval during which a component operates continuously without a failure. A network failure refers to the case when a packet is not delivered to the base station, while a sensing failure refers to the pulse-oximeter obtaining an invalid measurement. The time-to-failure is a measure of how frequently failures occur.
  • Time-to-recovery is the time interval from the occurrence of a failure until the component recovers. The time-to-recovery is a measure of how quickly a component recovers after a failure.
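As promised above, here is a minimal sketch of computing these metrics from a packet log; the per-interval (delivered, valid) record format is my assumption, not the paper’s:

# Minimal sketch: computing the four reliability metrics from a packet
# log, one (delivered, valid) record per sampling interval (assumed).
def run_lengths(flags, value):
    """Lengths of consecutive runs of `value` in `flags`."""
    runs, count = [], 0
    for f in flags:
        if f == value:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

def metrics(log):
    delivered = [d for d, _ in log]
    network_reliability = sum(delivered) / len(log)
    valid = [v for d, v in log if d]      # only delivered packets count
    sensing_reliability = sum(valid) / len(valid) if valid else 0.0
    ttf = run_lengths(delivered, True)    # intervals between failures
    ttr = run_lengths(delivered, False)   # intervals until recovery
    return network_reliability, sensing_reliability, ttf, ttr

log = [(True, True), (True, False), (False, False),
       (True, True), (True, True), (False, False), (True, True)]
net, sense, ttf, ttr = metrics(log)
print(f"network={net:.2f} sensing={sense:.2f} TTF={ttf} TTR={ttr}")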
The researchers found that the quality of pulse and oxygenation sensor readings was significantly affected by patient movement (which included movement of the arm on which the pulse oximeter was placed, finger tapping, or fidgeting, all of which may lead to invalid readings), sensor disconnections, sensor placement, and, believe it or not, nail polish on the patient’s fingers.

Interestingly, the researchers also found that the most unreliable part of the system was the 802.11 wireless link from the base station to the hospital’s wireless infrastructure. The poor link quality often prevented them from logging into the base station to determine if valid readings were obtained from the monitored patients.

A few good devices

Let’s now look at some representative sensor devices developed with stability and accuracy (and, hence, reliability) in mind. One example of an integrated IC that can be used as a switch in medical applications is NXP’s PCF8883 capacitive proximity switch with auto-calibration. The switch uses a digital method to detect a change in capacitance on a remote sensing plate; changes in capacitance are automatically compensated through continuous calibration.

Figure 1: NXP’s PCF8883 capacitive proximity switch. (Courtesy of NXP.)

Remote sensing plates can be connected directly to the IC or remotely via a coaxial cable. In the latter case, by adjusting the sensitivity of the device, the automatic calibration can preserve reliability at sensing distances of up to several meters.

Designed to meet stringent power budgets in mobile phones, the ADXL345 (Figure 2) from Analog Devices is a three-axis digital/MEMS motion sensor featuring resolution of 4 mg/LSB (least-significant bit) across all g ranges, single tap and double tap detection, activity and inactivity detection, free-fall detection, and user-programmable threshold levels. The device requires just 23 μA in measurement mode and 0.1 μA in standby mode at VS = 2.5 V typ. It includes I2C and three- and four-wire SPI (serial peripheral interface) digital interfaces and operates over a supply-voltage range of 1.8 to 3.6 V.

The ADXL345 has an output data rate that scales from 0.1 Hz to 3.2 kHz, unlike competing devices with fixed 100 Hz, 400 Hz, or 1 kHz data rates. This allows portable-system designers to better manage energy consumption by precisely allocating power for a given system function and reserving unused power for other uses. The ADXL345 measures dynamic acceleration resulting from motion or shock and, with a 10,000 g shock rating, is also suitable for applications such as hard-disk-drive protection in personal computers.

The motion sensor incorporates an on-chip FIFO (first-in/first-out) memory block that stores up to 32 sample sets of x, y, and z data. By sampling input data, it determines whether the system should actively respond to a change in movement or acceleration, off-loading that function from the host processor, which typically consumes the dominant share of the system power budget. Allowing the host to remain in sleep mode as long as possible can dramatically decrease overall power usage, by as much as 75 percent of the budget, according to Analog Devices.
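As a rough illustration of how such a part might be brought up from a Linux host, the sketch below configures a low-power data rate, a stream-mode FIFO, and measurement mode over I2C using the smbus2 library. The register addresses follow the ADXL345 datasheet, but the bus number and address strapping are assumptions about the board:

# Minimal sketch: configuring the ADXL345 over I2C with smbus2 (assumes
# a Linux host, bus 1, and the part strapped to address 0x53).
from smbus2 import SMBus

ADDR      = 0x53   # ADXL345 I2C address with ALT ADDRESS pin tied low
BW_RATE   = 0x2C   # output data rate (and LOW_POWER bit)
POWER_CTL = 0x2D   # measurement / standby control
FIFO_CTL  = 0x38   # FIFO mode and watermark
DATAX0    = 0x32   # first of six data registers (x, y, z; LSB first)

with SMBus(1) as bus:
    bus.write_byte_data(ADDR, BW_RATE, 0x17)    # LOW_POWER + 12.5 Hz rate
    bus.write_byte_data(ADDR, FIFO_CTL, 0x80)   # stream mode, 32-deep FIFO
    bus.write_byte_data(ADDR, POWER_CTL, 0x08)  # start measuring
    raw = bus.read_i2c_block_data(ADDR, DATAX0, 6)
    x, y, z = (int.from_bytes(raw[i:i + 2], "little", signed=True)
               for i in (0, 2, 4))
    print(f"x={x} y={y} z={z}  (raw LSBs, roughly 4 mg each)")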



Figure 2: Low-power modes enable intelligent motion-based power management with threshold sensing and active acceleration measurement at extremely low power dissipation.

Summary

Through auto-calibration, the addition of intelligence, diagnostics and communications capability, and testing, engineers can expect that, going forward, sensors will have heightened reliability. Given the exploding demand for sensors and sensor systems featuring increased accuracy and reasonable cost, achieving greater reliability will remain an important aspect of not just R&D, but sensor design itself well into the future.

References

  1. “Mechanism of Semiconductor Metal Transition of Vanadium Oxide Thin Films,” Dr. Roger Narayan, professor of biomedical engineering at NC State, and NC State Ph.D. students Tsung Han Yang, Ravi Aggarwal, A. Gupta, and H. Zhou. Presented at the 2011 Materials Research Society Spring Meeting, San Francisco.
  2. “Reliable Clinical Monitoring using Wireless Sensor Networks: Experiences in a Step-down Hospital Unit,” Octav Chipara, Chenyang Lu, Thomas C. Bailey, Gruia-Catalin Roman, Department of Computer Science and Engineering, Washington University, St. Louis, MO.