
Combining the Input of Multiple Sensors to Produce Better Overall Results

Sensor fusion is the combining of sensory data from multiple sensors. In a military surveillance satellite, for example, it might mean combining input from radar, thermal/infrared detectors, and powerful digital cameras. In a consumer electronics device, the sensors might include an accelerometer, a compass, and a gyroscope. In either case, data from all of the sensors is “fused” and intelligent signal processing is employed to interpret the results and provide a more accurate, complete, and dependable picture of what is going on than any of these sensor sources could deliver individually. Fusion can also yield an entirely new perspective, such as calculating and displaying 3-D depth by combining two-dimensional images from two cameras at slightly different viewpoints.

Sensor technology news today speaks of the greater integration of miniature motion sensors, augmented reality, and 3-D motion. Application designers are demanding accurate 3-D motion data so they can usher in a new range of physical-interaction-based apps. Data from such integrated sensors as pressure sensors, cameras, GPS, and more will be merged to deliver on those demands. When we look at the future of 3-D and orientation sensing, it is not only games that will benefit from further advances, but also machine interaction, mapping, and navigation. By prototyping 3-D experiences and the physical orientation of devices within 3-D environments, engineering challenges such as stability, noise, jitter, clean data vs. complex data, and the particular strengths and weaknesses of the underlying sensors become clear.

Let us look at one example to see how using the right sensors together solved a design challenge. In a post on the “Building Windows 8” blog, Microsoft Windows Division President Steven Sinofsky discussed his company’s experience emulating and prototyping 3-D motion for future tablets running Windows 8. The company’s goal was to ensure that when a user moves the device while looking at the screen, the virtual environment appears to stay stationary.

Microsoft initially tried typical 6-axis motion sensing offered through the inclusion of a 3-D accelerometer and 3-D magnetometer. “Right away,” Sinofsky writes, “we encountered an issue: ‘noise’ in the data from the accelerometer sensor was causing jittery movement of the 3-D environment even when the device was held stationary. We were able to see this noise clearly by capturing accelerometer data and charting it.”

“Without noise, the lines on the chart would be straight, with no vertical deviation. The conventional way to remove such noise is to apply a low-pass filter to the raw data stream. When we implemented this mitigation in our prototype, the resultant motion was smooth and stable (jitter-free), but the low-pass filter introduced another problem: the app lost responsiveness and felt sluggish when responding to motion. We needed a way to compensate for this jitter without reducing responsiveness.”
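The trade-off Sinofsky describes can be sketched with a first-order (exponential) low-pass filter. This is a minimal illustration with invented parameters, not Microsoft's actual implementation: a small filter coefficient flattens the jitter of a stationary signal, but the same smoothing makes the output lag behind a real change in motion.

```python
import random

def low_pass(samples, alpha):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out = []
    y = samples[0]
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

random.seed(0)

# A stationary device: noisy readings around 1 g.
# Heavy filtering (small alpha) flattens the jitter...
noisy = [1.0 + random.uniform(-0.05, 0.05) for _ in range(100)]
smooth = low_pass(noisy, alpha=0.05)

# ...but a real step in acceleration now takes many samples to show up,
# which is the sluggishness the prototype exhibited.
step = [0.0] * 50 + [1.0] * 50
lagged = low_pass(step, alpha=0.05)
```

Raising `alpha` restores responsiveness but lets the jitter back through, which is exactly why filtering alone could not solve the problem.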

From these experiments, Microsoft discovered that this combination of sensors “could not provide the fluid and responsive experience we wanted. The accelerometer sensor was not providing clean data, and could not be used alone to determine device orientation. The magnetometer was slow to update and was susceptible to electromagnetic interference (think of a compass needle that sticks in one position occasionally).”

To solve this problem, Microsoft eventually added a third sensor in the form of a 3-D gyro for sensing rotational speed. Further experimentation demonstrated that using all three sensors (in what the company calls a 9-axis sensor fusion system) could solve the problem.

Says Sinofsky: “It turns out that an accelerometer, magnetometer, and a gyro can complement each other’s weaknesses, effectively filling in gaps in data and data responsiveness. Using a combination of these sensors it is possible to create a better, more responsive, and more fluid experience than the sensors can provide individually.”
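A classic way to exploit this complementarity is the complementary filter, sketched below. This is a hedged illustration of the general technique, not Microsoft's code: the gyro's fast but drift-prone angle estimate is blended with the accelerometer's slow but drift-free tilt reading, so the output stays both responsive and stable.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    """Blend the integrated gyro rate (responsive but drifting) with the
    accelerometer-derived angle (noisy but unbiased)."""
    return k * (angle + gyro_rate * dt) + (1.0 - k) * accel_angle

# Simulate holding the device steady at 10 degrees of tilt while the
# gyro reports a small bias (+0.5 deg/s when the true rate is zero)
# and the accelerometer reading is noisy. Values are illustrative.
angle = 0.0
for step in range(2000):
    gyro_rate = 0.5                            # pure drift, no real motion
    accel_angle = 10.0 + 0.5 * math.sin(step)  # noisy but centered on truth
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
```

The estimate settles near the true 10-degree tilt: the accelerometer term slowly corrects the gyro's drift, while short-term motion would still pass through the gyro term without the lag of a pure low-pass filter.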

Figure 1 shows two types of outputs: pass-through outputs in which the sensor data is passed directly to an application, and sensor fusion outputs in which the sensor data is synthesized into more powerful data types.

Figure 1: A 9-axis sensor fusion system.

The “magic” of sensor fusion is to mathematically combine the data from all sensors to produce more sophisticated outputs, including tilt compensation, yaw, pitch, roll, and device orientation, yielding fast, fluid, and responsive reactions to natural motions. As 9-axis solutions give way to 10-axis sensors, the sensors, programmable microcontrollers, and a wireless link will all be combined in one package.
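As a small worked example of one of those fused outputs, pitch and roll can be recovered from an accelerometer once gravity is the only acceleration acting on the device. Axis and sign conventions vary by vendor; the aerospace-style convention below is an assumption for illustration only.

```python
import math

def pitch_roll(ax, ay, az):
    """Return (pitch, roll) in degrees from a gravity vector in g units.

    Assumes the device is static, so the accelerometer measures only
    gravity; any linear acceleration would corrupt these angles (which
    is why a gyro is fused in for dynamic motion).
    """
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# A device lying flat on a table reads roughly (0, 0, 1) g,
# giving pitch and roll of zero.
flat = pitch_roll(0.0, 0.0, 1.0)
```

Tilting the device forward rotates part of the gravity vector onto the x-axis, and the pitch angle follows directly from the arctangent.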

Automotive apps

An increasingly important sensor fusion application is automotive. According to Frost & Sullivan’s Praveen Chandrasekar, “Reliability and higher system performance are the main factors driving sensor fusion. Sensor fusion is expected to gain maximum priority in driving phases, where the driver needs assistance from safety and comfort features to ensure riding comfort, stability, and safety.” To date, he sees the greatest advantages in motorway driving, night driving, and driving in adverse conditions.

General Motors recently announced that, beginning this fall, it will offer advanced active safety and driver assistance systems on certain 2013 models. This driver assistance package is the first GM system to use sensor fusion to alert drivers of road hazards and help them avoid crashes, drawing on radar, cameras, and ultrasonic sensors to provide safety features such as rear automatic braking, lane departure warning, and blind zone alert.

GM is also considering sensor fusion an important building block in the development of semi-autonomous and fully autonomous vehicles designed to maintain lane position and adapt to traffic environments. GM expects self-driving technology to enable semi and fully autonomous driving by the end of the decade.

The ultra-miniature gyro sensor XV-8000CB by Epson Toyocom Corp. (Figure 2) is an excellent example of an automotive navigation system sensor.

Figure 2: Ultra-miniature car navigation system sensors by Epson Toyocom.

This 5 V gyroscope sensor combines an extremely small SMD package (5 x 3.2 x 1.3 mm) with high stability, achieved using a crystal vibration element.

Automotive sensor data fusion applications also include pedestrian detection and pre-crash warning. Pedestrian detection uses low-cost thermopile infrared sensors that detect infrared radiation in the 8 to 14 µm band. Since a single sensor of this type cannot determine an object’s actual location, an array of sensors does the work.

In pre-crash situations, radar and laser-scanner sensor fusion is used. A pre-crash situation is one in which environmental sensing systems establish that a crash is inevitable. By fusing data from a laser scanner and multiple radar sensors, the system can make a threat assessment and a pre-crash determination, predicting the impact time and position precisely; this buys the driver time to react.
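The threat-assessment idea can be sketched as a time-to-impact calculation over fused range and closing-speed measurements. The function names, the constant-velocity model, and the reaction-time threshold below are all invented for illustration; a production system would use far richer sensor and vehicle models.

```python
def time_to_impact(range_m, closing_speed_mps):
    """Constant-velocity estimate of seconds until impact, from fused
    range (meters) and closing speed (meters/second) measurements."""
    if closing_speed_mps <= 0:
        return float("inf")   # target is static or moving away: no threat
    return range_m / closing_speed_mps

def pre_crash(range_m, closing_speed_mps, reaction_limit_s=0.6):
    """Declare a pre-crash situation when impact would come sooner than
    any plausible driver reaction (threshold is illustrative)."""
    return time_to_impact(range_m, closing_speed_mps) < reaction_limit_s
```

An obstacle 5 m ahead closing at 20 m/s leaves only 0.25 s, well inside the threshold, so the system would trigger its pre-crash response; the same obstacle at 50 m and 10 m/s leaves 5 s, ordinary driving territory.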

Automotive sensors do not necessarily involve interior or exterior safety. Take, for example, Freescale’s MMA955xL (Figure 3) intelligent motion-sensing platform. Its applications include fleet monitoring, tracking (including dead reckoning), system auto wake-up on movement detection, shock recording, and anti-theft functionality.

Figure 3: Block diagram of the MMA955xL.

The MMA955xL 3-axis accelerometer is a member of Freescale’s Xtrinsic family of intelligent sensor platforms. The device incorporates dedicated accelerometer MEMS transducers, signal conditioning, data conversion, and a 32-bit programmable microcontroller. It creates an intelligent, high-precision motion-sensing platform able to manage multiple sensor inputs and make the system-level decisions required for sophisticated applications such as gesture recognition, pedometer functionality, and eCompass tilt compensation and calibration.

Using its master I2C module, the MMA955xL can manage secondary sensors such as pressure sensors, magnetometers, or gyroscopes, allowing sensor initialization, calibration, data compensation, and computation functions to be off-loaded from the system application processor while multiple sensor inputs are easily consolidated. The MMA955xL acts as an intelligent sensing hub and highly configurable decision engine. Total system power consumption is low, as the application processor stays powered down until absolutely needed.

Sensors for handheld devices

For handheld devices, man-machine interfaces, virtual reality, and navigation, consider Bosch Sensortec’s BMA250 (Figure 4), a digital, triaxial ±2 to ±16 g acceleration sensor with an intelligent on-chip motion-triggered interrupt controller. Housed in a small, 12-pin LGA package, its footprint is only 2 x 2 mm with a height of 0.95 mm. The on-chip interrupt controller provides motion-triggered interrupt-signal generation for new data, slope detection, tap sensing, orientation recognition, flat detection, and low-g/high-g detection without the use of a microcontroller. It enables measurement of acceleration in three perpendicular axes. An ASIC converts the output of a MEMS element that works according to the differential-capacitance principle. The package and interfaces of the BMA250 match a multitude of hardware requirements. The BMA250 senses tilt, motion, and shock vibration in cell phones, handhelds, computer peripherals, man-machine interfaces, virtual reality features, and game controllers.

Figure 4: The Bosch Sensortec BMA250 acceleration sensor.

The Bosch Sensortec BMC050 is a fully compensated electronic compass or eCompass delivered in a small footprint. Used for determining precise tilt-compensated geomagnetic heading information and for providing accurate acceleration sensor data, the 6-axis digital compass comes in a 3 x 3 x 0.95 mm LGA package using FlipCore geomagnetic sensing technology and MEMS sensor technology for the accelerometer.

Measuring the earth’s geomagnetic field in three dimensions, it also measures dynamic and static acceleration, and thus the tilt of the sensor, using the 3-axis accelerometer. This tilt information is important when the geomagnetic sensor cannot be held flat (parallel to the surface of the earth). By combining these outputs of the BMC050, the eCompass software delivers precise heading data that is independent of the orientation of the device. The accelerometer integrated within this device provides the precision and functionality of the 10-bit digital BMA250 accelerometer discussed above.
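Conceptually, what such eCompass software does is rotate the magnetometer reading back into the horizontal plane using the accelerometer-derived pitch and roll, then take the heading from the de-rotated field. The sketch below follows a standard eCompass derivation as an assumption; it is not Bosch's actual code, and sign conventions differ between vendors.

```python
import math

def tilt_compensated_heading(mx, my, mz, pitch, roll):
    """Heading in degrees from a 3-axis magnetometer reading and the
    pitch/roll tilt angles (radians) supplied by the accelerometer."""
    # De-rotate the measured magnetic field into the horizontal plane.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(yh, xh)) % 360.0

# With the device held flat (pitch = roll = 0), the compensation is a
# no-op and the heading reduces to atan2 of the horizontal field.
flat_heading = tilt_compensated_heading(1.0, 0.0, 0.0, 0.0, 0.0)
```

Without the compensation terms, tilting the device would rotate part of the vertical field component into the horizontal reading and skew the heading, which is exactly the error the fused accelerometer data removes.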

Target applications for the tilt-compensated orientation information provided by the BMC050 include navigation (e.g., GPS enhancement/map rotation), location based services (LBS) or augmented reality, and, in combination with a pressure sensor, indoor navigation. Given its small footprint, low power consumption, and high level of functional integration, it is well-suited to personal mobile devices including mobile phones, notebooks, MP3 players, etc.

Essentially, sensor fusion is a case where the whole is greater than the sum of the parts. As sensors provide increased reliability, speed, accuracy, and advanced features, the concept of sensor fusion will remain in the forefront of those trends.