In this first article of a two-part series updating the “Five Senses of Sensors” articles published in TechZone in 2011 (Sound, Vision, Taste, Smell, and Touch), we will discuss advances in sensor technology that mimic and mirror human smell, taste, and hearing. The article will focus on changes in the underlying sensors and how applications have grown, particularly in light of the emergence of the Internet of Things (IoT).
The sense of smell
An electronic nose, also called an e-nose, is an artificial olfaction device with an array of chemical gas sensors, a sampling system, and a pattern-classification algorithm to recognize, identify, and compare gases, vapors, or odors. In this way the e-nose mimics the human olfactory system. These devices have been used successfully in a wide variety of applications, including food-quality detection, wastewater management, air- and water-pollution measurement, health care, and warfare. One of their strengths is that the data gathered can be interpreted without bias.
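To make the pattern-classification step concrete, the following is a minimal sketch (in Python, with entirely illustrative odor names and response values rather than real calibration data) of how an e-nose might match an unknown sample's sensor-array "fingerprint" against stored references using nearest-centroid classification:

```python
import math

# Hypothetical reference fingerprints: mean responses of a 4-channel gas
# sensor array to known samples. All names and numbers are illustrative.
REFERENCE = {
    "fresh_milk":    [0.12, 0.30, 0.05, 0.40],
    "spoiled_milk":  [0.55, 0.80, 0.10, 0.65],
    "ethanol_vapor": [0.05, 0.95, 0.08, 0.20],
}

def classify(sample):
    """Nearest-centroid classification: return the reference odor whose
    fingerprint lies closest (Euclidean distance) to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE, key=lambda name: dist(sample, REFERENCE[name]))

reading = [0.50, 0.78, 0.12, 0.60]   # raw array output for an unknown sample
print(classify(reading))             # → spoiled_milk
```

Real systems typically replace the nearest-centroid step with trained statistical models, but the structure — an array reading reduced to a fingerprint and compared against known patterns — is the same.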
In food safety the most common use is to determine the quality of tea, milk, alcoholic beverages, fruits, meats and fish, cheese, and other dairy products. Target gases include methane (CH4), ethanol, toluene, o-xylene (an aromatic hydrocarbon based on benzene), and carbon dioxide (CO2).
For medical applications, e-nose devices are being explored for the detection of the cancer biomarkers necessary for early diagnosis and fast treatment. For example, researchers at the University of Tampere in Finland have used a device that conducts molecular analysis of the air above urine samples, testing it for the volatile organic compounds associated with prostate cancer. In a study they published last year the scientists claimed the method had a detection rate of 78 percent.
The use of nanomaterials in e-nose applications is gaining ground, bringing with it the capability to create sensors with ultrahigh sensitivity and fast response (due in part to the smaller structure). The smaller sensor size also promotes integration into a larger number of devices. An attractive class of materials for functional nanodevices is metal-oxide semiconductors. They offer simple operation, ease of fabrication, potential compatibility with microelectronic processing, low cost, and low power consumption.
There are still many challenges to be overcome, including fully understanding the nanomaterial growth mechanism to assure sufficient quality. Aligning nanomaterials between predefined electrodes and forming the proper contacts that directly influence device performance is also no easy feat.
Among recent sensor breakthroughs are devices that could give smartphones a sense of smell. Developed by Honeywell's ACS Labs, one such device utilizes a new type of MEMS vacuum pump, hundreds of times smaller than previously available. In human olfaction, the lungs bring odor to the olfactory epithelium layer inside the nose; the e-nose uses a pump instead. The Honeywell device promises to initially provide an “add-on sense of smell” for spectrometers, but it may also end up in smartphones that can sense everything from toxic chemicals to pollen to general air quality.
There are many classes of e-noses, including conductive-polymer, surface-acoustic-wave, calorimetric, and polymer-composite designs. Several sensing technologies are used in these applications, including optical-sensor systems, mass and ion-mobility spectrometry, gas chromatography, infrared spectrometry, and chemical sensors. An example of a gas sensor used for CO2 detection is Amphenol’s Telaire 6613 CO2 Module (Figure 1). The small, compact module is designed to integrate into existing controls and equipment to meet the volume, cost, and delivery expectations of OEMs.
Figure 1: The Telaire 6613 CO2 module.
All units are factory calibrated to measure concentration levels up to 2000 or 5000 ppm. Dual-channel sensors are also available for higher concentrations. The affordable, reliable, and flexible sensor platform is designed to interact with other MPU devices.
The sense of taste
The electronic tongue (e-tongue) uses an array of liquid sensors that mimic the human sense of taste, without the intrusion of other senses such as vision and olfaction that often interfere with our taste perception. Within a few years, researchers anticipate, a machine that experiences flavor will be able to determine the precise chemical structure of food and why people like it. Digital “taste buds” will also help us eat smarter and healthier.
The e-tongue measures and compares tastes using sensors to receive information from target chemicals and then sends it to a pattern-recognition system. The result is the detection of taste based on the human palate. There are five basic types of tastes: sweet, bitter, salty, sour, and umami (a Japanese word that can be translated as “deliciousness” or “pleasant, savory taste”). To mimic the human tongue, sensors are used in multiplexed arrays containing multiple taste receptors.
E-tongues often are used in liquid environments to classify the contents of a liquid, identify the liquid itself, or discriminate between samples. Most e-tongues are based on either potentiometric or amperometric sensors. The taste sensors have artificial polyvinyl chloride (PVC)/lipid membranes that interact with a target solution such as a caffeinated beverage. The interaction changes the membrane potential of the lipid membrane, and that change is the sensor output. Measuring the potential change therefore amounts to measuring the “taste” contributed by the chemical substances in the solution. With an array, multiple sensors provide this output and together form a unique fingerprint.
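The ideal potentiometric response of such an ion-selective membrane follows the Nernst equation, E = (RT/zF)·ln(c_sample/c_ref). A short sketch of that arithmetic, with illustrative concentrations (the numbers are examples, not measurements from any particular sensor):

```python
import math

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol

def nernst_potential(c_sample, c_ref, z=1, temp_k=298.15):
    """Ideal potentiometric response (volts) of an ion-selective membrane:
    E = (R*T / (z*F)) * ln(c_sample / c_ref).  A tenfold concentration
    change for a monovalent ion gives roughly 59 mV at room temperature."""
    return (R * temp_k) / (z * F) * math.log(c_sample / c_ref)

# Illustrative case: a salty sample ten times more concentrated in Na+
# than the reference solution.
mv = nernst_potential(0.10, 0.01) * 1000.0
print(f"{mv:.1f} mV")   # → 59.2 mV
```

Real lipid-membrane taste sensors deviate from this ideal slope, which is one reason the multi-sensor fingerprint and pattern recognition are needed rather than a single electrode reading.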
While e-tongue technology has advanced over the past several years, taste accuracy has become the priority. For example, in 2014 researchers managed to distinguish between different varieties of beer using an electronic tongue with an accuracy of approximately 82 percent, while other e-tongue prototypes have demonstrated the ability to successfully distinguish between wines.
The sense of hearing
Hearing systems are increasingly being trained by “listening” to sounds, detecting patterns and building models to decompose sounds. One of the most common applications for sensors in this segment is in hearing aids. Digital advances have made today’s hearing aids smaller, smarter and, fortunately, easier to use.
The most advanced hearing aids are now interacting with other devices, such as smartphones and digital music players, to deliver sounds directly and wirelessly to the listener. Recent improvements are based on better microprocessors and noise-reduction software so that the hearing aid can be selective about the types of sound it amplifies, muffles, or suppresses.
Much of the focus of current research is on directionality and speech enhancement. Sound systems can employ digital-signal processing to automatically shift between two different types of microphones in order to pick up either a single speaker’s voice or sound coming from all around. Digital-speech enhancement can now increase the intensity and audibility of some segments of human speech.
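The directional cue these systems exploit comes down to simple geometry: the arrival-time difference between two microphones maps to a source angle via the speed of sound. A minimal sketch, with illustrative (hearing-aid-scale) spacing and delay values:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def direction_of_arrival(delay_s, mic_spacing_m):
    """Estimate the source angle (degrees from broadside) from the
    arrival-time difference between two microphones:
    angle = asin(c * delay / spacing)."""
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))   # clamp numerical noise
    return math.degrees(math.asin(ratio))

# Illustrative numbers: microphones 10 mm apart, 14.6 microsecond delay.
print(f"{direction_of_arrival(14.6e-6, 0.010):.1f} degrees")   # ~30 degrees
```

At hearing-aid dimensions the usable delays are only tens of microseconds, which is why accurate directionality demands the signal-processing precision described above.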
Research projects are underway to reduce the size and cost of hearing aids, improve their directional capabilities, and identify and amplify desired sounds such as a human voice while muting background noise. Researchers are also working hard to extend battery life through the use of tiny microphones mounted on MEMS chips. These chips enable multiple microphones to be placed inside a device small enough to fit in a user’s ear without rapidly draining the batteries.
For example, while flies ordinarily have no sense of hearing at all, one species, the parasitic fly Ormia ochracea, can determine the direction of a sound to within two degrees, which seems impossible given the fly’s tiny size. Cornell scientists are studying the insect’s auditory apparatus, naturally small enough to fit inside a hearing aid, as the basis of an effort to develop a man-made directional-listening system.
Sensors that detect sound or “hear” are essentially microphones with sophisticated signal-processing capability. In robotics, sound sensors are used in a myriad of applications. One sensor particularly well-suited for sound-based applications is the Parallax Sound Impact Sensor (manufacturer’s part number 29132, Figure 2), which provides noise control for a project and responds to loud noises such as a clap of the hands.
Through its on-board microphone, the sensor detects changes in decibel level and responds by sending a high pulse through its signal pin. This pulse can be read by an I/O pin of any Parallax microcontroller. The detection range extends up to 3 meters, and an on-board potentiometer provides an adjustable range of detection.
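On the host side, handling the sensor’s output often amounts to counting debounced rising edges on that signal pin. The following is a minimal, hardware-free sketch (the sample stream and debounce window are illustrative; real firmware would poll a GPIO pin in a timed loop):

```python
def count_claps(samples, min_gap=5):
    """Count rising edges (low-to-high transitions) in a stream of digital
    reads from the sensor's signal pin, ignoring edges closer together than
    `min_gap` samples -- a simple software debounce.
    `samples` is a sequence of 0/1 values as an MCU would read them."""
    claps, last_edge, prev = 0, -min_gap, 0
    for i, s in enumerate(samples):
        if s and not prev and (i - last_edge) >= min_gap:
            claps += 1
            last_edge = i
        prev = s
    return claps

# Simulated pin reads: two distinct loud noises, with a bounce after the
# first that the debounce window suppresses.
pin_reads = [0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]
print(count_claps(pin_reads))   # → 2
```

The same edge-counting pattern translates directly to any microcontroller language; only the pin-read call changes.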
Figure 2: The Parallax Sound Impact sensor.
Targeting speech recognition, the STMicroelectronics MP34DB01 MEMS audio sensor digital microphone (timing waveforms are presented in Figure 3) is an ultra-compact, low-power, omnidirectional, digital MEMS microphone built with a capacitive sensing element and an IC interface with stereo-operation capability.
Figure 3: Timing waveforms of the MP34DB01.
The IC interface is manufactured using a CMOS process and features a single supply voltage, low power consumption, and omnidirectional sensitivity. The MP34DB01 has an acoustic overload point of 120 dB SPL with a claimed “best on the market” 62.6 dB signal-to-noise ratio and -26 dBFS sensitivity.
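These figures are related by simple arithmetic: MEMS-microphone SNR is specified against the standard 94 dB SPL (1 Pa, 1 kHz) reference, so the self-noise and digital-domain noise floor follow directly from the datasheet numbers. A short sketch of that calculation:

```python
def equivalent_input_noise(snr_db, ref_spl=94.0):
    """Equivalent input noise in dB SPL: the SNR spec is referenced to a
    standard 94 dB SPL (1 Pa) input, so the microphone's self-noise sits
    snr_db below that reference."""
    return ref_spl - snr_db

def noise_floor_dbfs(sensitivity_dbfs, snr_db):
    """Digital-domain noise floor: the output level at the 94 dB SPL
    reference (the sensitivity) minus the SNR."""
    return sensitivity_dbfs - snr_db

# Using the MP34DB01's published figures (62.6 dB SNR, -26 dBFS sensitivity):
print(f"{equivalent_input_noise(62.6):.1f} dB SPL self-noise")   # 31.4
print(f"{noise_floor_dbfs(-26.0, 62.6):.1f} dBFS noise floor")   # -88.6
```

Together with the 120 dB SPL acoustic overload point, these numbers bound the microphone’s usable dynamic range.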
The MP34DB01 is available in a bottom-port, SMD-compliant, EMI-shielded package and is guaranteed by the supplier to operate over an extended temperature range from -40°C to +85°C.
In summary, there’s no doubt that going forward we will see more developments in smell-, taste-, and hearing-based sensor technology used in a variety of applications. In Part 2 of this series we will examine sensors involved in touch and vision.
For more information about the parts discussed in this article, use the links provided to access product pages on the Hotenda website.