"The Five Senses of Sensors" is a five-part series discussing the availability and advances in sensors that mirror and mimic human senses. This second segment provides an overview of sensors used in applications requiring vision.
Whether the task is to move a robotic arm, read a barcode, inspect a part, prevent a vehicle from drifting into the next lane of traffic, or allow a Mars Rover to "see," vision sensors are at the heart of the application. Advances in image sensors are familiar to most consumers through a myriad of toy and cell phone applications. Beyond consumer devices, vision sensors enable major advancements across the automotive, industrial/machine vision, medical, computer, and military sectors, as well as in vision sensor networks.
According to iSuppli market research, the U.S. Department of Transportation's mandate for backup cameras in new vehicles will roughly quadruple sales of new vehicles with rear-view park-assist cameras over the next seven years. By the fall of 2014, all new cars weighing less than 10,000 pounds sold in the U.S. will be required to have backup cameras. The goal is to reduce the number of back-over collisions caused by blind spots behind cars. iSuppli forecasts that through 2017, 71.2 million new cars in the U.S. will be sold with rear-view cameras for park assistance, compared with a previous forecast of only 19.1 million for the same period.
Even without such mandates, automotive vision system sales are growing, driven by the falling cost and improving capabilities of two basic technologies: digital signal processors (DSPs) and digital still cameras. Vision sensors allow for two separate but related activities: monitoring the environment to raise the level of safety, and monitoring drivers to ensure their ability to control the vehicle.
Monitoring the environment involves awareness of the vehicle's surroundings, gathering information, and playing it back to the driver in a form that can be used for intelligent decision making. For example, vision sensors are used for lane drift correction, barrier and object detection, assistance in parallel parking, monitoring blind spots, maintaining an appropriately safe distance between moving vehicles, and back-up observation and warning capabilities.
Inside the car, vision-based assistance systems monitor the driver for drowsiness, inattention, or intoxication. Smart airbag deployment based on the height and weight of passengers also depends on these sensors.
Today, a variety of sensor types (radar, infrared, and digital cameras) collect the same information from different perspectives. Data from all the sensors is fused (hence the term sensor fusion) and interpreted, providing a more complete picture of what is going on.
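The fusion step can be sketched in a few lines. The following Python example combines distance readings with inverse-variance weighting, a common textbook approach; the sensor names, readings, and variances are illustrative assumptions, not values from any real system.

```python
# Minimal sketch of sensor fusion: combining distance estimates from
# radar, infrared, and camera sensors using inverse-variance weighting.
# All figures below are made up for illustration.

def fuse_estimates(readings):
    """Fuse (value, variance) pairs into a single estimate.

    Each sensor's contribution is weighted by the inverse of its
    variance, so more reliable sensors count for more.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused = sum(w * value for w, (value, _) in zip(weights, readings)) / total
    fused_variance = 1.0 / total
    return fused, fused_variance

# Distance to an obstacle (meters) as seen by three sensors.
readings = [
    (12.1, 0.04),  # radar: accurate at range
    (12.6, 0.25),  # infrared: noisier
    (11.9, 0.09),  # camera-based depth estimate
]

distance, variance = fuse_estimates(readings)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```

The fused variance is smaller than any single sensor's variance, which is the formal sense in which fusion gives "a more complete picture" than one sensor alone.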
When evaluating vision sensors for automotive use, consider the following:
- Dynamic range
- Frame rate
- Aspect ratio
- Stereo vs. mono vision
- Camera placement (internal and external)
- Intelligent cameras
- Data processing
- Pattern recognition requirements
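As a rough illustration, several of the criteria above can be captured in a simple spec record and checked programmatically during sensor selection. Everything here (the sensor names, specification figures, and thresholds) is hypothetical:

```python
# Hypothetical sketch: representing some of the evaluation criteria
# above as a spec record and filtering candidates against requirements.

from dataclasses import dataclass

@dataclass
class VisionSensorSpec:
    name: str
    dynamic_range_db: float
    frame_rate_fps: int
    aspect_ratio: str
    stereo: bool
    onboard_processing: bool  # an "intelligent camera" does its own processing

def meets_requirements(spec, min_dynamic_range_db, min_fps, need_stereo):
    return (spec.dynamic_range_db >= min_dynamic_range_db
            and spec.frame_rate_fps >= min_fps
            and (spec.stereo or not need_stereo))

candidates = [
    VisionSensorSpec("cam-A", 120.0, 60, "16:9", stereo=False, onboard_processing=True),
    VisionSensorSpec("cam-B", 100.0, 30, "4:3", stereo=True, onboard_processing=False),
]

# Example: a lane-monitoring role needing wide dynamic range and a high
# frame rate, where mono vision is acceptable.
suitable = [c.name for c in candidates
            if meets_requirements(c, 110.0, 45, need_stereo=False)]
print(suitable)  # ['cam-A']
```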
According to a recent report by the Automated Imaging Association (AIA), sales of machine vision components and systems in North America increased 54 percent in 2010 to nearly $1.8 billion. All major machine vision supplier markets were included in the report's results, including cameras, lighting, optics, imaging boards, application-specific machine vision systems, software and smart cameras.
Reportlinker.com also released a report about machine vision systems titled "Global Machine Vision and Vision Guided Robotics Market (2010-2015)." In its analysis, the company states, "Recent advancements in the machine vision technology, such as smart cameras and vision guided robotics, has increased the scope of the machine vision market for a wider application in the industrial and non-industrial sectors. The Asia Pacific market is expected to grow faster. The European market is expected to grow better than the U.S. mainly due to the impact of the recent economic crisis in the U.S."
A third group, research firm Frost & Sullivan, observes that rapid advances in 3-D vision sensors and image-processing algorithms are driving adoption of 3-D vision systems. These systems are used to solve challenging and complex vision tasks on the manufacturing lines of various industrial sectors, for example in robot applications that enhance robot flexibility and intelligence.
Machine vision, the use of computer vision to analyze industrial and manufacturing operations, is also rapidly expanding in speed and search area. Typical uses include automated inspection and optical character recognition, which depend on a camera module image, high frame rates, and the ability to cover a large search area.
One example is the Omron International Automation compact FQ-series of vision sensors, an inspection solution for packaging applications that includes high-speed, true color processing capabilities with high-power LED lighting and lenses. Since the camera and image processor are in the sensing head, the FQ-series of vision sensors can be used with either a PC or a TouchFinder terminal for setup and monitoring.
Overall, machine vision supports manufacturing quality control by providing optical sorting and position/orientation information for accurate robotic arm movement. Ultimately, machine vision helps dramatically increase yield.
The variety of available systems ranges from complete and complex vision systems to compact touch screen image sensors without a PC or any added electronics. Touch screen image sensors can combine three sensors into a single housing:
- A match sensor to compare the target object to a stored reference point
- An area sensor that identifies target features within a region of interest
- A second area sensor that examines an area for specific features but can adjust for motion
Once the sensor type is selected and a sample image captured, the sensor's touch screen LCD is used to configure it: setting inspection parameters and designating the minimum and maximum pass count.
Captured images and data can be downloaded from the sensor to a USB drive through the sensor's USB port.
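The minimum/maximum pass-count idea works roughly as follows. This is a hedged sketch with an assumed match threshold, not Omron's actual implementation:

```python
# Sketch of pass-count inspection: an inspection passes when the number
# of matched features falls within a configured minimum/maximum window.
# The 0.8 match threshold and the scores below are assumptions.

def inspect(feature_matches, min_pass_count, max_pass_count):
    """Return True when the match count is within the accepted window."""
    count = sum(1 for score in feature_matches if score >= 0.8)
    return min_pass_count <= count <= max_pass_count

# Similarity scores for features found in one captured image.
scores = [0.95, 0.88, 0.40, 0.91]
result = inspect(scores, min_pass_count=2, max_pass_count=5)
print(result)  # True: three features match, within the 2-5 window
```

An upper bound matters as well as a lower one: too many matches can indicate a duplicate label or a misaligned part, not just a good one.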
OmniVision is at the forefront of medical imaging, offering a wide range of products including small, disposable diagnostic cameras, endoscopic tools, prosthetic eyes, and ultra-small flexible scopes for gastro-intestinal applications. The company's implementations of vision in medical devices enable significant efficiency improvements and reductions in crosstalk in CMOS imagers, and its CameraCube technology enables cost-effective complete, high-quality miniature cameras.
For example, the OVM7690 (Figure 1) is based on CameraCube technology, and combines the functionality of a single-chip image sensor, embedded processor and wafer-level optics in a small 2.5 x 2.9 x 2.5-mm package. Used in surveillance, medical imaging, and mobile phones, it is an all-in-one camera solution that operates at up to 30 frames per second in VGA resolution. Users can control the image quality, formatting, and output data transfer.
Figure 1: OVM7690 block diagram. (Source: OmniVision)
OmniVision's backside illumination (BSI) sensors collect light through the back of the chip, so the metal and dielectric layers of the circuitry do not obstruct incoming light. BSI offers a direct path for light to reach the sensor, increasing sensitivity and efficiency while producing less pixel-to-pixel crosstalk than standard front-side illumination (FSI) sensors of equal pixel size.
Computer vision is the sensor system most familiar to consumers. Light-sensitive cameras, range sensors, radar, and more are included in this category. The pixel values produced by each system correspond to light intensity, depth, absorption or reflectance of electromagnetic waves, or nuclear magnetic resonance.
Computer vision is often seen as part of artificial intelligence or computer science, and its vision systems rely on image sensors to detect electromagnetic radiation in the form of visible or infrared light.
Considerations surrounding computer vision involve noise reduction, contrast enhancement for adequate detection, using the appropriate scale within the space involved, object size, viewpoint, recognition categories, and such features as corners, lines, edges, and ridges that must be "seen."
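One of the low-level features mentioned above, edges, can be illustrated with a minimal Sobel convolution written in pure Python. This is a textbook sketch on a tiny synthetic image, not any particular product's algorithm:

```python
# Illustrative sketch of a low-level "seeing" step: detecting vertical
# edges by computing horizontal-gradient magnitudes with a Sobel kernel.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x(image):
    """Return horizontal-gradient magnitudes for interior pixels."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = abs(g)
    return out

# A dark-to-bright vertical boundary: the gradient responds strongly
# at the boundary columns and is zero in flat regions.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
grad = sobel_x(image)
print(grad[1])
```

Corner and ridge detectors build on the same idea, combining gradients in both directions rather than one.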
The detection of soldiers or vehicles in enemy territory and missile guidance are examples of military uses of vision sensors. Autonomous or unmanned vehicles use computer vision for navigation, creating maps of the environment and detecting obstacles. The same kind of vision appears in space exploration, as on NASA's Mars Rovers Spirit and Opportunity.
Night vision, as used by the military for detection and security, is based on charge-coupled device (CCD) technology. CCD-based night vision cameras use either infrared light or thermal imaging. With infrared light, LEDs illuminate the target environment up to 100 ft away. These systems provide a gray-scale image (an absence of RGB) and are affordable for both military and consumer use. Thermal imaging detects the heat radiating off an object and converts the data to an image; no external illumination source is needed. Thermal imaging systems are more complex and costly than infrared systems.
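The thermal-to-image conversion can be sketched as a simple normalization that maps raw temperature readings to 8-bit gray levels, with the hottest pixel rendered white. The temperature values below are made up for illustration:

```python
# Sketch of thermal imaging's display step: normalize raw temperature
# readings into 8-bit gray levels (hottest = 255, coolest = 0).

def to_grayscale(temps_c):
    lo, hi = min(temps_c), max(temps_c)
    span = hi - lo or 1.0  # avoid division by zero on a flat scene
    return [round(255 * (t - lo) / span) for t in temps_c]

# One scan line of temperatures (deg C): a warm object on a cool background.
row = [20.0, 21.0, 36.5, 37.0, 20.5]
gray = to_grayscale(row)
print(gray)
```

Real thermal cameras add calibration and false-color palettes on top, but the core step is the same mapping from radiated heat to pixel intensity.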
Although these night-vision tools typically use CCD, CMOS can also be used.
CMOS vs. CCD
In the past, CCD image sensors were considered superior to CMOS sensors. CCD technology offered higher dynamic range and resolution, and its electronic shutter removed the need for a mechanical one. CCDs, however, have slower frame rates, dissipate more power, and are more expensive to build and integrate than CMOS sensors.
CMOS sensors, in comparison, have higher speed performance, lower power requirements, and better integration capabilities. Advances in CMOS technology are closing the quality gap between the two alternatives, and the feature gap as well. For example, shutters that enable distortion-free image capture of moving objects, an inherent characteristic of CCDs, are now available with CMOS.
While CCDs still provide the highest image quality, CMOS sensors are catching up in industrial, medical, and even some military applications.
Vision sensor networks
Vision sensor networks, in which several sensors are connected together, are used when an application requires more sophisticated data than a single sensor can provide. These networks support a growing number of vision-based applications with unique performance, complexity, and sometimes significant quality-of-service challenges. Within these systems, low-power camera nodes provide information from a monitored site, performing distributed and collaborative processing of their collected data.
Using multiple cameras in a network enhances reliability and provides different viewpoints. The large amount of image data these cameras produce, within already constrained network resources, requires new methods of data processing, communication, and sensor management. In other words, vision sensor networks remain a moving target.