
Advanced Driver Assistance System for Vehicle Safety

Camera-based safety functions in the vehicle have to cope with high-contrast scenes: they must capture details in the dark without overexposing brightly lit areas. ON Semiconductor introduces HD image sensors that meet these demands, suppress the flicker of pulsed light sources, and can be integrated as a sensor platform across different vehicle classes.

Growing interest in autonomous vehicles is driving increased use of advanced driver assistance systems (ADAS). Image sensors form the basis of the camera-based systems that act as the eyes of the car. They improve safety when reversing and parking with the help of rear-view cameras and 360° surround-view systems, while front cameras provide automatic functions that help prevent collisions. Dedicated Short Range Communication (DSRC) for vehicle safety is another application of the latest technology.

New regulations from legislators and safety initiatives such as NCAP mean that cameras are moving into motor vehicles at a rapid pace, and automakers now integrate these imaging systems into vehicle platforms of almost all classes. According to the Japanese market research firm Techno Systems Research, the number of vehicle cameras produced annually worldwide more than doubled from 47 million in 2013 to 110 million in 2017, and is expected to exceed 200 million by 2024.

Camera-based safety functions in the vehicle

In addition to parking and maneuvering assistants, where four cameras around the vehicle provide the image data for a 360° view, camera systems can also monitor the blind spot, detect approaching vehicles there, and warn the driver before a lane change.

Camera monitor systems (CMS) can even replace mirrors, reproducing the conventional side-mirror view, without blind spots, on a display inside the vehicle. If legislation permits it, the exterior mirrors could be eliminated entirely, which would benefit both fuel economy and vehicle design.

Front camera systems detect what is happening in front of the vehicle and allow additional safety and comfort functions to be implemented. These systems can mitigate collisions through automatic emergency braking, bringing the vehicle to a halt as soon as an unexpected obstacle is detected. Adaptive cruise control supports the driver, especially in stressful traffic conditions such as traffic jams.

Image quality and resolution

All of these applications are based on image sensors as the core component of the camera, but they place different demands on image quality, resolution, and sensor size. For example, a reversing camera, which prepares images for the driver to view on a display, has different image quality requirements than a front camera whose output feeds the algorithms behind the ADAS automatic emergency braking function.

The required sensor resolution can also differ: an image processing application may need a minimum number of pixels across an object before its algorithm can reliably detect and classify that object. Sensor resolution is therefore an essential factor.
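How many pixels an object occupies on the sensor follows from simple pinhole-camera geometry. The sketch below illustrates the idea; the focal length, distance, and object size are illustrative values, not figures from the article:

```python
def pixels_on_object(object_size_m, distance_m, focal_length_m, pixel_pitch_m):
    """Pinhole-camera estimate of how many pixels span an object.
    image size on sensor = object size * focal length / distance."""
    image_size_m = object_size_m * focal_length_m / distance_m
    return image_size_m / pixel_pitch_m

# Hypothetical example: a 1.7 m pedestrian at 50 m, 6 mm lens,
# 3 um pixel pitch (only the pixel pitch matches the sensor discussed later):
print(round(pixels_on_object(1.7, 50.0, 6e-3, 3e-6)))  # → 68
```

If the detection algorithm needs, say, 20 pixels over a pedestrian, this kind of estimate tells the system designer at what distance a given sensor resolution stops being sufficient.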

Restrictions on recorded image sequences

In front camera systems, the image processing algorithms working on the sensor output must be trained or adapted to the different views and image representations they will encounter when recognizing pedestrians, vehicles, and objects or making decisions. Such scene data is usually recorded over extended test drives, which is extremely costly, and it must be recorded with the same imaging system that will be used in the final product. The image quality must therefore be fixed before a data set is recorded. Fine-tuning the image quality is itself time-consuming: automatic functions such as exposure control and white balance must deliver optimal images even as environmental conditions change.

A single, scalable platform of image sensors offering the same performance at comparable resolutions therefore reduces cost and effort for the manufacturer. If this platform is used in several different systems and products, it also saves investment in development and image tuning.

High dynamic range is required

The performance of the image sensor, specifically its ability to capture a wide range of scene content, known as its dynamic range, is critical when selecting an image processing system for the vehicle. Dynamic range is a measure of how much contrast the sensor can capture in a scene: how well it can render very bright areas as well as dark areas and shadows in the same view.

Automotive cameras are often confronted with this contrast problem, for example when low sun shines directly onto the image sensor while the car passes through an underpass. If the sensor's dynamic range is too small, essential details in the scene go undetected, so the driver or the image processing algorithm misses an object in view and an unsafe situation arises.

Why pulsed light sources flicker

Another aspect manufacturers have to deal with is the increasing use of light-emitting diodes (LEDs) in traffic signs as well as vehicle headlights and taillights, which adds to the challenges of image processing in vehicles. LED lighting is controlled by pulse width modulation (PWM): the LEDs are switched on and off with varying duty cycles to control light intensity and save power. The switching frequency is so high that the human eye cannot follow it and therefore does not perceive the flickering. However, because the image sensor samples the scene asynchronously to the LED switching frequency, some exposures fall in moments when the LEDs are dark and others when they are on. The result is a visible flicker in the captured image.
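The sampling mismatch can be illustrated with a short simulation. The 90 Hz PWM frequency, 25 % duty cycle, 60 fps frame rate, and 2 ms exposure below are illustrative values, not specifications from the article:

```python
def overlaps_on_phase(t0, exposure, freq, duty):
    """True if the exposure window [t0, t0 + exposure) overlaps any
    on-phase of a PWM-driven LED with period 1/freq and given duty cycle."""
    period = 1.0 / freq
    k = int(t0 // period)                  # first PWM period touching the window
    while k * period < t0 + exposure:
        on_start = k * period              # LED is on at the start of each period
        on_end = on_start + duty * period
        if on_start < t0 + exposure and on_end > t0:
            return True
        k += 1
    return False

# A 60 fps camera with 2 ms exposures filming a 90 Hz LED at 25% duty cycle:
# every other frame starts half a PWM period out of phase and misses the LED.
dark_frames = sum(
    not overlaps_on_phase(n / 60.0, 0.002, 90.0, 0.25) for n in range(300)
)
print(dark_frames)  # → 150: the LED appears dark in half the frames
```

Because 60 fps and 90 Hz stay phase-locked here, the flicker is a steady blink; with unrelated frequencies the beat pattern drifts, which is exactly the irregular flicker drivers and algorithms see.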

To make matters worse, there is no standard for the frequency of pulsed LEDs in vehicles or traffic signs. In front cameras with image processing algorithms, this flickering may cause traffic signs to be misread or missed entirely. On the displays of CMS or reversing cameras, the flicker can be both annoying and confusing to the driver.

Strong contrast due to variable exposure times

Consider an example scene containing a high-brightness light source, such as the sun on the horizon, and a pedestrian on the roadside who appears darker in the shade of a tree. To capture this high-contrast scene, an image sensor compensates for the bright parts with a short exposure time to avoid overdriving. If the same scene contains pulsed light sources, such as the LED headlights of another car, these short exposures can fall in moments when the LEDs happen to be dark, so the headlights are missing from those captures even though the less brightly lit details remain clearly visible. Combining several exposures then creates an image with a high dynamic range.

In a bright daylight scene, however, some or all of the exposures will miss the LED on-phase and cause the headlights to flicker in the output. If the exposure time is instead extended to capture the scene while the headlight LEDs are lit, the parts of the scene illuminated by the bright sun are overexposed, significantly reducing the dynamic range of the image, causing loss of detail and ultimately unacceptable image quality.
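A minimal sketch of how two such exposures might be combined per pixel, assuming a linear sensor model with illustrative numbers (the actual on-chip merge algorithm is proprietary and not described in the article):

```python
def merge_exposures(long_px, short_px, long_t_s, short_t_s, full_well):
    """Combine a long and a short exposure of the same pixel.
    Pixel values are in collected electrons. If the long exposure
    saturated, fall back to the short exposure; normalizing by
    exposure time puts both on a common radiance scale."""
    if long_px < full_well:        # long exposure still unsaturated: use it
        return long_px / long_t_s  # electrons per second (radiance proxy)
    return short_px / short_t_s    # rescaled short exposure covers highlights

# Sunlit area: the 11 ms exposure clips at the full well, the 0.5 ms one does not.
print(merge_exposures(100_000, 5_000, 0.011, 0.0005, 100_000))
# Shaded pedestrian: the long exposure is unsaturated and has the better SNR.
print(merge_exposures(30_000, 1_500, 0.011, 0.0005, 100_000))
```

The ratio of the two exposure times (22:1 here) is what stretches the combined dynamic range beyond what either exposure achieves alone.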

To achieve a high dynamic range

This problem calls for an image sensor that can achieve a high dynamic range within a single exposure, capturing the bright areas with a sufficiently long exposure while catching pulsed light sources during their on-phase, all without overexposing the scene. The ON Semiconductor Hayabusa image sensor platform offers this solution, based on a new pixel technology called super-exposure. The pixel stores far more charge, allowing exposure times five times longer before saturation occurs than conventional image sensors of the same size found in automotive applications today. With this pixel technology, Hayabusa image sensors enable imaging with a dynamic range of more than 120 dB with simultaneous LED flicker reduction.

Part of the solution lies in the construction of the Hayabusa super-exposure pixel. Thanks to a new design and manufacturing process, each back-illuminated 3 µm pixel stores more than 100,000 electrons of charge generated by the incoming light, significantly more than the roughly 20,000 electrons that conventional CMOS image sensors with the same pixel size can store. A single super-exposure can therefore capture 95 dB of dynamic range and cover most of the scene. Hayabusa sensors can add a second, very short exposure that captures the brightest parts of a scene and extends the dynamic range to over 120 dB.
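Single-exposure dynamic range is conventionally the ratio of the full-well capacity to the noise floor, expressed in decibels. The article does not state the noise floor, so the 1.8-electron value below is an assumption chosen to show how the quoted 95 dB figure can arise:

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Single-exposure dynamic range in dB: 20*log10 of the ratio
    between the largest and smallest detectable signal."""
    return 20 * math.log10(full_well_e / noise_floor_e)

# Assumed noise floor of ~1.8 electrons (not stated in the article):
print(round(dynamic_range_db(100_000, 1.8)))  # super-exposure pixel → 95
print(round(dynamic_range_db(20_000, 1.8)))   # conventional pixel → 81
```

Under this assumption the 5x larger full well buys roughly 14 dB, and the second short exposure supplies the remaining headroom toward the 120 dB system figure.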

Getting the LED flickering under control

To mitigate the effect of LED flickering while providing high-dynamic-range output, the super-exposure can be timed to span the entire period of the lowest-frequency pulsed LED in a scene. For a 90 Hz LED this corresponds to an exposure time of approximately 11 ms, even though the LED may be switched on for only one-tenth of that time or less. With a charge capacity of 100,000 electrons, the sensor can expose for this long without losing detail in the bright areas of the scene. The second, shorter exposure is then combined with the first by a proprietary built-in algorithm.
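The timing rule above is simple arithmetic: the exposure must cover one full PWM period of the slowest LED expected in the scene. A quick sketch (the 90 Hz frequency and 10% duty cycle are the figures from the text):

```python
def min_flicker_free_exposure_ms(lowest_pwm_freq_hz):
    """Exposure long enough to span one full PWM period of the
    slowest pulsed LED, so its on-phase always falls in the window."""
    return 1000.0 / lowest_pwm_freq_hz

exposure = min_flicker_free_exposure_ms(90)
print(round(exposure, 1))         # → 11.1 ms for a 90 Hz LED
print(round(exposure * 0.10, 1))  # LED on-time at a 10% duty cycle: ~1.1 ms
```

This is why the large full well matters: a conventional pixel would saturate in the bright parts of the scene long before an 11 ms exposure completes.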

This increases the dynamic range while preserving the areas of the scene that contain a pulsed LED, in case they were missed during the second exposure. The sensor can thus capture more than 120 dB of dynamic range in the scene and correctly render areas with pulsed LEDs that would flicker with a conventional sensor. This performance makes the Hayabusa platform well suited for vehicle cameras that require high dynamic range with LED flicker suppression. Since all sensors on the platform offer the same performance, manufacturers can choose between different sensors while minimizing development effort, and the data used to adapt the sensor algorithms can be reused.

An image sensor platform that offers a lot

The Hayabusa platform of automotive-qualified image sensors offers resolutions from 1 to 5 megapixels (MP), giving manufacturers scalability and the ability to use them in different applications. The platform's first sensor, the AR0233AT, is a high-dynamic-range 2.6 MP sensor with LED flicker suppression that outputs 1080p video at 60 frames per second (fps).

An image sensor platform that solves two of the most important technical challenges of image processing in vehicles also meets the requirements of car manufacturers. With a platform that delivers consistently high performance across multiple sensors, developers can reuse the thousands of hours of scene data gathered for algorithm customization to implement similar ADAS capabilities in different vehicles, choosing the image sensor that best suits each application. Both high-performance, high-resolution systems and low-cost, lower-resolution systems can thus be deployed across a variety of vehicle platforms with minimal effort. The Hayabusa image sensors can improve vehicle manufacturers' ADAS offerings, giving customers more choice. More importantly, more and more vehicles are receiving systems that increase safety for all road users.
