How to choose the sensing module of an advanced autonomous driving system

2022-07-25

As we all know, the evolution from a distributed architecture to a centralized domain-controller architecture has become an irreversible trend for next-generation autonomous driving systems. In such an architecture, the domain controller's powerful computing capability and rich software interfaces allow the core function modules to be concentrated in the domain controller, greatly improving system-level functional integration and, in turn, raising the requirements on perception hardware.

However, the emergence of the domain controller does not mean that the underlying ECUs disappear en masse. Many ECU functions are weakened (software and processing functions are delegated upward, while execution is preserved): most sensors can transmit data directly to the domain controller, or pass it on after preliminary processing, so that the heavy computation and even most of the control functions are completed in the domain controller. Many of the original ECUs then only need to execute the domain controller's commands. In other words, the peripheral parts focus on their own basic functions, while the central domain controller focuses on realizing system-level functions. In addition, standardized data-interaction interfaces will turn these parts into standard components, reducing their development and manufacturing cost.

The camera, in particular, acts as the eye of the autopilot. At the L2 stage, the visual perception unit of an intelligent driving system is usually packaged as a camera assembly that contains not only the camera module itself but also the software that processes the perceived environment information, such as the ISP, the encoder, and AI algorithms like neural-network and deep-learning units. In next-generation advanced autonomous driving systems, however, this perception processing, previously handled inside the camera assembly, will be performed centrally by the AI chip on the domain controller. The question, then, is: under this architecture, what changes and what new requirements arise for the camera module itself?

This paper introduces the basics of vehicle cameras, including the composition of the camera module, camera types, camera selection methods, camera installation methods, and common problems during installation, providing a reference for the design of related autonomous-driving components. As the saying goes, data and features determine the upper limit of machine learning; models and algorithms only approximate that limit.

By layout area and function, the vehicle's cameras can be divided into several large modules: a forward perception module, a cabin monitoring module, an imaging perception module, and an external imaging module. For high-level autonomous driving systems, the forward visual perception system is taken as the example below.
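As a loose illustration of this division of labor, the following Python sketch models the data flow of the centralized architecture described above: the camera module only forwards raw frames, the domain controller runs perception and planning, and a peripheral ECU merely executes commands. All class names, method names and numbers here are hypothetical, chosen only to mirror the architecture in the text, not any real system's API.

```python
import numpy as np

class CameraModule:
    """Peripheral sensor: captures and forwards raw frames only."""
    def capture_raw_frame(self) -> np.ndarray:
        # Placeholder for a real sensor readout (e.g. a raw Bayer frame).
        return np.zeros((1080, 1920), dtype=np.uint16)

class SteeringECU:
    """Peripheral actuator: execution is preserved, computation is delegated."""
    def execute(self, steering_angle_deg: float) -> None:
        print(f"Applying steering angle: {steering_angle_deg:.2f} deg")

class DomainController:
    """Central unit: perception, fusion and planning run here."""
    def perceive(self, raw_frame: np.ndarray) -> dict:
        # Stand-in for the AI chip's deep-learning perception stack.
        return {"lane_offset_m": 0.12}

    def plan(self, perception: dict) -> float:
        # Trivial proportional correction, purely for illustration.
        return -10.0 * perception["lane_offset_m"]

camera = CameraModule()
ecu = SteeringECU()
controller = DomainController()

frame = camera.capture_raw_frame()                  # sensor -> domain controller
command = controller.plan(controller.perceive(frame))
ecu.execute(command)                                # domain controller -> ECU
```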
1. Camera classification

Camera types can be broadly divided into: monocular perception modules, which are stronger at detecting small distant targets and structural features; binocular perception modules, which are more sensitive to depth information such as distance and speed (given focal length f in pixels, stereo baseline B and measured disparity d, depth follows directly as Z = f·B/d); and night-vision camera modules, which are more effective at capturing the infrared signature of targets when driving at night. (The original article includes a figure comparing the detection performance of these camera types.)

2. Camera module structure

Structurally, a camera mainly comprises the lens, mount, infrared filter, image sensor, PCB and FPC, of which the image sensor and lens have the greatest influence on imaging quality. The image sensor is the device that converts the optical signal into an electrical signal and is the most important part of the camera; it comes in two types, CCD and CMOS. A CMOS image-sensor chip uses CMOS technology to integrate the image acquisition unit and the signal-processing unit on one chip: the photosensitive element array, image signal amplifier, signal readout circuit, analog-to-digital conversion circuit, image signal processor and controller all sit on a single die. Although its imaging quality is not as good as CCD, CMOS has quickly won over major manufacturers thanks to its low power consumption (only about 1/10 of a CCD chip), small size, light weight, high integration and low price; mainstream vehicle cameras currently use CMOS image sensors. By detection function, cameras can be divided into front-view, side-view, rear-view, cabin cameras and so on.

3. Camera detection principle

In fact, before the real image-processing algorithms run, the image entering the camera module has already undergone preliminary image signal processing (ISP) in the semiconductor chip at the module's processing end. This includes raw-data processing (such as bad-pixel correction and black-level correction) and color processing (white balance, denoising, demosaicing, gamma, etc.). The common "3A" algorithms of digital imaging are AF (auto focus), AE (auto exposure) and AWB (auto white balance); together they improve image contrast, correct over- or under-exposure of the main subject, and compensate for color shifts under different lighting, thereby guaranteeing accurate color reproduction and good day-and-night imaging. Another technique used is wide dynamic range (HDR), which lets the camera preserve image detail under very strong contrast.

It should be noted that if the camera module's ISP algorithms are sophisticated enough, the basic ISP processing need not be repeated before the output reaches the back-end AI chip for deep learning, so the AI chip can concentrate its computing power and resources on back-end deep learning and neural networks. This both relieves the AI chip of processing load and, to a large extent, achieves better raw-data processing performance. Of course, building the ISP into the camera module raises the bar for the semiconductor processing chip it carries, which is one reason the camera-module component is added. In practice, the camera module and the AI chip often share a dual-ISP processing flow.
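To make the ISP stages above concrete, here is a minimal, heavily simplified raw-to-RGB sketch in Python/NumPy: black-level correction, gray-world auto white balance (one common AWB heuristic, used here as an assumption rather than what any particular module ships), and gamma correction. Real automotive ISPs also perform bad-pixel correction, demosaicing, denoising and HDR merging, which are omitted for brevity.

```python
import numpy as np

def simple_isp(raw: np.ndarray, black_level: int = 64, gamma: float = 2.2) -> np.ndarray:
    """Toy raw-to-RGB ISP: black level -> gray-world AWB -> gamma.

    `raw` is assumed to be an already-demosaiced HxWx3 array of 12-bit
    values; a real pipeline would start from a Bayer mosaic."""
    # 1. Black-level correction: subtract the sensor's dark offset.
    img = np.clip(raw.astype(np.float32) - black_level, 0, None)
    img /= (4095 - black_level)              # normalize the 12-bit range

    # 2. Gray-world AWB: scale channels so their means match green's mean.
    means = img.reshape(-1, 3).mean(axis=0) + 1e-6
    gains = means[1] / means                 # reference channel = green
    img = np.clip(img * gains, 0.0, 1.0)

    # 3. Gamma: compress linear light for display/backend consumption.
    img = img ** (1.0 / gamma)
    return (img * 255).astype(np.uint8)

# Example: a synthetic frame with a bluish cast that AWB should neutralize.
rng = np.random.default_rng(0)
raw = rng.uniform(0.2, 0.8, (4, 4, 3)) * np.array([0.7, 1.0, 1.3]) * 4095
print(simple_isp(raw.astype(np.uint16)))
```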
Camera imaging performance greatly affects the subsequent AI chip's recognition of the environment; in particular, its deep-learning algorithms and computing power depend heavily on the raw data supplied by the camera module. Image size, resolution, field of view, pixel size, dynamic range and frame rate are the main influencing factors in that raw data. Overall, the key indicators of a camera module include the following:

1. Imaging unit. Taking the CMOS imaging element used in mainstream vehicle cameras as an example, the image sensor integrates the photosensitive array, image signal amplifier, signal readout circuit, analog-to-digital conversion circuit, image signal processor and controller on one chip. In a CMOS chip, each pixel has its own amplifier for charge-to-voltage conversion; reading out the whole image therefore requires a wide-bandwidth output amplifier, while each per-pixel amplifier needs only low bandwidth, which greatly reduces chip power consumption. Both objective and subjective factors affect the imaging result. Objective factors concern the actual light reflected by dynamic targets in the environment: poorly lit tunnels or rainy and foggy weather, for example, objectively degrade image quality, and countering them often requires active illumination (as in DMS and ToF cameras) or color compensation. Subjective factors concern the camera module itself, such as signal-to-noise ratio, resolution, wide dynamic range, gray scale and color reproduction.

2. Integration. Given the range of raw scenes the module must handle, the signal amplifier, signal readout circuit, A/D conversion circuit, image signal processor and controller need to be integrated on one chip; chip-level camera functionality is realized in the front-end module.

3. Acquisition speed. The camera module must read out the photosensitive elements one by one, with multiple charge-voltage converters and row/column switch control; readout speeds are basically above 500 frames/s. For high-resolution camera modules, down-sampled sub-window output is often needed, and higher speeds can be reached when only the sub-window image is output. For example, Horizon's J3 chip is widely used, but an 8-megapixel image must be down-sampled before being fed to the serializer to match the J3's processing capacity (see the sketch after this list).

4. Noise processing. At present camera suppliers lean toward CMOS camera modules, which, lacking PN-junction or silicon-dioxide isolation layers, often cannot achieve effective noise isolation; components and circuits sit very close together and interference is severe. This places higher demands on the noise-suppression techniques of the front-end module.

5. Power consumption. The original camera assembly usually sits on the front windshield, and its integrated AI chip runs deep-learning algorithms heavily, so its power consumption is high. In the next-generation autonomous driving architecture, the camera serves only as a raw-image acquisition module, so the computing unit's contribution to its power budget can be removed.
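As a rough illustration of the sub-window/down-sampling step mentioned in item 3 above (the 8-megapixel-to-J3 case), the sketch below performs simple 2x2 average binning on a raw frame before it would be handed to the serializer. The resolution figures come from the text; the binning method itself is only one of several possible down-sampling choices, assumed here for illustration.

```python
import numpy as np

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Down-sample a 2-D raw frame by averaging non-overlapping 2x2 blocks."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)).astype(frame.dtype)

# An 8 MP frame (3840x2160 is ~8.3 MP) binned once yields a ~2 MP frame,
# a load the text suggests is more tractable for a chip like Horizon's J3.
frame_8mp = np.zeros((2160, 3840), dtype=np.uint16)
frame_2mp = bin_2x2(frame_8mp)
print(frame_2mp.shape)   # (1080, 1920)
```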
For an autonomous driving control system designed around a centralized domain controller, the data arriving over the camera interface will no longer be CAN data that can be used directly by control algorithms, but raw image data. Common camera raw-data interfaces include FPD-Link III, MIPI and DVP. FPD-Link was the first application of the LVDS specification, and because it was the first successful use of LVDS, many display engineers use the term LVDS interchangeably with FPD-Link. LVDS itself is a physical-layer signaling standard for the transmission medium and remains the main way camera images are transmitted, while the MIPI CSI-2 (Camera Serial Interface 2) protocol, a sub-protocol of the MIPI Alliance, is widely used and offers high speed with low power consumption.

For the autonomous driving system, every new module needs to be adapted to its environment. The adaptation process runs as follows: with the CAS data as the carrier, a simulation is set up in CATIA, CAD and similar software to determine the mounting position, and the corresponding location information is released to the camera-module supplier; the supplier performs module tuning, ISP tuning and hardware modification, followed by subjective and objective testing. The tuning results are judged against industry or enterprise standards, and the optimized module can then be tested further. Whether the camera-module selection is reasonable ultimately depends on the module's final conformity test report.

To summarize: the factors that influence camera-module recognition include module performance, mounting position, and detection conditions such as lighting. Image tuning can address these problems, and later-stage effects can also be fed back, under the guidance of standardized subjective and objective test criteria, into algorithm optimization and, ultimately, into the product requirements.
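Finally, to illustrate what consuming such a raw interface can look like on the domain-controller side: in a typical FPD-Link III setup the deserializer converts the link back to MIPI CSI-2, and the SoC's driver stack exposes the stream as an ordinary V4L2 video device. The sketch below grabs a frame from such a device with OpenCV; the device index and resolution are assumptions for illustration, not part of the original text.

```python
import cv2

# Assumed: the FPD-Link III deserializer plus the SoC driver chain expose
# the camera as video device 0; index and resolution are illustrative.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

ok, frame = cap.read()          # one frame of image data for the AI stack
if ok:
    print("frame:", frame.shape, frame.dtype)
else:
    print("no frame received; check the link/driver chain")
cap.release()
```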