High-quality machine vision is the foundation of every modern advanced autonomous system. This means not only optical cameras, but all sensing mechanisms capable of perceiving the environment. Without visual information, nothing can be controlled autonomously, and this naturally applies to future autonomous vehicles as well.
Current approach to vision in vehicles
If a car or other transport equipment is to be autonomous to any degree, or at least significantly assist the driver, then one of the most important parts of its systems is its vision. More precisely: sensing the environment around the vehicle and producing relevant and, above all, highly reliable information about events and objects around the vehicle at any speed and under any conditions. Machine vision in vehicles must therefore provide multiple redundancy. In other words, it must monitor the surroundings with several different means/technologies, so that if one identification method stops working properly because of a failure or unsuitable conditions, the other sensing means can stand in for it. And as is known from statistics, the probability that several independent sensing channels all fail at the same time is far lower than the probability that any single one of them fails.
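The redundancy argument can be illustrated numerically. Assuming independent failures, the probability that all channels fail at once is the product of the individual failure probabilities; this is a simplified model with illustrative numbers, since real sensor failures are often correlated (fog degrades camera and lidar alike).

```python
# Simplified illustration: under the independence assumption, the
# probability that ALL sensing channels fail at once is the product of
# the individual failure probabilities. Real failures are often
# correlated, so this is an optimistic lower bound.
from math import prod

def total_failure_probability(channel_failure_probs):
    """P(all channels fail) assuming independent failures."""
    return prod(channel_failure_probs)

# Hypothetical per-trip failure probabilities for three channels.
camera, radar, lidar = 0.01, 0.02, 0.015
p_all = total_failure_probability([camera, radar, lidar])
print(f"{p_all:.2e}")  # 3.00e-06, far below any single channel
```

Even with these modest per-channel figures, the combined system fails orders of magnitude less often than any individual sensor.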
Therefore, for the safe and reliable operation of autonomous vehicles, different sensing technologies must be combined, including not only optical cameras but also other camera-based and non-optical scanning systems. These can be referred to as "on-board" vision technologies, complemented by external environment-sensing technologies such as continuously received online map updates, which stream up-to-date obstacle information to the vehicle.
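Combining the two feeds can be sketched as a simple merge of obstacle records. All names here (`Obstacle`, `merge_obstacles`) are illustrative inventions, not part of any real AV stack; the design choice shown is that locally sensed data overrides the cloud copy, since it is fresher.

```python
# Minimal sketch of combining on-board detections with a cloud obstacle
# feed. Types and names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Obstacle:
    obstacle_id: str
    kind: str        # e.g. "pedestrian", "fallen_tree"
    source: str      # "onboard" or "cloud"

def merge_obstacles(onboard, cloud):
    """Union of both feeds; on-board detections win on ID collisions,
    because locally sensed data is fresher than a cloud update."""
    merged = {o.obstacle_id: o for o in cloud}
    merged.update({o.obstacle_id: o for o in onboard})
    return list(merged.values())

onboard = [Obstacle("a1", "pedestrian", "onboard")]
cloud = [Obstacle("a1", "pedestrian", "cloud"),
         Obstacle("b2", "fallen_tree", "cloud")]
print(len(merge_obstacles(onboard, cloud)))  # 2
```

The vehicle thus acts on everything the cloud knows about, while never letting a stale remote record override what its own sensors currently see.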
A vehicle equipped in this way can thus "see around the corner", i.e. know about possible critical situations or obstacles in traffic before they can be identified by the means installed in the vehicle, and even before the driver could see them. This is not only the traffic information everyone knows from navigation apps such as Google or Seznam maps, but much more accurate and up-to-date information: vehicles on the roadside, road excavations, currently stationary vehicles, a fallen tree, or people walking on the road. Information from other vehicles is "connected" through fast online cloud evaluation of each vehicle's vision for the benefit of the others. In this shared vehicle intelligence, one vehicle sees people on the curb with its machine vision, and thanks to that the next vehicle on the same route receives the information that there are people on the road who need to be watched.

The car's automation will therefore slightly reduce speed in advance, so that any sudden braking (should pedestrians "jump into the road") can be performed without consequences for the vehicle's occupants or the pedestrians. Fast computer technology should be able to calculate in an instant, from the operating conditions (weather and road-surface information also obtained from the cloud), the vehicle's mechanical braking properties, and the possible speed of the people's movement, at what speed the vehicle can pass them so that, if it has to brake or swerve, everything can still be done safely for all road users.
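The speed calculation described above can be sketched with basic constant-deceleration physics: given a reaction time and a friction coefficient (lower in rain, which the cloud can report), find the highest speed from which the car can still stop within the free distance to the pedestrians. The model and all numbers are illustrative assumptions, not a production algorithm.

```python
# Back-of-envelope sketch: highest speed from which the vehicle can
# still stop within the free distance ahead, using a constant
# deceleration of mu * g. Parameter values are illustrative only.
import math

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(v, mu, reaction_time):
    """Reaction distance plus braking distance at deceleration mu*g."""
    return v * reaction_time + v**2 / (2 * mu * G)

def max_safe_speed(free_distance, mu, reaction_time):
    """Largest v with stopping_distance(v) <= free_distance:
    solve v^2 / (2*mu*g) + v*t - d = 0 for v."""
    a = 1 / (2 * mu * G)
    disc = reaction_time**2 + 4 * a * free_distance
    return (-reaction_time + math.sqrt(disc)) / (2 * a)

# 40 m of free road, wet surface (mu ~ 0.4), 1 s reaction time.
v = max_safe_speed(40.0, 0.4, 1.0)
print(f"{v * 3.6:.0f} km/h")  # ~51 km/h
```

A drier road (higher mu) or a shorter reaction time raises the admissible speed; a cloud report of rain lowers it, which is exactly the anticipatory slow-down the paragraph describes.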
The new IEEE 2846 standard is being introduced to standardize, test, and assess safety in autonomous vehicles. It will include a formal mathematical model of automated-vehicle decision-making rules that is formally verifiable (with mathematics), technology neutral (meaning anyone can use it), and adjustable to allow regional adaptation by local governments. It will also include the test methodology and tools necessary to perform AV verification and assess compliance with the standard.
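IEEE 2846 itself does not prescribe code, but the kind of formally verifiable decision rule it builds on can be illustrated with the minimum safe longitudinal distance from Mobileye's Responsibility-Sensitive Safety (RSS) model, one of the inputs to the standard. The formula is from the published RSS model; the parameter values below are assumptions for illustration.

```python
# Illustration of a formally verifiable rule in the spirit of IEEE 2846:
# the RSS minimum safe longitudinal following distance. Worst case:
# during reaction time rho the rear car accelerates at a_accel, then
# brakes at only b_rear_min, while the front car brakes hard at
# b_front_max. Parameter values are illustrative assumptions.
def rss_min_safe_distance(v_rear, v_front, rho=1.0,
                          a_accel=3.0, b_rear_min=4.0, b_front_max=8.0):
    """Distance (m) the rear car needs so the two never collide."""
    v_rho = v_rear + rho * a_accel
    d = (v_rear * rho
         + 0.5 * a_accel * rho**2
         + v_rho**2 / (2 * b_rear_min)
         - v_front**2 / (2 * b_front_max))
    return max(d, 0.0)

# Both cars travelling at 20 m/s (~72 km/h):
print(f"{rss_min_safe_distance(20.0, 20.0):.1f} m")
```

Because the rule is a closed-form inequality over physical parameters, compliance can be checked mathematically rather than only by road testing, which is the point of a formally verifiable model.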