
November 11, 2020 (Wednesday)

Evaluation of automated driving ethics by driving simulator (1)

There are five challenges to achieving Level 5 automated driving: dealing with inadequate sensor performance, take-over, the mixture of manual and automated driving, liability in the event of an accident, and ethics. Of these, in addition to the take-over discussed so far, we will consider the ethical issue and the problem of mixed manual and automated driving. Let's start with the ethical issues.

The ethical problem of automated driving is, in essence, the trolley problem applied to an automated vehicle. Suppose the brakes of an automated vehicle have failed and it cannot stop in time, and there is a pedestrian in its own lane and another in the adjacent lane. The vehicle can either swerve toward the roadside to avoid the pedestrians and collide with the roadside zone, or keep going and hit the pedestrian ahead. In this extreme situation, the automated driving system can be programmed in advance to make the choice. That choice must not be left to the developer's personal taste, so what should the decision be based on?
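To make the nature of that choice concrete, here is a minimal sketch in Python. It is not any vehicle maker's actual logic; the class, the harm fields, and the weights are all hypothetical, and the point is only that someone has to set the weights.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted consequence of one candidate maneuver (hypothetical model)."""
    maneuver: str          # e.g. "stay_in_lane" or "swerve_to_roadside"
    pedestrian_harm: int   # pedestrians likely to be hit
    occupant_harm: int     # vehicle occupants likely to be harmed

def choose_maneuver(outcomes, pedestrian_weight=1.0, occupant_weight=1.0):
    """Pick the maneuver with the lowest weighted harm.

    Choosing the weights is exactly the disputed part: it is an ethical
    decision, not an engineering one.
    """
    def cost(o: Outcome) -> float:
        return pedestrian_weight * o.pedestrian_harm + occupant_weight * o.occupant_harm
    return min(outcomes, key=cost)

# Brakes have failed and only two maneuvers remain (illustrative numbers).
options = [
    Outcome("stay_in_lane", pedestrian_harm=1, occupant_harm=0),
    Outcome("swerve_to_roadside", pedestrian_harm=0, occupant_harm=1),
]
print(choose_maneuver(options).maneuver)
```

With equal weights the two outcomes tie and min simply returns the first option listed, which is precisely why the program alone cannot settle the question.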

In response to this problem, the Massachusetts Institute of Technology (MIT) Media Lab launched a project called Moral Machine. The Moral Machine website collects people's judgments through questionnaires about various automated-driving versions of the trolley problem. There are many questions, and visitors can also post new scenarios. Each question assumes an automated vehicle with failed brakes on a two-lane road that will cause harm whether it goes straight or changes lanes, and asks the respondent to choose between going straight and changing lanes in situations such as the following.

・If the vehicle goes straight, five pedestrians will die; if it changes lanes, it will hit an obstacle on the road and its one occupant, the driver, will die.

・The automated vehicle carries a pregnant woman and a child who has just turned three, and the group that appeared to be five pedestrians is actually four people and one dog; of the four, two are criminals, one is elderly, and one is homeless. Moreover, all of them may be crossing against a red light.

There are many other settings. In the first setup above you would probably choose to change lanes, but how many people would still choose a lane change in the second? After you answer the Moral Machine questions, the site shows your own choices alongside the average of all respondents' choices. The author's answers were close to the average, though noticeably different on some questions. In other words, the Moral Machine has no right answer. So far some 40 million answers have been collected from people around the world, and the tendencies vary by region and culture.

The parameters of choice are the number of survivors versus victims, sex, age, human or animal, body shape, social status, passenger or pedestrian, compliance with traffic rules, and whether the vehicle changes course. Among these, a trend common to respondents regardless of nationality, age, or religion is to prefer humans over animals, larger numbers over smaller numbers, and the young over the elderly. Respondents also tend to give priority to higher social status and to compliance with traffic rules. For the other parameters there is no consistent overall trend, but there are regional ones. In Finland, for example, where the income gap is small, neither passenger versus pedestrian nor social status is a decisive factor. In Central and South America, by contrast, many respondents felt that the homeless and criminals need not be saved. Japan's trend differs slightly from the rest of the world: Japanese respondents place less weight on the number of lives saved and tend to focus on who is saved rather than how many, and they also tend not to change course.

In addition, Japan's tendency to spare pedestrians rather than passengers is the most pronounced in the world, with Norway second and Singapore third, while China tends to protect its passengers rather than pedestrians. France tends to weight the number of survivors and to save the young over the old, whereas in Taiwan, China, and South Korea there is a strong tendency to save the elderly. These results show that there is no single answer to the trolley problem that the whole world can accept. However, since most Moral Machine respondents are young men, a survey covering all generations might give different results.

Now suppose automated vehicles adopt what seems like a fair rule for the trolley problem: minimize the number of victims. If the victims may include the occupants, would you still want to ride in such a vehicle? Automobile specifications are basically uniform worldwide, and automated vehicles are unlikely to change that principle. According to the Moral Machine results, however, drivers' ethics are not the same across countries and regions. Is it acceptable to change the specification by destination so that different ethics apply in different parts of the world?

One factor that might justify changing the specification is the traffic infrastructure of the region where the vehicle is used. For example, consider a region whose roads have one automobile lane in each direction and a bicycle lane next to the automobile lane. To reduce the risk of contact with oncoming vehicles, the automated vehicle is programmed to drive slightly farther from the oncoming lane, and therefore slightly closer to the bicycle lane. This lowers the risk of contact with oncoming vehicles but raises the risk of contact with bicycles, so collisions between automated vehicles and bicycles in that region will increase to a statistically significant degree. In this way, changing the specification of automated driving changes the type of accident that occurs. For this reason, the author thinks it would be better to standardize the accident-related specifications of automated driving.
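The trade-off can be seen with a toy calculation. The lane width, vehicle width, and offset below are assumptions chosen only to illustrate how shifting the lateral position reallocates clearance between the oncoming lane and the bicycle lane.

```python
LANE_WIDTH_M = 3.0     # assumed automobile lane width
VEHICLE_WIDTH_M = 1.8  # assumed vehicle width

def clearances(lateral_offset_m: float):
    """Clearance (m) to the oncoming-lane side and to the bicycle-lane side.

    A positive offset shifts the vehicle away from the oncoming lane,
    i.e. toward the bicycle lane.
    """
    margin = (LANE_WIDTH_M - VEHICLE_WIDTH_M) / 2  # about 0.6 m per side when centered
    return round(margin + lateral_offset_m, 2), round(margin - lateral_offset_m, 2)

print(clearances(0.0))  # (0.6, 0.6) -> centered in the lane
print(clearances(0.3))  # (0.9, 0.3) -> safer against oncoming traffic, tighter on bicycles
```

Whatever margin is gained on the oncoming side is taken from the bicycle side, which is why the accident type shifts rather than disappears.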

As for ethics, ethics itself cannot be unified, so why not unify the process by which the conclusion is reached? In other words, develop programs that use the same mechanism everywhere while reflecting the preferences of local users in each country, with the underlying principles kept intact.
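One way to picture such a shared mechanism is sketched below. This is only an illustration of the idea, not an existing program: the region names, the preference weights, and the option fields are all made up, and only the weight table would differ between regions while the scoring rule stays the same.

```python
# A single shared decision mechanism; only the regional weight table differs.
# All regions, weights, and option values here are illustrative assumptions.
REGIONAL_WEIGHTS = {
    "region_A": {"lives_saved": 1.0, "pedestrians_spared": 1.2, "rule_compliance": 1.1},
    "region_B": {"lives_saved": 1.3, "pedestrians_spared": 0.9, "rule_compliance": 1.0},
}

def score(option: dict, region: str) -> float:
    """Same scoring rule everywhere; local preferences enter only as weights."""
    w = REGIONAL_WEIGHTS[region]
    return sum(w[key] * option[key] for key in w)

def choose(options: list[dict], region: str) -> dict:
    """Pick the option with the highest regional score."""
    return max(options, key=lambda o: score(o, region))

options = [
    {"name": "go_straight", "lives_saved": 1, "pedestrians_spared": 0, "rule_compliance": 1},
    {"name": "change_lane", "lives_saved": 4, "pedestrians_spared": 5, "rule_compliance": 0},
]
for region in REGIONAL_WEIGHTS:
    print(region, choose(options, region)["name"])
```

The principle, a single auditable scoring process, stays fixed; only the weights, obtained from local users, differ by country.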
