
November 12, 2020 (Thursday)

Evaluation of automated driving ethics by driving simulator (2)

Now let's consider how to program the automated driving ethic. Drivers are expected to have a clear sense of ethics, and the automated driving ethic should be held to the same standard. The simplest approach is to store the rules for which option to choose as production rules and select among them logically with if-then-else. Swapping the production rules in the database by destination would allow a variety of specifications. However, this approach requires that every confronting situation be embedded in the database in advance. Actual situations may be ambiguous or unexpected, so automated driving decisions break down quickly in situations that if-then-else cannot determine. In order to deal with situations that do not fit in the database or are unexpected, automated driving vehicles will need to learn about the real world. That would allow them to judge a far wider range of situations.
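
As a rough illustration of this production-rule approach, the sketch below stores (condition, action) pairs that could be swapped out per destination; the scenario fields and rules are hypothetical and not taken from any real specification.

```python
# A minimal sketch, with assumed scenario fields, of the production-rule
# (if-then-else) approach described above. Rules are stored as
# (condition, action) pairs so a different rule database could be loaded
# per destination or region.

RULES_DEFAULT = [
    (lambda s: s["pedestrian_ahead"] and not s["obstacle_in_next_lane"], "change_lane"),
    (lambda s: s["pedestrian_ahead"] and s["obstacle_in_next_lane"], "emergency_brake"),
    (lambda s: s["obstacle_ahead"], "change_lane"),
]

def choose_action(scenario, rules=RULES_DEFAULT, default="continue"):
    """Return the action of the first rule whose condition matches the scenario."""
    for condition, action in rules:
        if condition(scenario):
            return action
    # Situations not covered by any rule fall through to the default, which is
    # exactly the weakness noted above: every confronting situation has to be
    # enumerated in the database beforehand.
    return default

scenario = {"pedestrian_ahead": True, "obstacle_in_next_lane": True,
            "obstacle_ahead": False}
print(choose_action(scenario))  # -> "emergency_brake"
```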

Before proceeding with the development of the program, let's see what society thinks. First of all, the famous Three Principles of Robotics. Automated driving vehicles think and act on their own based on information obtained through sensing. In other words, they can be regarded as robots, so let us refer to the Three Principles of Robotics. They were proposed by the science fiction writer Isaac Asimov: in "I, Robot" (1950), they are quoted from the "Handbook of Robotics, 56th Edition, 2058 A.D." as follows.

Article 1: Robots must not harm human beings, nor allow human beings to come to harm by overlooking such danger.
Article 2: Robots must obey orders given by human beings. However, this shall not apply where the order contravenes Article 1.
Article 3: Robots must protect themselves, as long as doing so does not conflict with Article 1 or Article 2.

Suppose that a person is being carried by a robot that follows these three principles. How will the robot behave when the trolley problem occurs? Let's see how an automated driving vehicle that follows the Three Principles of Robotics behaves in the trolley problem. When no one is on board, with a pedestrian straight ahead and an obstacle in the adjacent lane, the vehicle should change lanes in accordance with Article 1 and accept a self-damaging single-vehicle accident. If there were five people straight ahead and one person in the adjacent lane, the vehicle would try to change lanes to minimize the number of victims; but that still harms the one person and violates Article 1, so the three principles alone turn out to be insufficient. What would the vehicle do if a driver were on board, with one person straight ahead and an obstacle in the adjacent lane? Going straight violates Article 1, and changing lanes also violates Article 1 because it harms the driver. Even if the driver orders the vehicle to go straight, obeying would violate Article 1, and changing lanes would violate Article 1 as well, so the vehicle cannot make a decision. Again, the Three Principles of Robotics are not enough. New rules are needed so that decisions can be made in accordance with ethical standards.

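The deadlock can be made concrete with a small sketch. The scenario encoding below is hypothetical; it checks each candidate action against Article 1 and shows that in the dilemma above no admissible action remains.

```python
# A sketch, with a hypothetical scenario encoding, of why the Three Principles
# of Robotics cannot resolve the trolley problem: every candidate action is
# screened against Article 1 (do not harm humans), and in the dilemma case
# every action violates it, so no admissible action is left.

def humans_harmed(action, scenario):
    """Total predicted human harm (pedestrians plus occupants) for an action."""
    outcome = scenario[action]
    return outcome["pedestrians_harmed"] + outcome["occupants_harmed"]

def admissible_actions(scenario):
    """Keep only the actions that harm no human at all (Article 1)."""
    return [action for action in scenario if humans_harmed(action, scenario) == 0]

# One pedestrian straight ahead, an obstacle in the adjacent lane, driver on board:
scenario = {
    "go_straight": {"pedestrians_harmed": 1, "occupants_harmed": 0},
    "change_lane": {"pedestrians_harmed": 0, "occupants_harmed": 1},
}
print(admissible_actions(scenario))  # -> [] : the three principles alone cannot decide
```
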
Therefore, IEEE (Institute of Electrical and Electronics Engineers) has published its Ethically Aligned Design guidelines, which are being revised while taking into account the opinions of practitioners and researchers. The Ethically Aligned Design guidelines target artificial intelligence (AI) and autonomous systems (AS). Automated driving will be powered by AI, and the vehicle is also an autonomous system; AI and AS together are referred to as A/IS. The guidelines list the following three points as the major principles of A/IS.

1. Universal Human Values: A/IS can be a huge force for good in society if it is designed to respect human rights, align with human values, increase overall well-being, and protect us. These values are necessary not only for engineers but also for policymakers, and A/IS should benefit everyone, not just a particular group or a single country.

2. Political Self-Determination and Data Agency: When properly designed and implemented, A/IS has great potential to foster political freedom and democracy in a culturally appropriate way. Such systems can improve the effectiveness of political processes, but people's digital data must be protected in a verifiable manner.

3. Technical Dependability: A/IS should deliver services that can be trusted, where trust means that A/IS reliably and securely achieves the objectives for which it was designed while enhancing human-driven values. Its operation should be monitored to confirm that it remains consistent with human values and meets the prescribed ethical objectives of respecting rights. In addition, verification and validation processes should be developed so that the systems are auditable and certifiable.

The IEEE Ethically Aligned Design guidelines set out the following four issues to be considered in order to realize these three principles.

1. Human rights framing: How can we ensure that A/IS does not violate human rights?
2. Accountability framing: How can we ensure that A/IS is accountable?
3. Transparency framing: How can we ensure that A/IS is transparent?
4. Educational framing: How can we maximize the benefits of A/IS technology and minimize the risk of misuse?

This means that the program (system) implementing automated driving ethics must be framed so as not to violate human rights, must be able to output its decision-making process for ex post facto verification, and must be developed so that it causes the fewest possible problems in the region where it is used when that verification takes place (a sketch of such an auditable decision record appears after the list below). In addition, the following three points should be given priority as implementation principles within the automated driving development team.

1. Identify the norms and values of each community affected by the A/IS.
2. Implement those norms and values in the A/IS.
3. Evaluate the alignment and compatibility of those norms and values between the people and the A/IS within each community and development team.

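To make the accountability and transparency requirements concrete, the sketch below records every ethical decision in an append-only log so the decision-making process can be output and audited after the fact. The field names and file format are assumptions for illustration only.

```python
# A minimal sketch (field names and format are assumptions) of recording each
# ethical decision so that the decision-making process can be output and
# audited ex post facto, as the accountability and transparency framings require.

import json
import time

def log_decision(scenario, candidates, chosen, rule_id, path="decision_log.jsonl"):
    """Append one auditable record per decision to a JSON-lines file."""
    record = {
        "timestamp": time.time(),     # when the decision was made
        "scenario": scenario,         # the sensed situation as the system saw it
        "candidates": candidates,     # every action that was considered
        "chosen": chosen,             # the action the vehicle actually took
        "rule_id": rule_id,           # the norm or rule that justified the choice
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    scenario={"pedestrian_ahead": True, "obstacle_in_next_lane": True},
    candidates=["go_straight", "change_lane", "emergency_brake"],
    chosen="emergency_brake",
    rule_id="occupant_priority_v1",
)
```
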
In order to develop a program that reflects ethics in automated driving in accordance with the IEEE Ethically Aligned Design guidelines, the author suggests, for example, the following (a minimal code sketch of these rules appears after the list):

1. In order not to violate human rights, when there is no one in the automated driving vehicle, priority is given to life outside the vehicle.
 (If a pet is straight ahead and an obstacle is in the adjacent lane, the vehicle chooses to collide with the obstacle.)
2. If occupants are present in the automated driving vehicle, priority is given to occupant safety.
 (If a pedestrian is straight ahead and an obstacle is in the adjacent lane, the vehicle does not change lanes. Regulations covering this case shall be established in advance.)
3. When damage is inflicted on persons other than the occupants, the judgment shall be made in accordance with the ethics of the country or region where the automated driving vehicle is used.

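As a rough sketch of how these three example principles might be expressed in code, the function below distinguishes the no-occupant and occupant cases and leaves the region-specific ethics of item 3 as a deliberate placeholder; all names and scenario fields are hypothetical.

```python
# A sketch of how the three example principles above might look in code.
# The names and scenario fields are hypothetical; the region-specific ethics
# of item 3 is deliberately left as a placeholder.

def decide(scenario, occupants_on_board, region_policy=None):
    """Choose between going straight and changing lanes per example principles 1-3."""
    straight = scenario["go_straight"]       # e.g. {"hits": "pet"}
    lane_change = scenario["change_lane"]    # e.g. {"hits": "obstacle"}

    if not occupants_on_board:
        # Principle 1: with no one on board, protect life outside the vehicle,
        # even at the cost of the vehicle itself.
        if straight["hits"] in ("pedestrian", "pet"):
            return "change_lane"             # collide with the obstacle instead
        return "go_straight"

    if lane_change["hits"] == "obstacle":
        # Principle 2: with occupants present, do not steer into the obstacle;
        # this case is to be fixed by regulation in advance.
        return "go_straight"

    # Principle 3: harm to third parties is unavoidable either way, so defer to
    # the ethics of the country or region where the vehicle is used.
    if region_policy is not None:
        return region_policy(scenario)
    raise NotImplementedError("regional ethics policy not yet defined")

print(decide({"go_straight": {"hits": "pet"},
              "change_lane": {"hits": "obstacle"}},
             occupants_on_board=False))      # -> "change_lane"
```
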
The above is just an example; it does not represent the author's own preferences or convictions, and there are no references behind it. The idea is to place such basic principles as the premise of program development. Then, what kind of program should be written so as to conform to the ethics of the country or region referred to in item 3?

 
