English Abstract
In recent years, road transportation has advanced from partial driving automation (Level 2) toward high driving automation (Level 4). In the future, the relationship between drivers and Level 4 self-driving cars will be one of "human-machine cooperation": humans and machines "can" and "must" work closely together to complete the driving task. This article uses Level 4 self-driving cars as the research context to explore drivers' decision-making behavior when an emergency arises after automated driving has been engaged. The results show that when automated driving is engaged and a recognizable emergency arises, most drivers choose to hand driving control over to the Artificial Intelligence (AI); among those who choose to let the AI carry out their prior driving decisions, most decide to minimize casualties, yet minimizing casualties in fact results in harm to passersby who obey the traffic rules. Our findings indicate that human-machine cooperation and conflict inevitably involve ethical issues, and they also prompt a re-examination of the implications for traffic regulations and human safety.