Lu Feng

Associate Professor
University of Virginia

Project: NSF CRII: CPS: Cognitive Trust in Human-Autonomous Vehicle Interactions

Sponsor: This project is supported by the National Science Foundation under CRII award CNS-1755784 (April 2018 - present).

Objective

We are witnessing accelerating technological advances in autonomous vehicles. Driver-assistance functions such as adaptive cruise control and lane keeping are already prevalent, while fully autonomous vehicles are being developed and tested on public roads. Our decisions about whether or not to rely on automation technology are guided by trust, and acting on inappropriate trust can lead to catastrophic outcomes. A pertinent example is the 2016 fatal crash of a Tesla operating in Autopilot mode, which resulted from the driver's over-reliance ("overtrust") on the automation. As the degree of vehicle autonomy increases and the nature of human-autonomy interactions becomes more complex, key questions arise: How do we ensure safety and trust in human-autonomous vehicle partnerships? How do we know when to trust an autonomous vehicle, and how much? And how much should the autonomous vehicle trust us? This project targets a major gap in design methodologies for capturing the social, trust-based decisions within human-autonomy partnerships. Its objective is to develop languages and algorithms for formally expressing and reasoning about trust in human-autonomous vehicle interactions.

Driving Simulation Testbed

We have built a (semi-)autonomous driving simulation testbed in our lab at the University of Virginia; the following figure shows the setup. The hardware platform is based on the Force Dynamics 401CR driving simulator, a four-axis motion platform that tilts and rotates to simulate the experience of being in a vehicle. The human driver interacts with the simulator through the PreScan software, which can be programmed to simulate (semi-)autonomous driving scenarios. While driving, the human driver is monitored by a suite of sensors: the Advanced Brain Monitoring B-Alert X24 EEG system, Shimmer3 EMG (electromyogram), ECG (electrocardiogram), and GSR (galvanic skin response) sensors, and Tobii Pro eye-tracking glasses. We use the iMotions Biometric Research Platform to integrate all of these sensing devices. This plug-and-play platform provides an easy way to combine multimodal sensing data from the various devices (biometric sensors, eye tracking, user surveys), and it synchronizes the timestamps of all device data for analysis and visualization.

[Figure: driving simulation testbed setup]
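To illustrate the kind of timestamp-based fusion the iMotions platform performs, here is a minimal sketch that aligns two sensor streams sampled at different rates onto a common timeline. The file names, column names, and sampling rates are hypothetical; iMotions exports its own synchronized format, so this only sketches the underlying idea.

```python
# Minimal sketch: aligning multi-rate sensor streams on a shared timeline.
# File paths, column names, and rates are hypothetical placeholders.
import pandas as pd

def load_stream(path: str, value_col: str) -> pd.DataFrame:
    """Load one sensor stream with an integer 'timestamp' column in ms."""
    df = pd.read_csv(path, usecols=["timestamp", value_col])
    return df.sort_values("timestamp")  # merge_asof requires sorted keys

gsr = load_stream("gsr.csv", "conductance_uS")     # e.g., ~128 Hz (hypothetical)
pupil = load_stream("eyetracker.csv", "pupil_mm")  # e.g., ~50 Hz (hypothetical)

# For each GSR sample, attach the nearest pupil sample within 40 ms,
# tolerating the different sampling rates and small clock offsets.
merged = pd.merge_asof(
    gsr, pupil, on="timestamp",
    tolerance=40, direction="nearest",
)
print(merged.head())
```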

We are currently conducting human-subject experiments (under UVA IRB-HSR protocol #20606) to collect data for better understanding the factors that influence human trust in autonomous driving. On average, each participant takes about 3 hours to complete the experiments: they sit in an indoor driving simulator (located in Rice Hall) and drive in (semi-)autonomous mode for 16 trials with different simulated scenarios. The participants press buttons on the driving simulator to increase or decrease their reported trust level in the autonomy; they also wear a set of physiological sensors, including eye-tracking glasses, GSR, EEG, and EMG sensors, and fill in a questionnaire toward the end of the experiment. We are also analyzing the data collected from the experiments, with the aim of identifying key factors behind trust changes and building mathematical models that represent the evolution of trust dynamics. This effort addresses Thrust I of this project.
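As a sense of what such a trust-dynamics model can look like, the sketch below implements a deliberately simplified linear update in which trust rises after flawless autonomous behavior and drops after alarms. The coefficients and event structure are invented for illustration, not fitted values from our experiments.

```python
# Minimal sketch of a linear trust-dynamics update of the kind described
# above. All coefficients are made up for illustration, not fitted values.

def update_trust(trust: float, automation_success: bool, alarm: bool) -> float:
    """One-step trust update on a 1-7 Likert-style scale."""
    ALPHA = 0.9    # inertia: trust changes gradually (hypothetical)
    GAIN = 0.5     # increase after flawless autonomous behavior (hypothetical)
    PENALTY = 1.2  # decrease after an alarm / automation fault (hypothetical)
    t = ALPHA * trust + GAIN * automation_success - PENALTY * alarm
    return min(7.0, max(1.0, t))  # clamp to the Likert range

trust = 4.0  # neutral initial trust
for success, alarm in [(True, False), (True, False), (False, True)]:
    trust = update_trust(trust, success, alarm)
    print(f"trust = {trust:.2f}")
```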

Impact

The results of this project will contribute to the design and development of trustworthy autonomous vehicles. The findings can have an impact on multiple disciplines, including autonomy, cyber-physical systems, formal methods, and human factors. The project is also providing research and training opportunities for multiple PhD and undergraduate students at the University of Virginia, who will become part of the next-generation workforce for developing safe and trustworthy autonomous systems.

Research Progress & Publications

  • A Case Study of Trust on Autonomous Driving [PDF] [DOI]
    Shili Sheng, Erfan Pakdamanian, Kyungtae Han, BaekGyu Kim, Prashant Tiwari, Inki Kim, and Lu Feng.
    22nd IEEE Intelligent Transportation Systems Conference (ITSC), 2019
Abstract: As autonomous vehicles benefit society, understanding the dynamic changes in humans' trust during human-autonomous vehicle interaction can help to improve the safety and performance of autonomous driving. We designed and conducted a human subjects study involving 19 participants. Each participant was asked to enter their trust level on a Likert scale in real time during experiments on a driving simulator. We also collected physiological data (e.g., heart rate, pupil size) of participants as complementary indicators of trust. We used analysis of variance (ANOVA) and Signal Temporal Logic (STL) to analyze the experimental data. Our results show the influence of different factors (e.g., automation alarms, weather conditions) on trust, and the individual variability in human reaction time and trust change.

  • Towards Transparent Robotic Planning via Contrastive Explanations [PDF] [DOI]
    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
    Abstract: Providing explanations of chosen robotic actions can help to increase the transparency of robotic planning and improve users’ trust. Social sciences suggest that the best explanations are contrastive, explaining not just why one action is taken, but why one action is taken instead of another. We formalize the notion of contrastive explanations for robotic planning policies based on Markov decision processes, drawing on insights from the social sciences. We present methods for the automated generation of contrastive explanations with three key factors: selectiveness, constrictiveness and responsibility. The results of a user study with 100 participants on the Amazon Mechanical Turk platform show that our generated contrastive explanations can help to increase users’ understanding and trust of robotic planning policies, while reducing users’ cognitive burden.

  • Trust-Based Route Planning for Autonomous Driving [PDF] [DOI]
    Shili Sheng, Erfan Pakdamanian, Kyungtae Han, BaekGyu Kim, John Lenneman, Ziran Wang, and Lu Feng.
    ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), 2021
Abstract: Route planning is a widely used service in people's daily life. Traditional route planning methods mostly focus on minimizing travel distance or time and lack consideration of human factors. As trust is a key determinant of people's adoption of autonomous driving vehicles, we present a novel route planning method for autonomous driving vehicles that accounts for trust. Since trust is an unobservable state of mind, it is essential to understand 1) how trust evolves during the interaction with the autonomous driving vehicle and 2) what decision (take over from the system or not) a human driver may make when facing incidents on the road. To address these challenges, we developed a trust dynamics model and a driver behavior model, respectively. We further integrated these two models into a partially observable Markov decision process (POMDP) to obtain an optimal route policy. We recruited 22 participants to drive in a high-fidelity driving simulator. The results show that route planning accounting for trust achieves higher satisfaction and a lower takeover rate, demonstrating the effectiveness of our model.

  • Multi-Objective Controller Synthesis with Uncertain Human Preferences [PDF]
    Shenghui Chen, Kayla Boggess, David Parker, and Lu Feng.
    ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), 2022 (accepted)
    Abstract: Complex real-world applications of cyber-physical systems give rise to the need for multi-objective controller synthesis, which concerns the problem of computing an optimal controller subject to multiple (possibly conflicting) criteria. The relative importance of objectives is often specified by human decision-makers. However, there is inherent uncertainty in human preferences (e.g., due to artifacts resulting from different preference elicitation methods). In this paper, we formalize the notion of uncertain human preferences, and present a novel approach that accounts for this uncertainty in the context of multi-objective controller synthesis for Markov decision processes (MDPs). Our approach is based on mixed-integer linear programming and synthesizes an optimally permissive multi-strategy that satisfies uncertain human preferences with respect to a multi-objective property. Experimental results on a range of large case studies show that the proposed approach is feasible and scalable across varying MDP model sizes and uncertainty levels of human preferences. Evaluation via an online user study also demonstrates the quality and benefits of the synthesized controllers.

  • Planning for Automated Vehicles with Human Trust
    Shili Sheng, Erfan Pakdamanian, Kyungtae Han, Ziran Wang, John Lenneman, David Parker, and Lu Feng.
    ACM Transactions on Cyber-Physical Systems - Special Issue of “Best of ICCPS 2021”, 2022 (under review)
Abstract: Recent work has considered personalized route planning based on user profiles, but none of it accounts for human trust. We argue that human trust is an important factor to consider when planning routes for automated vehicles. This paper presents a trust-based route planning approach for automated vehicles. We formalize the human-vehicle interaction as a partially observable Markov decision process (POMDP) and model trust as a partially observable state variable of the POMDP, representing the human's hidden mental state. We build data-driven models of human trust dynamics and takeover decisions, which are incorporated in the POMDP framework, using data collected from an online user study with 100 participants on the Amazon Mechanical Turk platform. We compute optimal routes for automated vehicles by solving for optimal policies in the POMDP, and evaluate the resulting routes via human subject experiments with 22 participants on a driving simulator. The experimental results show that participants taking the trust-based route generally reported more positive responses in the after-driving survey than those taking the baseline (trust-free) route. In addition, we analyze the trade-offs between multiple planning objectives (e.g., trust, distance, energy consumption) via multi-objective optimization of the POMDP. We also identify a set of open issues and implications for real-world deployment of the proposed approach in automated vehicles.
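The two route-planning papers above model trust as a hidden POMDP state that is inferred from observed driver behavior. The sketch below illustrates that core idea: a Bayesian belief update over a discrete hidden trust level given an observed takeover decision. The state names and probabilities are invented for illustration; this is not the papers' fitted model or implementation.

```python
# Illustrative sketch (not the papers' implementation): trust as a hidden
# POMDP state, with the driver's takeover decision as the observation.
# All probabilities below are invented for illustration.
import numpy as np

TRUST_LEVELS = ["low", "medium", "high"]

# P(takeover | trust) when an incident occurs -- hypothetical observation model.
P_TAKEOVER = {"low": 0.8, "medium": 0.4, "high": 0.1}

def belief_update(belief: np.ndarray, took_over: bool) -> np.ndarray:
    """Bayes update of the belief over hidden trust given one observation."""
    likelihood = np.array([
        P_TAKEOVER[t] if took_over else 1.0 - P_TAKEOVER[t]
        for t in TRUST_LEVELS
    ])
    posterior = likelihood * belief
    return posterior / posterior.sum()

belief = np.array([1/3, 1/3, 1/3])      # start from a uniform belief
for took_over in [True, False, False]:  # observed driver behavior per incident
    belief = belief_update(belief, took_over)
    print(dict(zip(TRUST_LEVELS, belief.round(3))))
```

A planner can then choose routes that trade off expected travel cost against the risk of eroding trust, by optimizing the POMDP policy over such beliefs.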