Lu Feng
Associate Professor
University of Virginia

Research Overview

My research focuses on assuring the safety and trustworthiness of embodied AI systems that perceive, reason, and act in the physical world, often in collaboration with humans. I study how learning-enabled and autonomous systems can be designed, verified, and monitored to operate reliably under uncertainty and interaction, drawing on formal methods, control, and machine learning. My work spans both design-time assurance and runtime mechanisms, with applications to autonomous driving, healthcare AI, and robotics. Key research directions include:

Runtime Safety Architectures for Embodied AI

My research develops runtime safety architectures for embodied AI systems that operate in dynamic physical and human environments, where failures are safety-critical and offline guarantees alone are insufficient. I study how safety constraints can be enforced during execution through mechanisms such as shielding, predictive monitoring under uncertainty, and logic-based runtime enforcement. This work addresses fundamental challenges arising from partial observability, distribution shift, and interaction with other agents, including humans. Applications span autonomous driving, robotics, healthcare, and large-scale cyber-physical systems, where safety must be assured continuously rather than verified once at design time.
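As a minimal illustration of the shielding idea (a sketch under assumed interfaces, not an implementation of any specific system above), the snippet below passes a learned policy's action through only when a one-step model predicts a safe next state; `step_model`, `is_safe`, and `fallback_action` are hypothetical placeholders.

```python
# A minimal sketch of a runtime shield, assuming a hypothetical one-step dynamics
# model and safety predicate: the learned policy's action is kept only if the
# predicted next state stays in the safe set; otherwise a conservative fallback
# action is used instead. None of these names refer to a specific published system.

def shielded_step(state, policy, step_model, is_safe, fallback_action):
    """Return an action whose predicted successor state satisfies the safety check."""
    proposed = policy(state)                  # action suggested by the learned policy
    predicted = step_model(state, proposed)   # one-step prediction of the dynamics
    if is_safe(predicted):
        return proposed                       # predicted to remain safe: pass through
    return fallback_action(state)             # otherwise override with a safe default


# Toy usage: a 1-D vehicle that must not cross position 10.0.
if __name__ == "__main__":
    policy = lambda s: 1.0                    # always accelerate forward
    step_model = lambda s, a: s + a           # trivial dynamics: position += action
    is_safe = lambda s: s < 10.0              # safe set: position below the limit
    fallback = lambda s: 0.0                  # safe default: stop
    print(shielded_step(9.5, policy, step_model, is_safe, fallback))  # prints 0.0
```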
Adaptive and Learning-Based Decision-Making in Safety-Critical Systems

This line of work focuses on learning and adaptation under safety constraints, where data-driven methods must be integrated carefully into systems whose actions can have irreversible consequences. I study how reinforcement learning, reward design, and structured learning architectures can be shaped by formal specifications, uncertainty reasoning, and interpretability requirements. Rather than treating learning as an unconstrained optimization process, this work investigates how learning objectives and representations can be designed to respect safety, structure, and domain knowledge, enabling adaptation without undermining system guarantees.
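As a toy illustration of shaping a learning objective with a safety specification (the predicate, penalty weight, and distance threshold below are assumptions for illustration, not parameters of any system above), a task reward can be penalized whenever a simple "always keep a minimum distance from the obstacle" constraint is violated:

```python
# A toy sketch of specification-shaped reward design: the base task reward is
# penalized whenever an illustrative safety predicate is violated. The predicate,
# weights, and threshold are assumptions made for this example only.

def violates_spec(state, obstacle=(0.0, 0.0), min_dist=1.0):
    """Illustrative safety predicate over a 2-D position state."""
    dx, dy = state[0] - obstacle[0], state[1] - obstacle[1]
    return (dx * dx + dy * dy) ** 0.5 < min_dist

def shaped_reward(state, task_reward, penalty=10.0):
    """Task reward minus a penalty when the safety specification is violated."""
    return task_reward - (penalty if violates_spec(state) else 0.0)

# Example: a state inside the unsafe region loses most of its task reward.
print(shaped_reward((0.5, 0.0), task_reward=1.0))   # -> -9.0
print(shaped_reward((2.0, 0.0), task_reward=1.0))   # ->  1.0
```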
Human–AI Collaboration and Trust in Embodied Systems

Many embodied AI systems interact closely with humans, whose behavior, preferences, and trust evolve over time. My research investigates human–AI collaboration as a safety-critical problem, treating humans as first-class agents in the loop rather than as external users. I study trust-aware planning, explanation and transparency for learning-based policies, and decision-making under uncertain human preferences. A key theme is treating explanations and trust calibration as runtime control signals, enabling safer and more effective collaboration in domains such as autonomous driving, human–robot interaction, and multi-agent systems.
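To make the idea of trust as a runtime control signal concrete, the following sketch uses an assumed exponential-smoothing update and an arbitrary threshold (illustrative assumptions, not drawn from any specific paper): a scalar trust estimate is updated from observed human interventions, and the system switches between autonomous execution and requesting confirmation.

```python
# A simple sketch of trust-aware decision-making: a scalar trust estimate is
# smoothed toward 0 after each human intervention and toward 1 otherwise, and
# the system acts autonomously only when both trust and policy confidence are
# high. The update rule and threshold are illustrative assumptions.

def update_trust(trust, human_intervened, gain=0.1):
    """Move the trust estimate toward 0 after an intervention, toward 1 otherwise."""
    target = 0.0 if human_intervened else 1.0
    return trust + gain * (target - trust)

def choose_mode(trust, policy_confidence, threshold=0.6):
    """Act autonomously only when estimated trust and policy confidence are both high."""
    if trust >= threshold and policy_confidence >= threshold:
        return "autonomous"
    return "request_confirmation"

# Example: repeated interventions lower trust until the system defers to the human.
trust = 0.8
for intervened in [True, True, True]:
    trust = update_trust(trust, intervened)
print(round(trust, 3), choose_mode(trust, policy_confidence=0.9))  # 0.583 request_confirmation
```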
Please check my Google Scholar page for a more complete list of publications.

Sponsors

I gratefully acknowledge ongoing and past support from the National Science Foundation (CCF-2131511, CCF-1942836, CNS-1739333, CRII CNS-1755784), National Institutes of Health, Office of Naval Research, Air Force Office of Scientific Research, Toyota InfoTech Labs, Assuring Autonomy International Programme, 4-VA Collaborative Research Grant, James S. McDonnell Foundation, Center for Innovative Technology, Northrop Grumman Corporation, and UVa SEAS.