My research focuses on assuring the safety, security, and reliability of cyber-physical systems (CPS): smart systems built from co-engineered, interacting networks of physical and computational components. CPS power the coming fourth industrial revolution by providing new functionality that improves quality of life and by enabling technological advances in critical areas such as healthcare, transportation, energy, and manufacturing. My research mission is to develop theoretical methods and practical tools that support the modeling, design, and analysis of safe, secure, and reliable CPS, while accounting for the uncertainties and imperfections of human interaction. My work can be roughly grouped into the following themes.
Probabilistic model checking is a powerful formal verification method for analyzing quantitative properties (e.g., probabilities, timeliness, resource usage) of systems that exhibit random or probabilistic behavior. Practical tools such as the PRISM model checker have enabled the use of probabilistic model checking to analyze systems from many different domains, including communication and multimedia protocols, randomized distributed algorithms, security protocols, and biological systems.
Scalability, however, is a key challenge for probabilistic model checking, due to the state explosion problem. My doctoral thesis work addressed this challenge by developing, for the first time, fully automated compositional verification techniques for probabilistic systems. The main insight was to adopt assume-guarantee reasoning, in which individual system components are verified under assumptions about their environment, and overall system correctness is then established by combining the component-level verification results using proof rules. An important question is how to devise suitable assumptions: the traditional approach requires non-trivial manual effort to derive them. To enable fully automated probabilistic compositional verification, I developed methods that generate probabilistic assumptions automatically via active learning. I proposed three probabilistic compositional verification frameworks, which differ not only in the form of their assume-guarantee proof rules but also in the types of models and assumptions they use, targeting both asynchronous and synchronous probabilistic systems.
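To make the kind of quantitative property involved concrete, the sketch below computes a canonical probabilistic model checking query, the probability of eventually reaching a target state in a discrete-time Markov chain, by fixed-point iteration. It is a toy illustration with invented states and probabilities, not the PRISM engine or the compositional frameworks described above.

```python
# Toy sketch: probability of eventually reaching a target state in a
# small discrete-time Markov chain, via fixed-point iteration.
# All state names and probabilities are invented for illustration.
chain = {
    "start":   {"retry": 0.5, "success": 0.4, "fail": 0.1},
    "retry":   {"start": 0.3, "success": 0.5, "fail": 0.2},
    "success": {"success": 1.0},   # absorbing target state
    "fail":    {"fail": 1.0},      # absorbing failure state
}
target = {"success"}

def reach_prob(chain, target, iters=1000):
    """Iterate x(s) = sum_t P(s,t) * x(t), with x fixed to 1 on target."""
    x = {s: (1.0 if s in target else 0.0) for s in chain}
    for _ in range(iters):
        x = {s: (1.0 if s in target else
                 sum(p * x[t] for t, p in chain[s].items()))
             for s in chain}
    return x

probs = reach_prob(chain, target)
```

For this chain the iteration converges to the unique solution of the underlying linear equations: the probability of reaching "success" from "start" is 13/17, roughly 0.765. A state explosion arises when such chains are composed from many components, which is exactly what the compositional frameworks above target.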
Medical CPS are life-critical, context-aware, networked systems of medical devices. They are increasingly used to improve treatment outcomes, enhance patient safety, as well as reduce caregiver burden. To achieve safe and effective medical CPS, many challenges need to be addressed. I have worked on the following projects.
In collaboration with clinicians at Penn Medicine and other colleagues, I studied the behavior of Type 1 Diabetes patients (e.g., how they interact with insulin pumps). I developed methods to build data-driven behavior models from a clinical dataset collected from real patients; these models were demonstrated to be useful for boosting the effectiveness of patient education and peer support (see more details here). In another recent work, I proposed a general framework for building data-driven behavior models for human operators of medical CPS. Such models could assist in developing safer, more robust, and even personalized medical devices.
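One simple form such a data-driven behavior model can take is a first-order Markov model estimated from logged sequences of operator actions. The sketch below shows that estimation step only; the action names and sessions are entirely invented, and the actual clinical dataset and modeling framework are not reproduced here.

```python
from collections import Counter, defaultdict

# Hypothetical interaction logs: each session is a sequence of operator
# actions. These names are invented for illustration.
sessions = [
    ["check_glucose", "bolus", "check_glucose"],
    ["check_glucose", "suspend", "resume", "bolus"],
    ["check_glucose", "bolus", "bolus"],
]

def fit_markov(sessions):
    """Estimate P(next action | current action) from observed sessions."""
    counts = defaultdict(Counter)
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    # Normalize counts into per-state transition probabilities.
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

model = fit_markov(sessions)
```

A model of this shape can then be composed with a device model for analysis, e.g., to check how likely a risky action sequence is under typical operator behavior.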
Most medical devices are safety-critical real-time embedded systems that must meet stringent timing requirements. In model-based development, the timed behavior of the embedded software is typically modeled independently of platform-specific timing semantics (e.g., delays due to scheduling or I/O handling). Although this approach helps reduce the complexity of the model, it leads to timing gaps between the model and its implementation, making it challenging to guarantee that the implementation preserves timing requirements verified on the model. My colleagues and I tackled this challenge by developing a platform-specific timing verification framework and a method to produce platform-specific code from platform-independent models.
Medical devices are increasingly being designed with network interfaces. These interfaces are currently used to stream information from the medical device (e.g., vital sign readings) into a health IT system (e.g., Electronic Health Records). In the future, approaches such as those promoted by the Medical Device Plug-and-Play (MDPnP) program would enable caregivers to assemble, at the bedside, a collection of interoperable medical devices, perhaps made by different manufacturers, to provide therapy for a specific clinical scenario. Providing safety assurance for such assembled systems is challenging: since safety is an emergent property, how can we assess a priori the safety of an on-demand plug-and-play medical system? My colleagues and I proposed safety assurance patterns and model-based approaches (see also here) to address this challenge.
Advances in automation have the potential to reduce the workload required for human operators and/or enable a cooperative partnership between humans and autonomous systems. Examples include “self-driving” cars that only require drivers’ occasional supervisory control, and robots that work with or beside people to leverage the different strengths of humans and robots. Indeed, “there is a need for research focused on human interaction with systems that operate in the physical world, particularly around issues of safety, trust, and predictability of response”, as noted in a PCAST report. In collaboration with researchers at the Air Force Research Lab and other colleagues, I have worked on the following problems.
In current practice, human operators and autonomy share control of UAV missions. Uncertainties in human interaction pose challenges for assuring correct UAV operation, and human factors (e.g., workload, proficiency, and fatigue) may affect mission performance. I proposed an approach to synthesize human-in-the-loop control protocols for UAVs by drawing insights from human factors research, building abstractions of operator-UAV interaction, and applying quantitative synthesis techniques. Simulated experiments showed that my approach could enable the automated synthesis of operator-dependent optimal control protocols, taking into account operator performance characteristics.
Robotic systems often need to meet rigorous safety and reliability requirements. Their increased autonomous capabilities, however, can lead to greater system complexity and a corresponding decrease in system transparency, making it more difficult to identify errors in robot planning. I proposed the notion of a structured probabilistic counterexample, which captures requirement violations resulting from complex probabilistic robot behavior and provides useful, meaningful information that helps human operators diagnose planning inconsistencies. I also developed methods for the automated generation of such counterexamples.
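A basic building block of probabilistic counterexample generation is finding the most probable execution leading to an unsafe state. The sketch below does this for a small Markov chain by running a shortest-path search over edge weights of -log(probability); it illustrates only this building block, not the structured-counterexample method itself, and the chain is invented.

```python
import heapq
import math

def most_probable_path(chain, start, bad):
    """Dijkstra-style search: shortest path under -log(p) weights is
    the most probable path from start to any state in bad."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, s, path = heapq.heappop(pq)
        if s in bad:
            return math.exp(-cost), path
        if s in seen:
            continue
        seen.add(s)
        for t, p in chain.get(s, {}).items():
            if p > 0:
                heapq.heappush(pq, (cost - math.log(p), t, path + [t]))
    return 0.0, None

# Invented chain: state 3 is the unsafe state.
chain = {0: {1: 0.7, 2: 0.3}, 1: {3: 0.5, 0: 0.5}, 2: {3: 0.9, 2: 0.1}}
prob, path = most_probable_path(chain, 0, bad={3})
```

Here the path 0 -> 1 -> 3 (probability 0.7 * 0.5 = 0.35) beats 0 -> 2 -> 3 (0.27). Enumerating and structuring sets of such paths, rather than a single one, is what makes a counterexample diagnosable for a human operator.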
Recent news has reported high-profile attacks against CPS (e.g., automobiles and medical devices). CPS are vulnerable to malicious attacks beyond standard cyber attacks because of the interaction between information technology and the physical world. For example, non-invasive attacks may occur when the physical environment is compromised to allow the injection of malicious signals. Conventional information security approaches have limited utility in such cases, so new approaches are needed for CPS-specific security challenges. First, CPS are typically safety-critical, making it necessary to ensure that the system remains safe even under attack. In addition, the correct functioning of many CPS requires collecting and processing personal data, so it is important to ensure the privacy of that data. Furthermore, since many CPS involve humans in the loop, human behavior (whether deliberate or inadvertent) can threaten the security of CPS and must be considered as part of more robust cybersecurity solutions. I have two ongoing projects related to CPS security and human factors. My long-term vision is to develop holistic solutions that facilitate an early integration of human factors into the development of security- and privacy-aware CPS.