Active Research Projects
This frontier project establishes the Center for Trustworthy Machine
Learning (CTML), a large-scale, multi-institution, multi-disciplinary
effort whose goal is to develop scientific understanding of the risks
inherent to machine learning, and to develop the tools, metrics, and
methods to manage and mitigate them.
Econometrically Inferring and Using Individual Privacy Preferences
with Denis Nekipelov (UVA Economics)
This project combines research on mechanism design and econometrics to
provide a new perspective on privacy. Our goal is to develop methods
that use ideas from econometrics to reveal concrete privacy preferences
for individuals and aggregate distributions, and connect those
preferences to formal privacy models, including differential privacy.
Privacy-preserving machine learning combining secure multi-party
computation with differential privacy and other privacy techniques.
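As a minimal illustration of one building block mentioned above, here is the Laplace mechanism for differential privacy applied to a counting query (a sketch only; the function names are illustrative and not drawn from any project code):

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) by inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of epsilon add more noise (stronger privacy); connecting a user's revealed privacy preference to an appropriate epsilon is one way preferences can map onto formal privacy models.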
Previous Research Projects
These projects are no longer active, but current projects build on many
of the ideas and tools developed by these projects.
Adversarial Machine Learning
An evolutionary framework based on genetic programming for automatically
finding variants that evade detection by machine learning-based malware classifiers.
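The evolutionary search can be sketched abstractly as a mutate-and-select loop against a detector's confidence score (a toy sketch, assuming placeholder `mutate` and `score` operators; real systems use domain-specific mutation operators and a real classifier):

```python
import random

def evolve_evasive_variant(sample, mutate, score, generations=50, pop_size=20):
    """Toy evolutionary search: repeatedly mutate a sample and keep the
    variants the detector is least confident about, stopping when one
    falls below the detection threshold (0.5 here)."""
    population = [sample]
    for _ in range(generations):
        # Generate mutated offspring from the current population.
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        population.extend(offspring)
        # Selection: keep the lowest-scoring (least detectable) variants.
        population.sort(key=score)
        population = population[:pop_size]
        if score(population[0]) < 0.5:
            return population[0]
    return None  # no evasive variant found within the budget
```

In a real setting a candidate must also preserve the sample's malicious behavior, which is the hard part the toy omits.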
Web/Mobile Application Security
An integrated suite of techniques for protecting
applications and their data from hostile environments.
Quantifying the risks of side-channel leaks in web
applications using a dynamic, black-box approach.
Jonathan Burket, Austin DeVinney, Casey Mihaloew (part of AFOSR MURI)
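The black-box idea above can be illustrated with the simplest size channel: if distinct secret inputs produce distinguishable response sizes, an eavesdropper can identify them without decrypting anything (a simplified sketch; the data shape and function name are hypothetical):

```python
def count_size_identified(observations):
    """Quantify a size side channel.

    `observations` maps each secret input to the set of response
    sizes observed for it. Returns how many secrets an observer can
    uniquely identify from response size alone.
    """
    size_to_secrets = {}
    for secret, sizes in observations.items():
        for size in sizes:
            size_to_secrets.setdefault(size, set()).add(secret)
    uniquely_identified = {
        secret
        for secret, sizes in observations.items()
        if all(len(size_to_secrets[s]) == 1 for s in sizes)
    }
    return len(uniquely_identified)
```

A dynamic, black-box analysis gathers such observations by driving the application with different secrets and recording the traffic it generates.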
A secure web application framework that provides rich data policies for Ruby on Rails.
Mechanisms that allow clients to enforce meaningful security policies on
untrusted content in mashup web pages.
Protecting privacy for social network applications using privacy-by-proxy.
Security through Diversity
Designing for Measurable Security
with Sal Stolfo and Steve Bellovin (Columbia University) (Air Force Office of Scientific Research)
Protect systems from sophisticated and motivated adversaries by
automatically and continuously changing the attack surface of a running system.
Using structured artificial diversity
to provide high security assurances against large classes of attacks.
with Jack Davidson, John Knight, and Anh Nguyen-Tuong (DARPA)
Using automatically generated diversity at
various levels of abstraction to protect computer systems.
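One classic form of automated diversity is instruction-set randomization, which can be sketched with a toy XOR encoding (a sketch only, assuming a per-process key; real systems perform the encoding at load time and decode inside the processor or a binary translator):

```python
def isr_encode(program_bytes, key):
    """Toy instruction-set randomization: XOR-encode a program with a
    per-process key. Injected code that was not encoded with the same
    key decodes to garbage instead of valid instructions."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(program_bytes))

def isr_decode(encoded, key):
    # XOR is its own inverse, so decoding reuses the encoder.
    return isr_encode(encoded, key)
```

The security argument is probabilistic: without the key, an attacker's injected payload executes as effectively random bytes.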
New approaches to cryptography, protocol, and system
design to provide adequate security on low-power devices.
How computing in the physical world impacts security.
Getting sensible behavior from collections of unreliable, unorganized components.
Techniques for automatically inferring temporal properties of
real world software using dynamic analysis.
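A minimal version of this kind of inference checks candidate ordering properties against observed execution traces (a simplified sketch; real inference engines handle much richer property templates than "A always precedes B"):

```python
from itertools import product

def infer_always_precedes(traces):
    """Infer candidate temporal properties of the form
    "A always precedes B" from a set of observed event traces."""
    events = {e for trace in traces for e in trace}
    properties = set()
    for a, b in product(events, events):
        if a == b:
            continue
        # A must occur before the first B in every trace containing B.
        holds = all(
            a in trace[:trace.index(b)]
            for trace in traces
            if b in trace
        )
        # Only report pairs actually exercised by at least one trace.
        if holds and any(b in trace for trace in traces):
            properties.add((a, b))
    return properties
```

As with all dynamic inference, the output is only as good as the traces: properties that happen to hold on the observed runs may not hold in general.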
Protect vulnerable programs by storing security-critical data in a
separate protected store.
Reducing the cost and improving the scalability of program analysis using
lightweight static analysis (Splint).
Using the disk processor to improve virus detection and response by
recognizing viruses through their disk-level activity.