My Research

My research focuses on designing systems of representation for autonomous mobile robots. I have created a task-based design methodology that allows the designer to develop the agent's internal representation of its environment based on the agent's goals and capabilities.

Primarily, I am interested in representation that is useful for the "action-oriented" part of an agent's architecture, i.e. not the planner. My work begins with the assumption that the world is dynamic, so robots must have a fairly direct coupling between perception and action if they are to be effective. This means they cannot afford to do much high-level inferencing (planning) as part of their low-level control, and this affects the kinds of representation structures that they can use efficiently and effectively. By analyzing the tasks that the robot must perform, the designer can answer three basic questions:

  • What to represent?
  • How to structure that representation?
  • How to keep that representation consistent with the changing state of a dynamic environment?

I advocate the use of representation in all layers of an agent architecture. Although some (like Brooks) have argued that intelligent behavior can be achieved without representation, I believe this to be somewhat of a straw man. The question is how to balance the informational needs of any particular component of the agent architecture (what knowledge needs to be captured in the representation) against the cost of keeping the representation consistent with the state of a dynamic world. A small sketch of this trade-off appears below.
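
To make that trade-off concrete, here is a minimal sketch of the idea; it is my illustration, not code from the methodology, and the MarkerStore name, half-life policy, and thresholds are assumptions made for the example. Entries store only task-relevant state, and their confidence decays over time, so the cost of consistency is paid lazily: an entry is re-perceived only when a task needs it and its confidence has dropped too low.

    import time

    class MarkerStore:
        """Toy task-based representation: each entry stores only what the
        task needs (here, a position) plus a confidence that decays over
        time, so stale knowledge is cheap to detect and refresh."""

        def __init__(self, half_life=2.0):
            self.half_life = half_life   # seconds until confidence halves
            self.entries = {}            # name -> (position, timestamp)

        def update(self, name, position):
            """Called by perception: refresh an entry to full confidence."""
            self.entries[name] = (position, time.time())

        def lookup(self, name, min_confidence=0.5):
            """Called by action selection: return a position only if its
            decayed confidence still meets the task's requirement."""
            if name not in self.entries:
                return None
            position, t = self.entries[name]
            confidence = 0.5 ** ((time.time() - t) / self.half_life)
            return position if confidence >= min_confidence else None

    # A task needing fresher data simply asks with a higher threshold;
    # a None result is the signal to re-perceive before acting.
    store = MarkerStore(half_life=2.0)
    store.update("doorway", (3.2, 1.4))
    print(store.lookup("doorway", min_confidence=0.9))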

    Design Methodology

    Robots Designed Using the Methodology
    Bruce
    Spot
    Marcus

    My Publications

    Book chapter

    D. Kortenkamp, E. Huber, and G. Wasson. 1998. Integrating a Behavior-based Approach to Active Stereo Vision with an Intelligent Control Architecture for Mobile Robots. In Hybrid Information Processing in Adaptive Autonomous Vehicles, eds. Gerhard K. Kraetzschmar and Gunther Palm. Springer-Verlag.

    Journal

    G. Wasson, D. Kortenkamp, and E. Huber. 1999. Integrating Active Perception with an Autonomous Robot Architecture. Journal of Robotics and Autonomous Systems 29: 175-186.

    F. Brill, G. Wasson, G. Ferrer, and W. Martin. 1998. The Effective Field of View Paradigm: Adding Representation to a Reactive System. Journal of Engineering Applications of Artificial Intelligence 11 (Special Issue on Machine Vision for Intelligent Vehicles and Autonomous Robots): 189-201.

    Conference

    G. Wasson, and W. Martin. 1998. Multi-tiered Representation for Autonomous Agents. Proceedings of SPIE, Mobile Robots XIII and Intelligent Transportation Systems. Volume 3523: 4-12.

    G. Wasson, D. Kortenkamp, and E. Huber. 1998. Integrating Active Perception with an Autonomous Robot Architecture. Autonomous Agents 98: 325-331.

    G. Wasson, G. Ferrer, and W. Martin. 1997. Perception, Action and Effective Representation in Multi-Layered Systems. GI/VI-97: 73-80.

    G. Wasson, G. Ferrer, and W. Martin. 1997. Systems for Perception, Action and Effective Representation. FLAIRS-97 - Special Track on Real-Time Planning and Reacting: 352-356.

    G. Wasson, G. Ferrer, and W. Martin. 1997. Hide and Seek: Effective Use of Memory in Perception/Action Systems. Autonomous Agents 97: 492-493.

    Workshop

    G. Wasson and W. Martin. 1999. Design of Autonomous Agent Representation. IJCAI Workshop on Adaptive Spatial Representation of Dynamic Environments. To appear.

    G. Wasson, E. Huber, and D. Kortenkamp. 1998. A Behavior Based, Visual Architecture for Autonomous Robots. CVPR 98 Workshop on Perception for Mobile Agents: 89-94.

    G. Wasson, and W. Martin. 1996. Integration and Action in Perception/Action Systems with Access to Non-local Space Information. AAAI-96 workshop on Planning, Action and Control: Bridging the Gap.

    My Masters Research

    Acoustic signals provide a rich source of information that can be exploited to reduce the amount of data that must be processed by vision systems. My research has been in using acoustics to control visual attention. I have built a system that tracks multiple targets of interest in our laboratory using both audio and visual information. The system maintains a set of deictic markers for objects (generally humans) and attempts to update their positions using data from multiple sensors.
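
    The marker-update step can be sketched as follows; this is an illustration under assumed sensor noise figures, not the actual system, and the DeicticMarker class and its variances are hypothetical. Each sensor reading nudges the marker's estimate with a weight proportional to how much that reading is trusted (a one-dimensional Kalman-style update).

        class DeicticMarker:
            """Toy marker: tracks one target's bearing (in degrees) by
            fusing audio and visual estimates with inverse-variance
            weighting."""

            def __init__(self, bearing, variance):
                self.bearing = bearing      # current estimate (degrees)
                self.variance = variance    # uncertainty of that estimate

            def fuse(self, measurement, meas_variance):
                # The less noisy source gets proportionally more weight.
                w = self.variance / (self.variance + meas_variance)
                self.bearing += w * (measurement - self.bearing)
                self.variance *= (1.0 - w)

        # One marker per tracked person; vision assumed more precise than audio.
        marker = DeicticMarker(bearing=0.0, variance=100.0)
        marker.fuse(measurement=31.0, meas_variance=25.0)   # audio bearing estimate
        marker.fuse(measurement=28.5, meas_variance=4.0)    # visual bearing estimate
        print(f"fused bearing: {marker.bearing:.1f} degrees")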

    Sound source direction is estimated from the difference in arrival time of a source's signal at each of a pair of microphones. This time difference, combined with the microphone spacing and the speed of sound, yields the source's angular position. The time delay itself is estimated using phase correlation. My thesis on the subject can be accessed here.
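
    As a rough illustration of that estimation step (not the thesis implementation; the sample rate, microphone spacing, and speed of sound below are assumed values), phase correlation normalizes the cross-spectrum to unit magnitude so that only phase, i.e. timing, information remains; the peak of the resulting correlation gives the delay, and the delay gives the bearing.

        import numpy as np

        def phase_correlation_delay(x, y, fs):
            """Estimate the delay of y relative to x (in seconds) via
            phase correlation (GCC-PHAT)."""
            n = 2 * max(len(x), len(y))
            X = np.fft.rfft(x, n)
            Y = np.fft.rfft(y, n)
            cross = np.conj(X) * Y
            cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
            corr = np.fft.irfft(cross, n)
            corr = np.concatenate((corr[-n // 2:], corr[:n // 2]))  # center zero lag
            return (np.argmax(corr) - n // 2) / fs

        def bearing_from_delay(tau, mic_spacing, c=343.0):
            """Angle of arrival from the time difference: sin(theta) = c*tau/d."""
            return np.degrees(np.arcsin(np.clip(c * tau / mic_spacing, -1.0, 1.0)))

        # Synthetic check: the same noise burst, arriving 5 samples later
        # at the second microphone.
        fs, d = 16000, 0.30                       # sample rate (Hz), mic spacing (m)
        x = np.random.randn(1024)
        y = np.roll(x, 5)                         # y lags x by 5 samples
        tau = phase_correlation_delay(x, y, fs)
        print(f"delay {tau * 1e3:.3f} ms -> bearing "
              f"{bearing_from_delay(tau, d):.1f} degrees")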

    wasson@virginia.edu