Computer Science Colloquia
Tuesday, October 1, 2013
David Luebke
Guest Lecture in Computational Photography Class
Location: Rice Hall, Room 340
12:30 - 1:45
Near-Eye Light Field Displays
ABSTRACT
Public interest in virtual reality and augmented reality is at an
all-time high, fueled by hobbyist products such as the Oculus Rift and
Google Glass, and the exciting possibilities they represent. Near-eye
display, which projects images directly into a viewer's eye, is the key
technology challenge for such products. Why do such displays always have
either bulky optics that hang off the face, such as the Rift, or very
limited field of view, such as Glass? They confront a fundamental
problem: the unaided human eye cannot accommodate, or focus, on objects
placed in close proximity. The "Holy Grail" of near-eye display would be
a display as thin, as light, and covering as wide a field of view as a
pair of sunglasses. It should be capable of presenting different parts
of the scene at different focal depths, solving the
accommodation-convergence depth cue conflict that plagues virtual
reality, 3D movies, and 3D TV. For extra credit, it should be able to
accommodate a user's eyeglass prescription, replacing rather than going
on top of the user's spectacles. We have built just such a display -
with a couple of big caveats.
I will describe a new light-field-based approach to near-eye display
that allows for dramatically thinner and lighter head-mounted displays
capable of depicting accurate accommodation, convergence, and
binocular-disparity depth cues. Such near-eye light field displays
depict sharp images from out-of-focus display elements by synthesizing
light fields that correspond to virtual scenes located within the
viewer's natural accommodation range. Building on related integral
imaging displays and microlens-based light-field cameras, we optimize
performance in the context of near-eye viewing. Near-eye light field
displays support continuous accommodation of the eye throughout a finite
depth of field; as a result, binocular configurations provide a means to
address the accommodation-convergence conflict that occurs with existing
stereoscopic displays. We have built film-based static image prototypes
(which I will hand around), a binocular OLED-based prototype
head-mounted display (which I will show videos of), and a GPU-accelerated
stereoscopic light field renderer (which raises many interesting
computer graphics research questions).
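The integral-imaging idea above can be illustrated with a minimal sketch. The parameters and function below are hypothetical (not from the talk) and treat each microlens as an ideal pinhole: for a virtual point within the viewer's accommodation range, similar triangles give the microdisplay pixel behind each lens whose ray appears to originate at that point.

```python
# Minimal 1-D sketch of integral-imaging geometry for a near-eye light
# field display. Assumptions (not from the talk): each microlens acts as
# a pinhole, and distances are measured in millimeters.

def pixel_behind_lens(lens_x, gap, point_x, point_z):
    """Lateral position on the microdisplay, behind a microlens centered
    at lens_x with microdisplay-to-lens gap `gap`, such that the emitted
    ray appears to come from a virtual point at lateral position point_x,
    a depth point_z in front of the lens plane."""
    # Similar triangles along the ray through the lens center.
    return lens_x + gap * (lens_x - point_x) / point_z

if __name__ == "__main__":
    gap = 3.0    # mm, microdisplay-to-microlens spacing (assumed)
    pitch = 1.0  # mm, microlens pitch (assumed)
    lenses = [i * pitch for i in range(-2, 3)]
    # Virtual point 250 mm in front of the array, on the optical axis:
    # each lens gets a slightly shifted pixel, so the rays converge there.
    for x in lenses:
        p = pixel_behind_lens(x, gap, point_x=0.0, point_z=250.0)
        print(f"lens at {x:+.1f} mm -> pixel at {p:+.5f} mm")
```

Lighting the resulting set of pixels, one per lens, synthesizes a light field whose rays all appear to diverge from the virtual point, which is how out-of-focus display elements can still produce a sharp, correctly focusable image.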
Bio:
David Luebke helped found NVIDIA Research in 2006 after eight years
teaching computer science on the faculty of the University of Virginia.
David is currently Senior Director of Research at NVIDIA, where he
continues the research on computer graphics and GPU architecture that
led to his pioneering work on GPU computing. His honors include the
NVIDIA Distinguished Inventor award, the NSF CAREER and DOE Early Career
PI awards, and the ACM Symposium on Interactive 3D Graphics "Test of
Time Award". Dr. Luebke has co-authored a book, a SIGGRAPH Electronic
Theater piece, a major museum exhibit visited by over 110,000 people,
and dozens of papers, articles, chapters, and patents on computer
graphics and GPU computing.