Internet-Scale Image Processing
The staggering amount of visual data (images, videos, etc.) available over the Internet suggests radically new "data-driven" approaches to longstanding problems in image processing. We are studying several problems in this context, including single-image super-resolution. The goal of a super-resolution algorithm is to compute a high-resolution version of a low-resolution digital image. We envision this process leveraging examples harvested from the Internet that capture the relationship between image data at neighboring spatial scales. However, scaling existing techniques to this amount of data requires rethinking their computational structure and developing distributed algorithms that can take advantage of a cluster of computers.

This research is supported by the National Science Foundation through Cluster Exploratory (CluE) grant IIS-0844416 ("Image Super-Resolution using Trillions of Examples") and by computing resources made available by Google and IBM.
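To make the example-based idea concrete, the following is a minimal sketch in Python of one common formulation: harvest (low-resolution patch, high-resolution patch) pairs from training images, then reconstruct a high-resolution output by looking up each patch of the input in that database. The patch size, scale factor, toy in-memory database, and brute-force nearest-neighbor search are all illustrative assumptions, not the project's method; the actual system operates on trillions of patches distributed across a cluster.

    # A minimal sketch of example-based single-image super-resolution.
    # Assumes a toy in-memory patch database; patch size, scale factor, and
    # brute-force nearest-neighbor search are illustrative choices only.
    import numpy as np

    PATCH = 5   # low-resolution patch width/height (assumed)
    SCALE = 2   # upsampling factor (assumed)

    def extract_pairs(hi_img):
        """Harvest (low-res patch, high-res patch) pairs from one training image."""
        lo_img = hi_img[::SCALE, ::SCALE]  # crude downsample stands in for proper filtering
        pairs = []
        for y in range(lo_img.shape[0] - PATCH + 1):
            for x in range(lo_img.shape[1] - PATCH + 1):
                lo = lo_img[y:y+PATCH, x:x+PATCH]
                hi = hi_img[y*SCALE:(y+PATCH)*SCALE, x*SCALE:(x+PATCH)*SCALE]
                pairs.append((lo.ravel(), hi))
        return pairs

    def super_resolve(lo_img, database):
        """Replace each low-res patch with the high-res half of its nearest example."""
        keys = np.stack([lo for lo, _ in database])  # (N, PATCH*PATCH) query table
        out = np.zeros((lo_img.shape[0]*SCALE, lo_img.shape[1]*SCALE))
        weight = np.zeros_like(out)
        for y in range(lo_img.shape[0] - PATCH + 1):
            for x in range(lo_img.shape[1] - PATCH + 1):
                q = lo_img[y:y+PATCH, x:x+PATCH].ravel()
                best = np.argmin(((keys - q) ** 2).sum(axis=1))  # brute-force NN search
                hi = database[best][1]
                out[y*SCALE:(y+PATCH)*SCALE, x*SCALE:(x+PATCH)*SCALE] += hi
                weight[y*SCALE:(y+PATCH)*SCALE, x*SCALE:(x+PATCH)*SCALE] += 1.0
        return out / np.maximum(weight, 1.0)  # average overlapping patch estimates

    # Toy usage: build a database from one "image", then upsample its downsampled version.
    rng = np.random.default_rng(0)
    train = rng.random((32, 32))
    db = extract_pairs(train)
    result = super_resolve(train[::SCALE, ::SCALE], db)

At Internet scale the brute-force search above is the bottleneck, which is why the project's distributed formulation of the patch database and its nearest-neighbor queries across a cluster is the central computational challenge.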
Recent Publications
- Building and Using a Database of One Trillion Natural Image Patches, Sean Arietta and Jason Lawrence, IEEE Computer Graphics and Applications, 31(1), January/February 2011.