Image Super-resolution
using Trillions of Examples

We envision a new
generation of upsampling algorithms that draw upon the many images
publicly available over the Internet. Consider the hypothetical photo
enlargement system depicted above. A user enters an image they wish
to “intelligently upsample”. While we would not expect to find a
higher-resolution version of this exact same image on the Internet, we
are very likely to find many with very similar components captured at
higher resolutions. We are developing a system that searches 50
million online images, collects those with similar but
higher-resolution parts, and uses them to synthesize an enhanced
version of the input. In the example above, such a system would
select images that contain eyes of similar color and shape, similar
fur texture, and similar foliage. This project will study the
appropriate theoretical framework, search algorithms, and data
collection and processing techniques for example-based image
super-resolution at massive Internet scales.
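The retrieval step described above, finding examples whose high-resolution content is consistent with the low-resolution input, can be sketched as a nearest-neighbor patch search. The snippet below is a minimal illustration with a synthetic toy database; the function names and data are our own assumptions, not the project's actual system, which operates at a vastly larger scale.

```python
import numpy as np

def downsample(img, factor=2):
    """Average-pool a grayscale image by an integer factor."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_example(low_patch, examples, factor=2):
    """Return the high-res example patch whose downsampled version
    is closest (in L2 distance) to the low-res input patch."""
    dists = [np.sum((downsample(ex, factor) - low_patch) ** 2)
             for ex in examples]
    return examples[int(np.argmin(dists))]

# Toy database of 4x4 high-res "example" patches (hypothetical data).
rng = np.random.default_rng(0)
examples = [rng.random((4, 4)) for _ in range(100)]

# A low-res 2x2 query: the downsampled version of one known example.
query = downsample(examples[42])
match = best_example(query, examples)
print(np.allclose(match, examples[42]))  # the known example is retrieved
```

A real system would match many overlapping patches per image, enforce consistency between neighboring patches, and index the example database for sub-linear search rather than scanning it exhaustively.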
This website will serve as a repository for publications,
datasets, and software as they become available.
This project is partially funded by the National Science Foundation
(IIS-084416) through its Cluster Exploratory (CluE) program and is
supported by a computer cluster maintained by Google and IBM.