Abstract

We propose an inference procedure for deep convolutional neural networks (CNNs) when partial evidence is available. Our method is a general feedback-based propagation approach (feedback-prop) that improves prediction accuracy for an arbitrary set of unknown target labels when the values of a non-overlapping, arbitrary set of target labels are known. We show that existing models trained in a multi-label or multi-task setting can readily take advantage of feedback-prop without any retraining or fine-tuning. The feedback-prop inference procedure is general, simple, and reliable, and works on several challenging visual recognition tasks. We present two variants of feedback-prop based on layer-wise and residual iterative updates, and experiment with several multi-task models, showing that feedback-prop is effective in all of them. Our results reveal a previously unreported but interesting dynamic property of deep CNNs, and our technical approach takes advantage of this property for inference under partial evidence in general visual recognition tasks.

Model

Figure: Overview of our feedback-prop iterative inference procedure, consisting of three basic steps: (a) a full forward pass to predict initial scores for all labels, (b) a truncated backward pass to update intermediate activations based on the partial evidence (known labels), and (c) a truncated forward pass to update the scores for the unknown labels.
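The sketch below illustrates these three steps for the layer-wise variant in PyTorch. It is a minimal sketch, not the authors' released implementation: the split of the network into `backbone` and `head`, the binary cross-entropy loss, and the hyper-parameters (`num_iters`, `lr`) are illustrative assumptions.

```python
# Minimal sketch of layer-wise feedback-prop (LF) in PyTorch.
# `backbone`, `head`, and the hyper-parameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def feedback_prop(backbone, head, image, known_idx, known_targets,
                  num_iters=10, lr=0.1):
    """Refine scores for unknown labels given partial evidence (known labels).

    backbone: layers up to a chosen intermediate activation.
    head:     remaining layers mapping that activation to label logits.
    known_idx / known_targets: indices and 0/1 values of the observed labels.
    """
    # Keep the network weights fixed; only the activation is updated.
    for p in head.parameters():
        p.requires_grad_(False)

    # (a) Full forward pass to obtain the initial intermediate activation.
    with torch.no_grad():
        activation = backbone(image)

    # Treat the intermediate activation as a free variable.
    activation = activation.clone().requires_grad_(True)
    optimizer = torch.optim.SGD([activation], lr=lr)

    for _ in range(num_iters):
        optimizer.zero_grad()
        logits = head(activation)                    # forward through the head
        loss = F.binary_cross_entropy_with_logits(   # loss only on known labels
            logits[:, known_idx], known_targets)
        loss.backward()                              # (b) truncated backward pass
        optimizer.step()                             # update the activation only

    # (c) Final truncated forward pass yields refined scores for all labels,
    # including the unknown ones.
    with torch.no_grad():
        return torch.sigmoid(head(activation))
```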

Example Results

Figure: Qualitative examples of visual concept prediction on News Images. The second row shows results of a multi-label prediction model (no feedback-prop); the third row shows results obtained with layer-wise feedback-prop (LF), where words from the surrounding news text (shown in blue) are used as partial evidence. Predictions that are also among the true labels are highlighted in bold. Although the news text contains many words that seem only marginally relevant, feedback-prop still leverages them effectively to improve predictions: the surrounding text provides high-level feedback that enables predictions that would otherwise be hard.
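As a hypothetical usage example for this setting, words found in the surrounding article text can be mapped to label indices and passed as the known labels to the sketch shown above; `vocab`, `backbone`, `head`, and `image` are assumed to exist and are not part of the paper's released code.

```python
# Hypothetical usage: words from the surrounding news text act as partial
# evidence. `vocab` maps words to label indices; all names here are assumptions.
import torch

known_words = ["senate", "vote", "washington"]
known_idx = torch.tensor([vocab[w] for w in known_words])
known_targets = torch.ones(1, len(known_words))  # observed words are positives
refined_scores = feedback_prop(backbone, head, image, known_idx, known_targets)
```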

Team

Citation

@InProceedings{Wang_2018_CVPR,
  title = {Feedback-Prop: Convolutional Neural Network Inference Under Partial Evidence},
  author = {Wang, Tianlu and Yamaguchi, Kota and Ordonez, Vicente},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2018}
}