Photometric Stereo using Internet Images

Boxin Shi1,2   Kenji Inose3   Yasuyuki Matsushita4    Ping Tan5   Sai-Kit Yeung1   Katsushi Ikeuchi3

1. Singapore University of Technology and Design   2. MIT Media Lab   3. The University of Tokyo
4. Microsoft Research Asia   5. Simon Fraser University


Pipeline of our method, which consists of six main steps. We take unorganized Internet images as input, first generating a shape (normal) prior, which is then used to produce a high-quality surface reconstruction with fine details.


Photometric stereo using unorganized Internet images is highly challenging because the input images are captured under unknown, general illumination with uncontrolled cameras. We propose a simple yet effective approach to this difficult problem that makes use of a coarse shape prior. The shape prior, obtained from multi-view stereo, is useful in two ways: it resolves the shape-light ambiguity in uncalibrated photometric stereo, and it guides the estimated normals toward a high-quality 3D surface. By assuming the surface albedo is not highly contrasted, we also derive a novel linear approximation of the nonlinear camera responses within our normal estimation algorithm. We evaluate our method on synthetic data and demonstrate the surface improvement over multi-view stereo results on real data.
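To illustrate the first use of the shape prior, here is a minimal synthetic sketch (not the authors' code) of how a coarse normal prior can resolve the linear shape-light ambiguity of uncalibrated photometric stereo: the pseudo-normals recovered by factorization differ from the true normals by an unknown invertible 3x3 transform, which a least-squares fit against the prior normals can recover. All data and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth unit surface normals (one per pixel) -- synthetic data.
n_true = rng.normal(size=(500, 3))
n_true /= np.linalg.norm(n_true, axis=1, keepdims=True)

# Uncalibrated photometric stereo recovers pseudo-normals B that differ
# from the true normals by an unknown invertible 3x3 transform G.
G = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)
B = n_true @ G

# Coarse prior normals, e.g. from multi-view stereo (noisy but unbiased).
n_prior = n_true + 0.1 * rng.normal(size=n_true.shape)
n_prior /= np.linalg.norm(n_prior, axis=1, keepdims=True)

# Resolve the ambiguity: solve min_A ||B A - n_prior||_F by linear
# least squares, so that A approximates G^{-1}.
A, *_ = np.linalg.lstsq(B, n_prior, rcond=None)

# Disambiguated normals: apply A and renormalize each row.
n_est = B @ A
n_est /= np.linalg.norm(n_est, axis=1, keepdims=True)

def mean_angle_deg(a, b):
    """Mean angular error in degrees between two sets of unit normals."""
    cos = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

err_prior = mean_angle_deg(n_prior, n_true)  # error of the coarse prior
err_est = mean_angle_deg(n_est, n_true)      # error after disambiguation
```

Because the least-squares fit averages the prior's noise over all pixels, the disambiguated normals end up more accurate than the coarse prior itself, which is what makes a rough multi-view stereo prior sufficient for this step.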

Paper (with supplementary material)


@inproceedings{InternetPS14,
  author    = {Boxin Shi and Kenji Inose and Yasuyuki Matsushita and Ping Tan and Sai-Kit Yeung and Katsushi Ikeuchi},
  title     = {Photometric Stereo using Internet Images},
  booktitle = {International Conference on 3D Vision (3DV)},
  year      = {2014},
}

Poster (Download)



Input image examples

Estimated surface normals   Reconstructed surface

Video (Download [22M])


Part of this work was done while Boxin Shi was an intern at MSRA and a Ph.D. candidate at the University of Tokyo. Boxin Shi is supported by the SUTD-MIT joint postdoctoral fellowship. Sai-Kit Yeung is supported by SUTD StartUp Grant ISTD 2011 016, SUTD-MIT International Design Center Grant IDG31300106, and Singapore MOE Academic Research Fund MOE2013-T2-1-159.