A paper recently submitted to the arXiv preprint server, hosted by the Cornell University Library, describes how to obtain accurate 3D body models and textures of arbitrary people from a single monocular video of a moving person. The paper was authored by Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt and Gerard Pons-Moll, and it will be presented at the upcoming Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City.
Building on a parametric body model, the researchers present a robust processing pipeline that achieves 3D model fits with 5 mm accuracy, even for clothed people. Their main contribution is a method that non-rigidly deforms the silhouette cones corresponding to the dynamic human silhouettes, producing a visual hull in a common reference frame from which the surface can be reconstructed.
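The visual-hull idea behind this paragraph, intersecting the volumes back-projected from each silhouette, can be illustrated with classic space carving on a grid. The sketch below is a deliberately simplified 2D version with axis-aligned orthographic views (an assumption for illustration only; the paper's contribution is precisely that it *deforms* each cone before intersecting, which is not shown here):

```python
import numpy as np

# Space carving on a tiny 2D "voxel" grid with two orthographic views.
N = 4
grid = np.ones((N, N), dtype=bool)  # start with a fully occupied grid

# Hypothetical silhouettes seen along the x- and y-axes: the subject
# occupies rows/columns 1..2 in each view.
sil_x = np.zeros(N, dtype=bool); sil_x[1:3] = True
sil_y = np.zeros(N, dtype=bool); sil_y[1:3] = True

grid &= sil_x[:, None]  # carve away cells outside the x-view silhouette
grid &= sil_y[None, :]  # carve away cells outside the y-view silhouette
# the remaining True cells form the visual hull: here, a 2x2 block
```

Each additional silhouette can only remove material, so the hull shrinks toward the true shape as more views are added; that is why the paper asks subjects to turn a full 360°.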
As Science magazine, which first reported the news, explains, the system has three stages: “First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.”
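The first stage's per-frame person/background separation can be sketched with plain background subtraction. This is a minimal stand-in, assuming a clean background frame is available; the actual pipeline relies on learned segmentation, and the threshold value and function name below are illustrative choices:

```python
import numpy as np

def silhouette(frame: np.ndarray, background: np.ndarray,
               thresh: float = 30.0) -> np.ndarray:
    """Binary person mask for one video frame via background subtraction.

    frame, background: H x W x 3 uint8 images. A pixel joins the
    silhouette if any color channel differs strongly from the background.
    """
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff.max(axis=2) > thresh

# toy example: a black "background" and a frame with a 2x2 bright square
bg = np.zeros((4, 4, 3), dtype=np.uint8)
fr = bg.copy()
fr[1:3, 1:3] = 200
mask = silhouette(fr, bg)  # True exactly on the 2x2 square
```

In practice a per-pixel difference like this is brittle under lighting changes and camera motion, which is one reason the authors use machine-learning-based segmentation instead.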
This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton from a large number of frames. The researchers present evaluation results for a number of test subjects and analyze overall performance. Since it requires only a smartphone or webcam, the method lets anyone create their own fully animatable digital double.
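The "consensus shape" step, fusing many per-frame observations into one model once every frame has been unposed into the common T-pose, can be caricatured as a per-vertex average. This is only a sketch: the paper solves an optimization over the deformed silhouette cones, whereas the function below simply averages vertex positions (the array layout and name are assumptions):

```python
import numpy as np

def consensus_shape(unposed_frames: np.ndarray) -> np.ndarray:
    """Fuse per-frame unposed meshes into a single consensus mesh.

    unposed_frames: F x V x 3 array of vertex positions, one mesh per
    frame, all already transformed into the shared T-pose. A per-vertex
    mean is the simplest possible fusion; the paper's consensus is an
    optimization, not an average.
    """
    return unposed_frames.mean(axis=0)

# three noisy observations of a single vertex near (1, 0, 0)
frames = np.array([[[1.1, 0.0, 0.0]],
                   [[0.9, 0.0, 0.0]],
                   [[1.0, 0.0, 0.0]]])
shape = consensus_shape(frames)
```

Averaging over many frames is what lets per-frame noise cancel out, which is why the method benefits from a video rather than a single photo.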
The resulting models can be integrated into 3D applications such as video games and animated films, used in social VR, or applied to virtual try-on for online fashion shopping. Eventually, the system could be modified to export accurate 3D models directly, settling the ongoing dilemma of how to give someone a 3D replica of themselves as a surprise gift.