Since before Eadweard Muybridge famously photographed the motions of a running horse in the 1870s, humans have sought to understand how our bodies and those of animals move through space. That drive to dissect and understand movement continues today: a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a new, state-of-the-art technique for analyzing movement.
The process proposed by CSAIL involves an algorithm that converts 2D videos into 3D printed “motion sculptures.” Aptly named MoSculp, the project offers a deeper, more tactile visualization of the human body in motion and has the potential to offer unprecedented insight into the movements of professional athletes, dancers or anyone, really.
Until now, people have mostly relied on 2D video to study movement. And while videos are more informative than static photos, they offer only a limited visualization of movement because they do not clearly show the 3D structure of the person in motion. As the researchers note, access to a person’s full geometry can give valuable insight into the subtle motions or postures that make people faster or more precise.
“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” elaborated Xiuming Zhang, PhD student and lead author of the corresponding study. “You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve.”
The MoSculp process begins with a video sequence of a person moving through space. This video is fed into the MoSculp system, which automatically identifies 2D keypoints on the subject’s body, such as the hips, knees and ankles. Next, it detects the “best possible poses” from those points and generates what the researchers call 3D skeletons.
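To give a rough sense of what selecting a “best possible pose” from 2D keypoints might involve, here is a deliberately simplified, hypothetical sketch (the actual MoSculp models are far more sophisticated): for each frame, pick the candidate 3D skeleton whose projected joints best match the detected 2D keypoints.

```python
from math import dist

# Hypothetical toy types: real systems use richer representations.
Keypoint2D = tuple[float, float]        # (x, y) in image coordinates
Joint3D = tuple[float, float, float]    # (x, y, z) in world coordinates

def project(joint: Joint3D) -> Keypoint2D:
    """Toy orthographic projection: drop the depth coordinate."""
    x, y, _ = joint
    return (x, y)

def reprojection_error(pose: list[Joint3D], keypoints: list[Keypoint2D]) -> float:
    """Total distance between projected joints and detected 2D keypoints."""
    return sum(dist(project(j), k) for j, k in zip(pose, keypoints))

def best_pose(candidates: list[list[Joint3D]], keypoints: list[Keypoint2D]) -> list[Joint3D]:
    """Select the candidate 3D skeleton that best explains the 2D detections."""
    return min(candidates, key=lambda pose: reprojection_error(pose, keypoints))
```

For example, given 2D detections for a hip, knee and ankle and two candidate 3D skeletons, `best_pose` returns the candidate whose projection lines up with the detections.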
These 3D skeleton structures are then stitched together, and a 3D-printable motion sculpture is generated. The motion sculpture itself consists of a series of full-body models connected by a continuous movement path. The MoSculp system also lets users customize the figures to focus on different body parts, assign various materials and adjust lighting.
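The stitching step can be illustrated with a minimal, hypothetical sketch: given the 3D position of one tracked joint (say, a wrist) in each keyframe, densify the trajectory by linear interpolation so that the sculpture’s movement path is continuous rather than a set of disconnected poses. This is an assumption-laden toy, not the paper’s actual sweep-surface construction.

```python
Point3D = tuple[float, float, float]

def lerp(a: Point3D, b: Point3D, t: float) -> Point3D:
    """Linearly interpolate between two 3D points, t in [0, 1]."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

def sweep_path(keyframes: list[Point3D], steps_between: int = 10) -> list[Point3D]:
    """Connect successive per-frame joint positions with interpolated
    points, forming the continuous path that links the body models."""
    path = []
    for a, b in zip(keyframes, keyframes[1:]):
        for s in range(steps_between):
            path.append(lerp(a, b, s / steps_between))
    path.append(keyframes[-1])
    return path
```

A real pipeline would sweep an entire cross-section of the body along such a path to produce a printable solid; the interpolation above only captures the idea of turning discrete poses into a continuous ribbon.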
“Dance and highly-skilled athletic motions often seem like ‘moving sculptures’ but they only create fleeting and ephemeral shapes,” said Courtney Brigham, communications lead at Adobe. “This work shows how to take motions and turn them into real sculptures with objective visualizations of movement, providing a way for athletes to analyze their movements for training, requiring no more equipment than a mobile camera and some computing time.”
In their study, the researchers found that over 75% of subjects said that the 3D printed MoSculp figures offered a more accurate and detailed visualization of motion compared to more standard photographic methods. That being said, the technique is better suited to larger movements, such as throwing a ball.
At this stage, the MoSculp system can only handle single-figure motions, but the researchers plan to extend it to multiple bodies in motion. By accommodating multiple people in the 3D printed motion sculptures, they hope to investigate social disorders, interpersonal interactions and even team dynamics.
The MIT CSAIL researchers will present the project at the upcoming User Interface Software and Technology (UIST) conference in Berlin.