MPI-IS develops technology to make virtual clothing try-on practical
Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS) have developed technology to digitally capture clothing on moving people, turn it into 3D digital garments, and dress virtual avatars with them. This new technology makes virtual clothing try-on practical.
Traditional virtual clothing try-on involves obtaining the 2D clothing pattern from the manufacturer, sizing it to a body, and simulating how the garment drapes on that body. The new technique replaces garment simulation with garment capture: capturing existing garments and transferring them to new people greatly simplifies the process of virtual try-on.
“Our approach is to scan a person wearing the garment, separate the clothing from the person, and then render it on top of a new person,” says Dr. Gerard Pons-Moll, research scientist at MPI-IS and principal investigator of the project. “This process captures all the detail present in real clothing, including how it moves, which is hard to replicate with simulation,” he adds. The results of ClothCap (Cloth Capture) exceed the realism of existing approaches.
ClothCap uses 4D movies of people recorded with a unique high-resolution 4D scanner. The system uses 66 cameras, together with projectors that illuminate the person being scanned. “This scanner captures every wrinkle of clothing at high resolution. It is like having 66 eyes looking at a person from every possible angle,” says Michael Black, director at MPI-IS. “This allows us to study humans in motion like never before.”
Like any movie, such a recording can be replayed, but the actors and their clothing cannot be changed. ClothCap, in contrast, computes the body shape and motion under the clothing while separating and tracking the garments on the body as it moves. “The software turns the captured scans into separate meshes corresponding to the clothing and the body,” says Dr. Sergi Pujades, postdoctoral researcher at MPI-IS and one of the main authors of this work.
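To give a rough idea of what that separation amounts to, the toy sketch below splits a labelled scan mesh into one mesh per garment. It is only an illustration of the concept, not the ClothCap implementation, and all names (the vertex arrays, the per-vertex labels) are hypothetical.

```python
# Illustrative sketch only, not ClothCap: split a labelled scan mesh into
# separate meshes for the body and each garment.
import numpy as np

def split_scan_by_label(vertices, faces, vertex_labels):
    """vertices:      (V, 3) scan vertex positions
    faces:            (F, 3) vertex indices of each triangle
    vertex_labels:    (V,)   label per vertex, e.g. 'skin', 'shirt', 'pants'
    Returns a dict mapping each label to a self-contained (vertices, faces) pair."""
    parts = {}
    for label in np.unique(vertex_labels):
        # keep only triangles whose three vertices all carry this label
        keep = np.all(vertex_labels[faces] == label, axis=1)
        part_faces = faces[keep]
        # re-index vertices so the sub-mesh stands on its own
        used, new_faces = np.unique(part_faces, return_inverse=True)
        parts[label] = (vertices[used], new_faces.reshape(-1, 3))
    return parts
```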
Traditional marker-based motion capture records only skeletal motion; placing hundreds of markers on clothing is impractical, and it is not well understood how to map the captured clothing to new characters. ClothCap makes this easy because the clothing is captured in correspondence with the body. “The algorithm literally subtracts the clothing from the recorded subject and adds it to a new body to produce a realistic result,” says Gerard Pons-Moll. “It’s like doing arithmetic with people and their clothing. It’s cool!” he continues.
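The “arithmetic” Pons-Moll describes can be sketched very loosely as follows. This is not the published ClothCap algorithm; it only illustrates the idea of expressing a captured garment as offsets from the body underneath it and re-applying those offsets to a new body. All function and variable names are invented for the example, and the garment-to-body correspondence is assumed to be given.

```python
# Hedged sketch of garment transfer as "subtract one body, add another".
import numpy as np

def transfer_garment(garment_verts, source_body_verts, target_body_verts,
                     nearest_body_vertex):
    """garment_verts:       (G, 3) captured garment vertices
    source_body_verts:      (B, 3) body estimated under the clothing
    target_body_verts:      (B, 3) new body in the same topology and pose
    nearest_body_vertex:    (G,)   body vertex associated with each garment vertex"""
    # "subtract" the source body: the garment becomes a set of displacements
    offsets = garment_verts - source_body_verts[nearest_body_vertex]
    # "add" the target body: apply the same displacements to the new shape
    return target_body_verts[nearest_body_vertex] + offsets
```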
“Motion capture revolutionised the fields of animation, gaming and biomechanics. In much the same way that motion capture replaced much of the manual editing of motion from scratch, techniques like ClothCap could replace the way clothed characters are edited and animated digitally,” explains Pons-Moll. This is the first method to capture clothing in motion and use it to animate new characters. “Other methods capture people in clothing, but nobody could realistically animate new subjects with the captured clothing,” says Sergi Pujades.
ClothCap provides a foundational technology for virtual clothing try-on. “First, a retailer scans a professional model in a variety of poses and clothing to create a digital wardrobe of clothing items. Then a user can select an item and visualise how it looks on their virtual avatar,” says Michael Black.
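Reusing the hypothetical transfer_garment sketch above, that retail workflow could look roughly like the toy example below. The wardrobe entry, vertex counts, and data are all made up for illustration; a real pipeline would fill them from captured scans and an estimated user avatar.

```python
# Toy illustration of the try-on workflow described above (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
B, G = 5000, 4000  # invented body and garment vertex counts

# "digital wardrobe": each entry stores the captured garment, the model's body
# estimated underneath it, and the garment-to-body correspondence
wardrobe = {
    "denim_jacket": {
        "garment": rng.normal(size=(G, 3)),
        "source_body": rng.normal(size=(B, 3)),
        "nearest_body_vertex": rng.integers(0, B, size=G),
    },
}

user_avatar = rng.normal(size=(B, 3))  # the shopper's virtual avatar (same topology)
item = wardrobe["denim_jacket"]
dressed = transfer_garment(item["garment"], item["source_body"],
                           user_avatar, item["nearest_body_vertex"])
print(dressed.shape)  # (4000, 3): garment vertices draped on the new body
```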
The work has several limitations. For example, cloth wrinkles do not change with body shape and ClothCap does not allow the synthesis of novel motions. The team plans to address such limitations in future work. (SV)