Researchers From ETH Zurich and Max Planck Propose HOOD: A New Method that Leverages Graph Neural Networks, Multi-Level Message Passing, and Unsupervised Training to Enable Efficient Prediction of Realistic Clothing Dynamics


Telepresence, virtual try-on, video games, and many other applications that depend on high-fidelity digital humans require the ability to simulate appealing and realistic clothing behavior. Physics-based simulation is a popular way to produce natural dynamic motion. While physical simulation can deliver impressive results, it is computationally expensive, sensitive to initial conditions, and often requires experienced animators; state-of-the-art methods are not built to meet the strict computational budgets of real-time applications. Deep learning-based techniques have begun to produce efficient, high-quality results.

However, several restrictions have so far prevented such methods from realizing their full potential. First, current techniques compute garment deformations largely as a function of body pose and rely on linear-blend skinning. While skinning-based approaches can produce impressive results for tight-fitting clothes such as shirts and sportswear, they struggle with dresses, skirts, and other loose-fitting garments that do not closely follow body motion. Moreover, many state-of-the-art learning-based techniques are garment-specific and can only predict deformations for the particular outfit they were trained on. The need to retrain these methods for every garment constrains their application.

In this study, researchers from ETH Zurich and the Max Planck Institute for Intelligent Systems present a novel method for predicting dynamic garment deformations using graph neural networks (GNNs). By reasoning about the relationship between local deformations, forces, and accelerations, their approach learns to anticipate the behavior of physically realistic fabrics. Because it operates locally, independent of the garment's overall structure and shape, the approach generalizes directly to arbitrary body shapes and motions. Although GNNs have shown promise as a replacement for physics-based simulation, naively applying them to clothing simulation produces unsatisfactory results. In a GNN (implemented with MLPs), the feature vectors of a mesh's vertices and their one-ring neighborhoods are transformed locally.

The messages produced by each transformation are then used to update the feature vectors, and repeating this procedure lets signals diffuse across the mesh. However, a fixed number of message-passing steps limits signal propagation to a fixed radius. This is a problem when modeling clothing, where elastic waves caused by stretching travel quickly through the material, producing quasi-global, near-instantaneous long-range coupling between vertices. Too few steps slow signal propagation and cause unpleasant overstretching artifacts that give garments an unnatural, rubbery look; naively increasing the number of iterations drives up computation time.
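The local transform-and-update cycle described above can be illustrated with a minimal sketch. This is not the paper's architecture: the single ReLU layer standing in for each MLP, the feature sizes, and the random weights are all illustrative assumptions.

```python
# Minimal sketch of one round of graph message passing on a mesh.
# A single ReLU layer stands in for each learned MLP (an assumption,
# not the HOOD architecture); weights are randomly initialized.
import numpy as np

def mlp(x, W, b):
    return np.maximum(0.0, x @ W + b)

def message_passing_step(node_feats, edges, W_msg, b_msg, W_upd, b_upd):
    """One step: each directed edge (i, j) sends a message built from both
    endpoint features; each node aggregates incoming messages and then
    updates its own feature vector from [old feature, aggregated messages]."""
    n, _ = node_feats.shape
    agg = np.zeros((n, W_msg.shape[1]))
    for i, j in edges:  # edges of the mesh = one-ring neighborhoods
        msg = mlp(np.concatenate([node_feats[i], node_feats[j]]), W_msg, b_msg)
        agg[j] += msg
    return mlp(np.concatenate([node_feats, agg], axis=1), W_upd, b_upd)

rng = np.random.default_rng(0)
d, d_msg = 8, 8
feats = rng.normal(size=(4, d))                       # 4 vertices, d features each
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
W_msg = rng.normal(size=(2 * d, d_msg)); b_msg = np.zeros(d_msg)
W_upd = rng.normal(size=(d + d_msg, d)); b_upd = np.zeros(d)
out = message_passing_step(feats, edges, W_msg, b_msg, W_upd, b_upd)
print(out.shape)  # (4, 8)
```

After one such step, information has moved only one edge; after k steps, only k edges. That fixed radius is exactly the limitation the next paragraph addresses.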

This issue is exacerbated by the fact that the maximum size and resolution of simulation meshes are unknown a priori, which rules out choosing a conservative, suitably high number of iterations in advance. To address this, the authors propose a message-passing scheme over a hierarchical graph that interleaves propagation steps at different levels of resolution. This lets fast-moving waves arising from stiff stretching modes be handled efficiently at coarse scales, while finer scales provide the resolution needed to capture local detail such as folds and wrinkles. Through experiments, they demonstrate that their graph representation improves predictions both qualitatively and quantitatively at comparable computational budgets.
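The benefit of interleaving resolution levels can be shown with a toy sketch. Coarsening by simply subsampling vertices and averaging neighbors is an illustrative assumption, not the paper's learned scheme; the point is only that one coarse-level hop carries a signal across the whole mesh where many fine-level hops would be needed.

```python
# Toy multi-level message passing: a fine pass, a pass on a coarsened
# graph, then another fine pass. Neighbor averaging stands in for a
# learned message-passing round (an assumption for illustration).
import numpy as np

def propagate(feats, edges):
    out = feats.copy()
    deg = np.ones(len(feats))
    for i, j in edges:
        out[j] += feats[i]
        deg[j] += 1
    return out / deg[:, None]

def multi_level_step(feats, fine_edges, coarse_nodes, coarse_edges):
    feats = propagate(feats, fine_edges)        # fine: local folds/wrinkles
    coarse = propagate(feats[coarse_nodes], coarse_edges)  # coarse: long range
    feats = feats.copy()
    feats[coarse_nodes] = coarse                # prolong coarse result back
    return propagate(feats, fine_edges)         # blend into fine neighbors

# Chain of 6 vertices; a signal starts at vertex 0.
chain = [(i, i + 1) for i in range(5)] + [(i + 1, i) for i in range(5)]
feats = np.zeros((6, 1)); feats[0] = 1.0
# Coarse level: just the two endpoints, directly connected.
out = multi_level_step(feats, chain, np.array([0, 5]), [(0, 1), (1, 0)])
print(out[5, 0])  # nonzero: the signal crossed 5 fine edges in one step
```

With purely fine-level passes, the signal would need five rounds to reach vertex 5; the single coarse hop delivers it in one multi-level step.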

To increase the generalization potential of their method, they combine graph-based neural networks with ideas from differentiable simulation by adopting an incremental potential for implicit time stepping as the loss function. This formulation removes the need for any ground-truth (GT) annotations, so the network can be trained entirely unsupervised while simultaneously learning multi-scale clothing dynamics, the influence of material parameters, collision response, and frictional contact with the underlying body. The graph formulation also makes it possible to simulate clothing with varying and changing topology, such as a shirt being unbuttoned in motion.
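The incremental-potential view of implicit time stepping is standard in simulation: the next state minimizes an inertia term plus the potential energy. In a common form (the paper's exact energy terms may differ):

```latex
x_{t+1} = \arg\min_{x} \; \frac{1}{2\,\Delta t^{2}} \,\bigl\lVert x - \bigl(x_t + \Delta t\, v_t\bigr)\bigr\rVert_{M}^{2} \; + \; \Psi(x)
```

where $M$ is the mass matrix and $\Psi$ collects potential-energy terms (stretching, bending, gravity, and contact). Using this objective itself as the training loss means the network is penalized exactly as a physical simulator would be, which is why no ground-truth simulation data is required.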

Their HOOD approach combines graph neural networks, multi-level message passing, and unsupervised training to enable real-time prediction of realistic clothing dynamics for a wide variety of garment styles and body types. They experimentally demonstrate that, compared to state-of-the-art methods, their approach offers strategic advantages in flexibility and generality. In particular, they show that a single trained network:

Effectively predicts physically-realistic dynamic motion for a wide range of garments.

Generalizes to new garment types and shapes not seen during training.

Permits run-time changes in material properties and garment sizes.

Supports dynamic topology changes like opening zippers or unbuttoning shirts.

Models and code are available for research on GitHub.

Check out the Project Page, GitHub link, and Paper.


The post Researchers From ETH Zurich and Max Planck Propose HOOD: A New Method that Leverages Graph Neural Networks, Multi-Level Message Passing, and Unsupervised Training to Enable Efficient Prediction of Realistic Clothing Dynamics appeared first on MarkTechPost.
