Text-guided 3D Human Generation from 2D Collections


Tsu-Jui Fu1   Wenhan Xiong2   Yixin Nie2
Jingyu Liu2   Barlas Oğuz2   William Yang Wang1
1UC Santa Barbara   2Meta
Conference on Empirical Methods in Natural Language Processing (EMNLP) 2023 (Findings)


Abstract

3D human modeling has been widely used for engaging interaction in gaming, film, and animation. The customization of these characters is crucial for creativity and scalability, which highlights the importance of controllability. In this work, we introduce Text-guided 3D Human Generation (T3H), where a model generates a 3D human guided by a fashion description. There are two goals: 1) the 3D human should render articulately, and 2) its outfit should be controlled by the given text. To address this T3H task, we propose Compositional Cross-modal Human (CCH). CCH adopts cross-modal attention to fuse compositional human rendering with the extracted fashion semantics, so that each human body part perceives the relevant textual guidance for its visual patterns. We incorporate the human prior and semantic discrimination to enhance 3D geometry transformation and fine-grained consistency, enabling CCH to learn from 2D collections for data efficiency. We conduct evaluations on DeepFashion and SHHQ with diverse fashion attributes covering the shape, fabric, and color of upper and lower clothing. Extensive experiments demonstrate that CCH achieves superior results for T3H with high efficiency.
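To give a rough feel for the cross-modal attention described above, the sketch below shows per-body-part visual features attending over the tokens of a fashion description. This is a minimal illustration, not the authors' released implementation: the module name, feature dimensions, and part/token counts (e.g., `PartTextCrossAttention`, 24 SMPL-style parts, 77 CLIP-style tokens) are assumptions for demonstration only.

```python
# Minimal sketch of part-wise cross-modal attention (hypothetical names/shapes).
import torch
import torch.nn as nn


class PartTextCrossAttention(nn.Module):
    """Fuse per-body-part visual features with text token embeddings.

    Each body part's feature acts as a query over the fashion description,
    so the upper-body part can attend to phrases like "long-sleeved sweater"
    while the lower-body part attends to "long denim pants".
    """

    def __init__(self, part_dim: int = 256, text_dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=part_dim, kdim=text_dim, vdim=text_dim,
            num_heads=num_heads, batch_first=True,
        )
        self.norm = nn.LayerNorm(part_dim)

    def forward(self, part_feats: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # part_feats:  (batch, num_parts, part_dim)  -- one feature per body part
        # text_tokens: (batch, num_tokens, text_dim) -- encoded fashion description
        fused, _ = self.attn(query=part_feats, key=text_tokens, value=text_tokens)
        # Residual connection keeps the original geometric part features intact.
        return self.norm(part_feats + fused)


if __name__ == "__main__":
    parts = torch.randn(2, 24, 256)   # 24 body parts (assumed, SMPL-style)
    text = torch.randn(2, 77, 512)    # 77 text tokens (assumed, CLIP-style)
    fused = PartTextCrossAttention()(parts, text)
    print(fused.shape)  # torch.Size([2, 24, 256])
```

The fused per-part features would then condition the compositional rendering, so the textual guidance stays aligned with the body part it describes.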



Qualitative Results




Pose-guided T3H



Animatable T3H


The woman is wearing a floral-patterned long-sleeved sweater and long denim pants
+ walking

She is dressed in a long-sleeved chiffon shirt with striped three-point shorts
+ shooting a basketball

This man is wearing a striped medium sleeve and long cotton trousers
+ squatting

She is sporting a graphic-patterned tank top with floral-patterned three-point shorts
+ kicking soccer