MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts

J Zhu, Y Chen, M Ding, P Luo, L Wang, J Wang
arXiv preprint arXiv:2410.23332, 2024. arxiv.org
Text-to-image diffusion has attracted vast attention due to its impressive image-generation capabilities. However, when it comes to human-centric text-to-image generation, particularly in the context of faces and hands, the results often fall short of naturalness due to insufficient training priors. We alleviate this issue from two perspectives. 1) From the data aspect, we carefully collect a human-centric dataset comprising over one million high-quality human-in-the-scene images and two specific sets of close-up images of faces and hands. These datasets collectively provide a rich prior knowledge base to enhance the human-centric image-generation capabilities of the diffusion model. 2) On the methodological front, we propose a simple yet effective method called Mixture of Low-rank Experts (MoLE), which treats low-rank modules trained on close-up hand and face images, respectively, as experts. This concept draws inspiration from our observation of low-rank refinement: a low-rank module trained on a customized close-up dataset has the potential to enhance the corresponding image part when applied at an appropriate scale. To validate the superiority of MoLE for human-centric image generation against state-of-the-art methods, we construct two benchmarks and perform evaluations with diverse metrics and human studies. Datasets, model, and code are released at https://sites.google.com/view/mole4diffuser/.
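The abstract does not spell out the exact architecture, but the core idea of combining low-rank (LoRA-style) expert modules at learned scales can be sketched as follows. This is a minimal illustration, not the paper's implementation: the expert shapes, the softmax gate, and the function names (`lora_delta`, `mole_forward`) are assumptions for the sake of the example.

```python
import numpy as np

def lora_delta(A, B):
    # A low-rank weight update: B (d x r) @ A (r x d) gives a rank-r
    # d x d correction to the frozen base weight W.
    return B @ A

def mole_forward(x, W, experts, gate_logits):
    """Mix low-rank experts into a frozen linear layer (illustrative).

    x           : (batch, d) input activations
    W           : (d, d) frozen base weight of the layer
    experts     : list of (A, B) low-rank pairs, e.g. a face expert and
                  a hand expert trained on close-up datasets
    gate_logits : per-expert scores; in MoLE these would come from a
                  small learned gating network (assumed here)
    """
    # Softmax over experts -> per-expert mixing scale.
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()
    # Weighted sum of low-rank corrections, applied on top of W.
    delta = sum(wi * lora_delta(A, B) for wi, (A, B) in zip(w, experts))
    return x @ (W + delta).T
```

With a near-one-hot gate, the layer reduces to the base weight plus a single expert's low-rank update, which matches the observation that one close-up-trained module refines its corresponding image region when applied at a suitable scale.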