SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data

University of North Carolina at Chapel Hill
*: equal contribution
Comparison of different fine-tuning paradigms for text-to-image (T2I) generation models. (a) Supervised Fine-tuning (SFT): a T2I model is trained with image-text pairs from existing datasets. (b) Fine-tuning with Human Preference (e.g., RL/DPO): human annotators rank or score images in terms of text alignment, and the T2I model is trained to maximize these human preference scores. (c) SELMA: instead of collecting image-text pairs or human preference annotations, we automatically collect image-text pairs for desired skills with an LLM and a T2I model, and create a multi-skill T2I model by learning and merging skill-specific expert models.

Abstract

Recent text-to-image (T2I) generation models have demonstrated impressive capabilities in creating images from text descriptions. However, these T2I generation models often fall short of generating images that precisely match the details of the text inputs, for example producing incorrect spatial relationships or missing objects. In this paper, we introduce SELMA: Skill-Specific Expert Learning and Merging with Auto-Generated Data, a novel paradigm to improve the faithfulness of T2I models by fine-tuning them on automatically generated, multi-skill image-text datasets, with skill-specific expert learning and merging. First, SELMA leverages an LLM's in-context learning capability to generate multiple datasets of text prompts that can teach different skills, and then generates training images with a T2I model based on these prompts. Next, SELMA adapts the T2I model to the new skills by learning multiple single-skill LoRA (low-rank adaptation) experts followed by expert merging. Our independent expert fine-tuning specializes multiple models for different skills, and expert merging helps build a joint multi-skill T2I model that can generate faithful images given diverse text prompts, while mitigating knowledge conflicts between different datasets. We empirically demonstrate that SELMA significantly improves the semantic alignment and text faithfulness of state-of-the-art T2I diffusion models on multiple benchmarks (+2.1% on TIFA and +6.9% on DSG), human preference metrics (PickScore, ImageReward, and HPS), as well as human evaluation. Moreover, fine-tuning with image-text pairs auto-collected via SELMA shows comparable performance to fine-tuning with ground truth data. Lastly, we show that fine-tuning with images from a weaker T2I model can help improve the generation quality of a stronger T2I model, suggesting promising weak-to-strong generalization in T2I models.
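To make the expert-merging step concrete, below is a minimal PyTorch sketch of combining skill-specific LoRA experts into a single update for the base T2I model. It assumes a (uniform by default) weighted average of each expert's low-rank update delta_W = B @ A; all function and variable names are illustrative, and the paper's exact merging scheme may differ.

import torch

def merge_lora_experts(experts, weights=None):
    """Merge skill-specific LoRA experts into one set of weight updates.

    `experts`: list of dicts mapping a module name to its LoRA factors
    (A with shape [r, d_in], B with shape [d_out, r]). Each expert's update
    is delta_W = B @ A; the experts are combined with a weighted average
    (uniform by default).
    """
    n = len(experts)
    weights = weights if weights is not None else [1.0 / n] * n
    merged = {}
    for expert, w in zip(experts, weights):
        for name, (A, B) in expert.items():
            delta = w * (B @ A)                      # this expert's low-rank update
            merged[name] = merged.get(name, 0.0) + delta
    return merged  # module name -> merged delta_W, added on top of the base weights

# Example with two toy "experts" for a single 16x16 linear layer (rank 4):
rank, dim = 4, 16
expert_a = {"unet.attn.to_q": (torch.randn(rank, dim), torch.randn(dim, rank))}
expert_b = {"unet.attn.to_q": (torch.randn(rank, dim), torch.randn(dim, rank))}
merged = merge_lora_experts([expert_a, expert_b])
print(merged["unet.attn.to_q"].shape)  # torch.Size([16, 16])

In practice, the merged updates would be folded into (or loaded alongside) the diffusion model's fine-tuned attention and feed-forward weights; recent versions of libraries such as PEFT and diffusers also provide utilities for weighting and combining multiple LoRA adapters.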


Method


Illustration of the four-stage pipeline of SELMA. (a) Prompt Generation: Given a short skill description and a few (i.e., three) seed examples of a specific skill, we use an LLM to generate prompts that teach the skill, maintaining prompt diversity via text-similarity-based filtering. (b) Image Generation: Given the LLM-generated text prompts, we generate training images with a T2I model. (c) Skill-Specific Expert Learning: We learn skill-specific expert T2I models via LoRA fine-tuning. (d) Merging Expert Models: We obtain a multi-skill T2I model by merging the skill-specific LoRA parameters.
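As a concrete illustration of the diversity filtering in stage (a), here is a minimal sketch that keeps an LLM-generated prompt only if it is not too similar to prompts already collected for the skill. The sentence-embedding model, cosine-similarity criterion, and threshold are illustrative assumptions, not the paper's exact configuration.

from sentence_transformers import SentenceTransformer, util

# Illustrative choices; the paper's embedding model and threshold may differ.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
SIM_THRESHOLD = 0.9

def filter_prompts(candidate_prompts, kept_prompts=None):
    """Greedy text-similarity filtering: keep a candidate prompt only if its
    maximum cosine similarity to already-kept prompts is below the threshold."""
    kept = list(kept_prompts or [])
    for prompt in candidate_prompts:
        if not kept:
            kept.append(prompt)
            continue
        sims = util.cos_sim(
            encoder.encode(prompt, convert_to_tensor=True),
            encoder.encode(kept, convert_to_tensor=True),
        )
        if sims.max().item() < SIM_THRESHOLD:
            kept.append(prompt)
    return kept

# Usage: repeatedly query the LLM for new candidate prompts for a skill, then
# pass them through filter_prompts before generating training images.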


Generation Examples


SELMA helps improve SDXL in various skills, including counting, text rendering, spatial relationships, and attribute binding.


Experiments



SELMA vs. Different Alignment Methods for T2I Generation

Comparison of SELMA and different text-to-image alignment methods on text faithfulness and human preference metrics. SELMA achieves the best performance on all five metrics when applied to different base models (i.e., SD v1.4, SD v2, and SDXL).

Effectiveness of Learning & Merging Skill-Specific Experts

Comparison of a single LoRA and LoRA merging in text faithfulness and human preference. We show that learning and merging skill-specific LoRA experts is more effective than training a single LoRA jointly on multiple datasets.

Effectiveness of Auto-Generated Data

DSG accuracy of SD v2 fine-tuned with different image-text pairs. We find that fine-tuning with auto-generated data can achieve comparable performance to fine-tuning with ground truth data.

Weak to Strong Generalization

Comparison of different image generators for creating training images. In addition to generating training images with the same model being fine-tuned, we also experiment with using a smaller model as the image generator (row 4). SDXL is a larger and stronger model than SD v2, and we show that a weaker T2I model can help improve a stronger one.

Human Evaluation

Human evaluation on 200 sampled text prompts from DSG, where we show the win vs. lose percentages of SDXL and SDXL+SELMA. SDXL+SELMA is preferred by human annotators over SDXL in terms of text alignment.

BibTeX

@article{li2024selma,
  author    = {Jialu Li and Jaemin Cho and Yi-Lin Sung and Jaehong Yoon and Mohit Bansal},
  title     = {SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data},
  journal   = {arXiv preprint arXiv:2403.06952},
  year      = {2024},
  url       = {https://arxiv.org/abs/2403.06952}
}