Abstract
Recent advances in text-to-image diffusion models have achieved remarkable success in generating high-quality, realistic images from textual descriptions. However, these approaches still struggle to precisely align the generated visual content with the concepts described in the prompt. In this article, we propose a two-stage coarse-to-fine semantic realignment method, named RealignDiff, to improve the alignment between text and images in text-to-image diffusion models. In the coarse semantic realignment stage, a novel caption reward, built on the BLIP-2 model, is proposed to evaluate the semantic discrepancy between the caption of the generated image and the given text prompt. In the fine semantic realignment stage, a local dense caption generation module and a reweighting attention modulation module refine the previously generated images from a local semantic view. Experimental results on the MS-COCO and ViLG-300 datasets demonstrate that the proposed two-stage coarse-to-fine semantic realignment method outperforms baseline realignment techniques by a substantial margin in both visual quality and semantic similarity to the input prompt.
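The coarse stage's caption reward can be pictured as scoring a generated image by how closely a caption recovered from it matches the original prompt. The sketch below is a minimal, illustrative reconstruction rather than the paper's implementation: the BLIP-2 checkpoint (`Salesforce/blip2-opt-2.7b`), the sentence-embedding cosine similarity, and the `caption_reward` helper are all assumptions layered on top of the abstract's description.

```python
# Illustrative sketch of a BLIP-2 caption reward (not the published code).
# Assumptions: HuggingFace BLIP-2 for captioning, sentence-transformers
# cosine similarity as the caption/prompt agreement score.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from sentence_transformers import SentenceTransformer, util

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# BLIP-2 turns the generated image back into a caption.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioner = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

# Hypothetical choice of text encoder for scoring caption/prompt agreement.
text_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def caption_reward(image: Image.Image, prompt: str) -> float:
    """Reward = semantic similarity between the image's caption and the prompt."""
    inputs = processor(images=image, return_tensors="pt").to(device, dtype)
    out = captioner.generate(**inputs, max_new_tokens=30)
    caption = processor.batch_decode(out, skip_special_tokens=True)[0].strip()
    emb = text_encoder.encode([caption, prompt], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()
```

Similarly, the fine stage's reweighting attention modulation can be read as rescaling cross-attention over selected prompt tokens during denoising. The sketch below (continuing from the imports above) is again an assumption about the mechanism, not the published module; it shows only the basic renormalized rescaling operation.

```python
def reweight_cross_attention(attn_probs: torch.Tensor,
                             token_indices: list[int],
                             scale: float = 1.5) -> torch.Tensor:
    """Illustrative attention reweighting: boost the cross-attention mass
    assigned to selected prompt tokens, then renormalize so each query
    still sums to 1 over the text tokens."""
    attn = attn_probs.clone()            # (batch, heads, queries, text_tokens)
    attn[..., token_indices] *= scale    # emphasize under-expressed concepts
    return attn / attn.sum(dim=-1, keepdim=True)
```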
| Original language | English |
|---|---|
| Pages (from-to) | 19010-19023 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 36 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2025 |
Keywords
- Diffusion model
- fine semantic realignment
- text-to-image generation
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence