
Abstract

Direct preference optimization (DPO) fine-tunes generative models with paired human preference data without an explicit reward model, and has garnered considerable attention in large language models (LLMs). However, the alignment of text-to-image (T2I) diffusion models with human preferences remains underexplored. Compared with supervised fine-tuning, existing methods for aligning diffusion models suffer from low training efficiency and subpar generation quality due to the long Markov chain and the intractability of the reverse process. To address these limitations, we introduce DDIM-InPO, an efficient method for direct preference alignment of diffusion models. Our approach conceptualizes the diffusion model as a single-step generative model, allowing us to selectively fine-tune the outputs of specific latent variables. To accomplish this, we first assign implicit rewards to any latent variable directly via a reparameterization technique, and then construct an inversion technique to estimate the latent variables appropriate for preference optimization. This procedure enables the diffusion model to fine-tune only the outputs of latent variables that correlate strongly with the preference dataset. Experimental results show that DDIM-InPO achieves state-of-the-art performance with just 400 fine-tuning steps, surpassing all preference alignment baselines for T2I diffusion models on human preference evaluation tasks.
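To make the idea concrete, the sketch below illustrates one way a DPO-style objective over DDIM-inverted latents might look in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: `ddim_invert`, `unet`, `ref_unet`, and the `beta` value are hypothetical placeholders.

import torch
import torch.nn.functional as F

def inpo_loss(unet, ref_unet, ddim_invert, x0_win, x0_lose, cond, t, beta=5000.0):
    """Illustrative DPO-style preference loss on DDIM-inverted latents."""
    # 1) Estimate the latent variables tied to the preference pair by running
    #    DDIM inversion up to timestep t (rather than random forward noising).
    xt_win, eps_win = ddim_invert(x0_win, cond, t)    # inverted latent + target noise
    xt_lose, eps_lose = ddim_invert(x0_lose, cond, t)

    # 2) Implicit reward via the reparameterized denoising error: a lower
    #    prediction error on the preferred sample implies a higher implicit reward.
    def err(model, xt, eps):
        return F.mse_loss(model(xt, t, cond), eps, reduction="none").mean(dim=(1, 2, 3))

    diff_win = err(unet, xt_win, eps_win) - err(ref_unet, xt_win, eps_win)
    diff_lose = err(unet, xt_lose, eps_lose) - err(ref_unet, xt_lose, eps_lose)

    # 3) DPO (Bradley-Terry) logistic objective on the implicit reward gap.
    return -F.logsigmoid(-beta * (diff_win - diff_lose)).mean()

Because gradients flow only through latents selected by the inversion step, the update concentrates on the variables most correlated with the preference data, which is the intuition behind the method's efficiency.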

Results

Samples generated by InPO-SDXL exhibit strong visual appeal and align well with human preferences.


Samples generated by InPO-SDXL show improved lighting and spatial structure, stronger realism and detail capture, more imaginative design, stable color consistency, and better multi-instance layout and text integration. These subtler advantages also align with human preferences.


The advantages of InPO-SDXL transfer seamlessly to conditional generation tasks.


Poster

BibTeX

@inproceedings{lu2025inpo,
  title={InPO: Inversion Preference Optimization with Reparametrized DDIM for Efficient Diffusion Model Alignment},
  author={Lu, Yunhong and Wang, Qichao and Cao, Hengyuan and Wang, Xierui and Xu, Xiaoyin and Zhang, Min},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={28629--28639},
  year={2025}
}