Abstract

Direct Preference Optimization (DPO) aligns text-to-image (T2I) generation models with human preferences using pairwise preference data. Although substantial resources are expended in collecting and labeling datasets, a critical aspect is often neglected: preferences vary across individuals and should be represented with more granularity. To address this, we propose SmPO-Diffusion, a novel method for modeling preference distributions to improve the DPO objective, along with a numerical upper-bound estimation for the diffusion optimization objective. First, we introduce a smoothed preference distribution to replace the original binary distribution. We employ a reward model to simulate human preferences and apply preference likelihood averaging to improve the DPO loss, so that the loss approaches zero when the two samples are preferred similarly. Furthermore, we utilize an inversion technique to simulate the trajectory preference distribution of the diffusion model, enabling more accurate alignment with the optimization objective. Through these straightforward modifications, our approach effectively mitigates the over-optimization and objective-misalignment issues present in existing methods. SmPO-Diffusion achieves state-of-the-art performance in preference evaluation, outperforming baselines across metrics at a lower training cost.
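
To make the smoothed objective concrete, below is a minimal PyTorch sketch of one way such a loss could be written. This is an illustrative reconstruction, not the authors' released code: the function names, the temperature parameter, and the exact loss form (a soft-label KL, chosen so the loss is exactly zero when the model's implied preference matches the smoothed label) are assumptions.

import torch

def smoothed_label(reward_w, reward_l, temperature=1.0):
    # Soft preference label in (0, 1) from reward-model scores.
    # Near-equal rewards yield a label near 0.5 instead of a hard 1/0.
    return torch.sigmoid((reward_w - reward_l) / temperature)

def smoothed_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, label, beta=5.0):
    # logp_*     : policy log-likelihoods of the preferred / dispreferred sample
    # ref_logp_* : frozen reference-model log-likelihoods of the same samples
    # KL(label || sigmoid(margin)) is zero when the implied preference matches
    # the soft label, e.g. a zero margin for a near-tie (label ~ 0.5).
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    p = torch.sigmoid(margin).clamp(1e-6, 1 - 1e-6)
    label = label.clamp(1e-6, 1 - 1e-6)
    return (label * torch.log(label / p)
            + (1 - label) * torch.log((1 - label) / (1 - p))).mean()

With a hard label of 1 this reduces (up to a constant) to the standard DPO loss, while a label near 0.5 leaves near-tied pairs essentially unpenalized.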

Pipeline

SmPO-Diffusion consists of two steps: (1) Smoothed preference modeling. We compute smoothed preference labels for all image pairs in the dataset (see the loss sketch above). (2) Optimization via ReNoise Inversion. We use ReNoise Inversion to estimate the diffusion model's sampling trajectory and maximize the smoothed preference log-likelihood under the diffusion model; a minimal sketch of the inversion step follows.
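
The sketch below shows a ReNoise-style fixed-point inversion step. It assumes a diffusers-style UNet whose call returns an object with a .sample field and a precomputed alphas_cumprod table; the step signature and the number of renoising iterations are illustrative, not the paper's exact implementation.

import torch

@torch.no_grad()
def renoise_inversion_step(unet, z_t, t, t_next, alphas_cumprod,
                           cond=None, n_renoise=3):
    # One deterministic DDIM inversion step (t -> t_next, toward more noise),
    # refined ReNoise-style: the noise estimate is recomputed at the *target*
    # latent several times instead of trusting the prediction at the source step.
    a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
    z_next = z_t  # initial guess for the fixed-point iteration
    for _ in range(n_renoise):
        eps = unet(z_next, t_next, encoder_hidden_states=cond).sample
        x0 = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # predicted clean latent
        z_next = a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps
    return z_next

Applying this step along the sampled latents gives an estimated noising trajectory for each image pair, over which the smoothed preference log-likelihood above is maximized.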

Results

Samples generated by SmPO-SDXL exhibit strong visual appeal and align well with human preferences.

The advantages of SmPO-SDXL transfer seamlessly to conditional generation tasks.

BibTeX

@article{lu2025smoothed,
  title={Smoothed Preference Optimization via ReNoise Inversion for Aligning Diffusion Models with Varied Human Preferences},
  author={Lu, Yunhong and Wang, Qichao and Cao, Hengyuan and Xu, Xiaoyin and Zhang, Min},
  journal={arXiv preprint arXiv:2506.02698},
  year={2025}
}