CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization

Feize Wu1*, Yun Pang1*, Junyi Zhang1*, Lianyu Pang1*, Jian Yin1, Baoquan Zhao1, Qing Li2, Xudong Mao1+
1Sun Yat-sen University
2The Hong Kong Polytechnic University
*Equal contribution

+Corresponding author

Abstract

Recent advances in text-to-image personalization have enabled high-quality and controllable image synthesis for user-provided concepts. However, existing methods still struggle to balance identity preservation with text alignment. Our approach builds on the observation that generating prompt-aligned images requires a precise semantic understanding of the prompt, which in turn requires the CLIP text encoder to accurately process the interactions between the new concept and its surrounding context tokens. We therefore aim to embed the new concept properly into the input embedding space of the text encoder, allowing for seamless integration with existing tokens. We introduce Context Regularization (CoRe), which enhances the learning of the new concept's text embedding by regularizing its context tokens in the prompt. This is based on the insight that the text encoder can only produce appropriate output vectors for the context tokens if the new concept's text embedding is correctly learned. CoRe can be applied to arbitrary prompts without requiring the generation of corresponding images, which improves the generalization of the learned text embedding. Additionally, CoRe can serve as a test-time optimization technique to further enhance generations for specific prompts. Comprehensive experiments demonstrate that our method outperforms several baselines in both identity preservation and text alignment.

Motivation

[Figure: cosine similarity comparison (left) and cross-attention map visualization (right)]

For four similar prompts of the form ``{} in the desert'', we show the cosine similarity between the output embeddings of each token (left) and the cross-attention map of each token (right). Replacing ``dog'' with ``puppy'' or ``cat'' yields similar output embeddings and attention maps for the other tokens. In contrast, using the overfitted S* learned by Textual Inversion significantly alters the output embeddings and attention maps of the other tokens.
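The per-token comparison above can be reproduced with a few lines of PyTorch. The following is a minimal sketch, assuming the Hugging Face transformers CLIP text encoder (``openai/clip-vit-large-patch14''); the prompt pair and the slice of printed positions are illustrative, not taken from the paper.

import torch
import torch.nn.functional as F
from transformers import CLIPTextModel, CLIPTokenizer

MODEL_ID = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(MODEL_ID)
text_encoder = CLIPTextModel.from_pretrained(MODEL_ID).eval()

@torch.no_grad()
def token_embeddings(prompt):
    # Per-token output embeddings of the CLIP text encoder, shape (seq_len, dim).
    ids = tokenizer(prompt, padding="max_length",
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    return text_encoder(ids).last_hidden_state[0]

ref = token_embeddings("dog in the desert")
alt = token_embeddings("puppy in the desert")

# Cosine similarity at each token position; with a well-behaved first word,
# the context tokens ("in", "the", "desert") should stay close to 1.
print(F.cosine_similarity(ref, alt, dim=-1)[:8])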

Overview

[Figure: overview of the CoRe framework]

Our method enhances text embedding learning for S* by regularizing its context tokens. Specifically, we randomly select a regularization prompt (e.g., ``S* in the desert'') and a reference prompt (e.g., ``Dog in the desert'') from the prompt set. During training, the proposed context embedding regularization and context attention regularization are applied alongside the diffusion loss, encouraging the representations of the context tokens surrounding S* to align with those in the reference prompt. These regularization terms make the text embedding of S* more compatible with existing tokens; a minimal sketch of the combined objective follows below.
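The sketch below shows, in PyTorch, how the two regularizers could combine with the diffusion loss. The helpers encode(prompt) (per-token text-encoder outputs) and attn_maps(prompt) (per-token cross-attention maps), the index set context_idx, and the weights lambda_emb / lambda_attn are hypothetical placeholders, not the authors' code or values.

import torch.nn.functional as F

def core_regularizers(encode, attn_maps, reg_prompt, ref_prompt, context_idx):
    # reg_prompt: e.g. "S* in the desert" (contains the learnable embedding)
    # ref_prompt: e.g. "dog in the desert" (super-category word in place of S*)
    # context_idx: positions of the context tokens shared by both prompts
    emb_reg = F.mse_loss(encode(reg_prompt)[context_idx],
                         encode(ref_prompt)[context_idx].detach())
    attn_reg = F.mse_loss(attn_maps(reg_prompt)[context_idx],
                          attn_maps(ref_prompt)[context_idx].detach())
    return emb_reg, attn_reg

def total_loss(diffusion_loss, emb_reg, attn_reg,
               lambda_emb=0.01, lambda_attn=0.01):
    # Reconstruct the concept while keeping the context tokens' representations
    # close to those produced for the reference prompt.
    return diffusion_loss + lambda_emb * emb_reg + lambda_attn * attn_reg

Because only the embedding of S* is trainable, the reference-prompt branch carries no gradient; the detach() calls simply make that explicit.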

Comparisons to Baselines

Face Results

More Results

BibTeX


@article{wu2024core,
  title={CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization},
  author={Wu, Feize and Pang, Yun and Zhang, Junyi and Pang, Lianyu and Yin, Jian and Zhao, Baoquan and Li, Qing and Mao, Xudong},
  journal={arXiv preprint arXiv:2408.15914},
  year={2024}
}