The Consistency Critic: Correcting Inconsistencies in Generated Images via Reference-Guided Attentive Alignment

Ziheng Ouyang1 Yiren Song2 Yaoli Liu3 Shihao Zhu1
Qibin Hou1 Ming-Ming Cheng1 Mike Zheng Shou2

1VCIP, Nankai University    2Show Lab, National University of Singapore
3State Key Laboratory of CAD&CG, Zhejiang University

Paper | Code | HF Demo | HF Model | HF Dataset

Visual Results

Additional Visual Results Across Multiple Languages and Scenarios

Comparison Results

Abstract

Previous works have explored various customized generation tasks given a reference image, but they still struggle to reproduce fine-grained details consistently. In this paper, we aim to resolve the inconsistency problem in generated images through a reference-guided post-editing approach and present ImageCritic. We first construct a dataset of reference-degraded-target triplets obtained via VLM-based selection and explicit degradation, which effectively simulates the inaccuracies and inconsistencies commonly observed in existing generation models. Building on a thorough examination of the model's attention mechanisms and intrinsic representations, we then devise an attention alignment loss and a detail encoder to precisely rectify inconsistencies. ImageCritic can also be integrated into an agent framework that automatically detects inconsistencies and corrects them through multi-round and local editing in complex scenarios. Extensive experiments demonstrate that ImageCritic effectively resolves detail-related issues across various customized generation scenarios, providing significant improvements over existing methods.
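
To make the idea of an attention alignment loss concrete, below is a minimal, hypothetical PyTorch sketch. The function name, tensor shapes, and the choice of an L1 distance over re-normalized attention maps are assumptions for illustration only and do not reproduce the paper's exact formulation or the detail encoder.

import torch

def attention_alignment_loss(student_attn, teacher_attn, eps=1e-8):
    # Illustrative attention alignment loss (not the paper's exact formulation).
    # Both inputs are attention probability maps of shape
    # (batch, heads, query_tokens, key_tokens), e.g. taken from the editing
    # branch (student_attn) and from a reference/ground-truth pass (teacher_attn).
    # The loss pulls the student's attention over reference tokens toward the teacher's.
    #
    # Re-normalize over the key dimension in case the maps were sliced to the
    # reference-token columns only.
    student = student_attn / (student_attn.sum(dim=-1, keepdim=True) + eps)
    teacher = teacher_attn / (teacher_attn.sum(dim=-1, keepdim=True) + eps)
    # Mean absolute difference between the two attention distributions.
    return (student - teacher).abs().mean()

# Toy usage with random maps (batch=2, heads=8, 64 query tokens, 77 key tokens).
student = torch.rand(2, 8, 64, 77, requires_grad=True)
teacher = torch.rand(2, 8, 64, 77)
loss = attention_alignment_loss(student, teacher)
loss.backward()  # in practice this term would be added to the total training objective

In a real training setup, such maps would be hooked out of the diffusion model's attention layers rather than sampled randomly; the sketch only shows the shape of the alignment objective.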

Method

Dataset Curation

Additional Curation Details