UniTok: A Unified Tokenizer for
Visual Generation and Understanding

Chuofan Ma1,2,   Yi Jiang2†,   Junfeng Wu2,3,  
Jihan Yang1,   Xin Yu1,   Zehuan Yuan2*,   Bingyue Peng2,   Xiaojuan Qi1†*
1HKU,    2ByteDance,    3HUST

Abstract

The representation disparity between visual generation and understanding poses a critical challenge to integrating these capabilities into a single framework. To bridge this gap, we introduce UniTok, a discrete visual tokenizer that encodes fine-grained details for generation while also capturing high-level semantics for understanding. Although recent studies have shown that these objectives can induce loss conflicts during training, we reveal that the underlying bottleneck stems from the limited representational capacity of discrete tokens. We address this by introducing multi-codebook quantization, which splits vector quantization across several independent sub-codebooks to expand the latent feature space, while avoiding the training instability caused by overly large codebooks. Our method significantly raises the upper limit of unified discrete tokenizers, allowing them to match or even surpass domain-specific continuous tokenizers. For instance, UniTok achieves a remarkable rFID of 0.38 (versus 0.87 for SD-VAE) and a zero-shot accuracy of 78.6% (versus 76.2% for CLIP) on ImageNet. The code is available at https://github.com/FoundationVision/UniTok.



Method

Quantization Bottleneck. A promising paradigm for building unified tokenizers is to combine CLIP and VQVAE training. However, this approach typically results in slow convergence and suboptimal performance. While previous studies tend to attribute this to conflicts between divergent training objectives, we show that the bottleneck primarily stems from the quantization process. Specifically, token factorization and discretization, which are essential for VQVAE, significantly compromise the expressiveness of tokens.

(a): The unified tokenizer training paradigm, which integrates CLIP supervision into VQVAE training. (b): Roadmap to build UniTok. The blue bars illustrate the progressive changes in VQA performance from a CLIP tokenizer to a unified tokenizer, while the purple bars outline the improvements introduced in UniTok.
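
To make this paradigm concrete, the PyTorch sketch below outlines a combined training objective that adds CLIP-style contrastive supervision on top of the VQVAE reconstruction and quantization losses. The function signature and loss weights are illustrative assumptions rather than the paper's exact recipe, and the perceptual and adversarial terms common in VQGAN-style tokenizers are omitted for brevity.

import torch
import torch.nn.functional as F

def unified_tokenizer_loss(recon, image, img_feat, txt_feat, vq_loss,
                           logit_scale, recon_weight=1.0, clip_weight=1.0):
    # VQVAE branch: pixel reconstruction plus the codebook/commitment term (vq_loss).
    recon_loss = F.mse_loss(recon, image)
    # CLIP branch: symmetric InfoNCE over normalized image and text embeddings.
    img_feat = F.normalize(img_feat, dim=-1)
    txt_feat = F.normalize(txt_feat, dim=-1)
    logits = logit_scale * img_feat @ txt_feat.t()
    labels = torch.arange(logits.size(0), device=logits.device)
    clip_loss = (F.cross_entropy(logits, labels) +
                 F.cross_entropy(logits.t(), labels)) / 2
    # Weighted sum of the two supervision signals (weights are placeholders).
    return recon_weight * (recon_loss + vq_loss) + clip_weight * clip_loss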

Multi-Codebook Quantization. In light of this bottleneck, we seek to expand the codebook size and latent code dimension to better approximate the continuous feature space. However, naively doing so results in low codebook utilization and diminished performance gains. Drawing inspiration from the divide-and-conquer principle, we introduce multi-codebook quantization, which divides a visual token into multiple chunks and discretizes each with an independent sub-codebook. This effectively scales up the latent space with the number of sub-codebooks, while circumventing the problems associated with large codebooks.
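
As a rough illustration, the PyTorch sketch below implements multi-codebook quantization with assumed hyperparameters (token dimension, number of sub-codebooks, codebook size); it is not the official UniTok code. Each token is split into chunks, each chunk is matched against its own sub-codebook via nearest-neighbor lookup, and a straight-through estimator passes gradients back to the encoder (the commitment loss is omitted for brevity).

import torch
import torch.nn as nn

class MultiCodebookQuantizer(nn.Module):
    def __init__(self, token_dim=64, num_codebooks=8, codebook_size=4096):
        super().__init__()
        assert token_dim % num_codebooks == 0
        self.num_codebooks = num_codebooks
        self.chunk_dim = token_dim // num_codebooks
        # One independent sub-codebook per chunk.
        self.codebooks = nn.ModuleList(
            [nn.Embedding(codebook_size, self.chunk_dim) for _ in range(num_codebooks)]
        )

    def forward(self, z):
        # z: (batch, num_tokens, token_dim) continuous features from the encoder.
        quantized, indices = [], []
        for chunk, codebook in zip(z.chunk(self.num_codebooks, dim=-1), self.codebooks):
            # Nearest-neighbor lookup within this sub-codebook.
            flat = chunk.reshape(-1, self.chunk_dim)
            idx = torch.cdist(flat, codebook.weight).argmin(dim=-1)
            idx = idx.reshape(chunk.shape[:-1])
            q = codebook(idx)
            # Straight-through estimator so gradients flow to the encoder.
            quantized.append(chunk + (q - chunk).detach())
            indices.append(idx)
        # Returns quantized tokens and per-chunk code indices.
        return torch.cat(quantized, dim=-1), torch.stack(indices, dim=-1)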

Unified MLLM. Building upon UniTok, we construct a unified MLLM capable of both multimodal generation and understanding. Specifically, we leverage the MLLM framework introduced in Liquid, which models vision and language sequences with a universal next-token prediction loss. However, instead of learning the visual codebook from scratch, we reuse the code embeddings of UniTok by projecting them into the MLLM token space with an MLP projector. Our model sets a new state of the art among unified autoregressive MLLMs on both visual understanding and generation benchmarks.
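
Building on the hypothetical MultiCodebookQuantizer above, the sketch below shows one way the tokenizer's code embeddings could be reused and mapped into the MLLM token space; the two-layer MLP design and the llm_dim value are assumptions, not the released Liquid/UniTok implementation.

import torch
import torch.nn as nn

class VisualTokenProjector(nn.Module):
    def __init__(self, quantizer, llm_dim=4096):
        super().__init__()
        self.quantizer = quantizer            # trained tokenizer, kept frozen
        token_dim = quantizer.chunk_dim * quantizer.num_codebooks
        # MLP projector from the tokenizer embedding space to the LLM space.
        self.proj = nn.Sequential(
            nn.Linear(token_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, indices):
        # indices: (batch, num_tokens, num_codebooks) discrete codes from UniTok.
        chunks = [
            codebook(indices[..., i])
            for i, codebook in enumerate(self.quantizer.codebooks)
        ]
        # Reuse the code embeddings instead of learning a new visual vocabulary.
        code_embeds = torch.cat(chunks, dim=-1)
        return self.proj(code_embeds)         # visual tokens in the MLLM token space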

Results

Tokenizer Performance

Understanding Performance

Generation Performance

Ablation Study

BibTeX

@article{unitok,
  title={UniTok: A Unified Tokenizer for Visual Generation and Understanding},
  author={Ma, Chuofan and Jiang, Yi and Wu, Junfeng and Yang, Jihan and Yu, Xin and Yuan, Zehuan and Peng, Bingyue and Qi, Xiaojuan},
  journal={arXiv preprint arXiv:2502.20321},
  year={2025}
}