Quantization Bottleneck. A promising paradigm for building unified tokenizers is to combine CLIP and VQVAE training. However, this approach typically suffers from slow convergence and suboptimal performance. While previous studies tend to attribute this to conflicts between the divergent training objectives, we show that the bottleneck primarily stems from the quantization process itself: token factorization and discretization, both essential to VQVAE, significantly compromise the expressiveness of tokens.
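The two lossy steps named above can be made concrete with a minimal numpy sketch. All dimensions and names here are illustrative assumptions, not the actual UniTok configuration: a high-dimensional continuous token is first factorized (projected) to a low-dimensional code space, then discretized by snapping to the nearest codebook entry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a CLIP-style token is wide and continuous; the VQ
# path factorizes it down to d_code dims, then discretizes it.
d_clip, d_code, vocab = 768, 16, 4096

proj_down = rng.standard_normal((d_clip, d_code)) / np.sqrt(d_clip)  # factorization
codebook = rng.standard_normal((vocab, d_code))                      # discretization

def quantize(tokens):
    """Factorize then discretize; both steps discard information."""
    z = tokens @ proj_down                                   # (n, d_code)
    dists = ((z[:, None, :] - codebook[None]) ** 2).sum(-1)  # (n, vocab)
    idx = dists.argmin(-1)                                   # nearest code id
    return codebook[idx], idx

tokens = rng.standard_normal((8, d_clip))
z_q, idx = quantize(tokens)  # z_q: (8, 16), idx: (8,)
```

Each token is reduced from 768 continuous values to one of 4096 fixed 16-dim vectors, which is the expressiveness loss the section refers to.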
(a): The unified tokenizer training paradigm, which integrates CLIP supervision into VQVAE training. (b): Roadmap for building UniTok. The blue bars illustrate the progressive changes in VQA performance from the CLIP tokenizer to the unified tokenizer, while the purple bars outline the improvements achieved by UniTok.
Multi-Codebook Quantization. In light of this bottleneck, we seek to expand the codebook size and latent code dimension to better approximate the continuous feature space. However, naively doing so leads to low codebook utilization and diminishing performance gains. Drawing inspiration from the divide-and-conquer algorithm, we introduce multi-codebook quantization, which divides a visual token into multiple chunks and discretizes each with an independent sub-codebook. This effectively scales the latent space exponentially with the number of sub-codebooks, while circumventing the problems associated with a single large codebook.
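A minimal sketch of the chunk-and-quantize idea, under assumed sizes (the actual UniTok chunk count and codebook sizes may differ): each token is split into k chunks, each chunk is looked up in its own sub-codebook, and the effective vocabulary grows to vocab**k combinations while each individual lookup stays small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: d-dim tokens, k chunks, one sub-codebook per chunk.
d, k, vocab = 16, 4, 256
chunk = d // k
sub_codebooks = rng.standard_normal((k, vocab, chunk))  # k independent codebooks

def multi_codebook_quantize(tokens):
    """Quantize each chunk of every token against its own sub-codebook."""
    n = tokens.shape[0]
    chunks = tokens.reshape(n, k, chunk)
    out = np.empty_like(chunks)
    idx = np.empty((n, k), dtype=np.int64)
    for i in range(k):
        d2 = ((chunks[:, i, None, :] - sub_codebooks[i][None]) ** 2).sum(-1)
        idx[:, i] = d2.argmin(-1)               # nearest entry in sub-codebook i
        out[:, i] = sub_codebooks[i][idx[:, i]]
    return out.reshape(n, d), idx

tokens = rng.standard_normal((8, d))
z_q, idx = multi_codebook_quantize(tokens)  # idx holds k code ids per token
```

With these toy numbers the combined latent space already has 256**4 ≈ 4.3 billion configurations, yet each nearest-neighbor search runs over only 256 entries, which is why utilization stays high.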
Unified MLLM. Built on UniTok, we construct a unified MLLM capable of both multimodal generation and understanding. Specifically, we adopt the MLLM framework introduced in Liquid, which models vision and language sequences with a universal next-token prediction loss. However, instead of learning the visual codebook from scratch, we reuse the code embeddings of UniTok by projecting them into the MLLM token space with an MLP projector. Our model sets a new state of the art among unified autoregressive MLLMs on both visual understanding and generation benchmarks.
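The embedding-reuse step can be sketched as follows. All names and sizes are illustrative assumptions: a small two-layer MLP maps the tokenizer's code embeddings into the MLLM's token space, so the projected vectors can serve as the visual rows of the LLM embedding table rather than being trained from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: code embeddings from the tokenizer, MLLM hidden width.
vocab, d_code, d_mllm = 1024, 16, 64
code_embed = rng.standard_normal((vocab, d_code))  # frozen tokenizer embeddings
w1 = rng.standard_normal((d_code, d_mllm)) / np.sqrt(d_code)
w2 = rng.standard_normal((d_mllm, d_mllm)) / np.sqrt(d_mllm)

def project_codes(embeds):
    """Two-layer MLP (tanh-approximated GELU) into the MLLM token space."""
    h = embeds @ w1
    h = 0.5 * h * (1 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2

# Projected embeddings for the visual vocabulary, ready to extend the
# LLM's embedding table for next-token prediction over image codes.
visual_vocab = project_codes(code_embed)  # (vocab, d_mllm)
```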
@article{unitok,
  title={UniTok: A Unified Tokenizer for Visual Generation and Understanding},
  author={Ma, Chuofan and Jiang, Yi and Wu, Junfeng and Yang, Jihan and Yu, Xin and Yuan, Zehuan and Peng, Bingyue and Qi, Xiaojuan},
  journal={arXiv preprint arXiv:2502.20321},
  year={2025}
}