We present Liquid, an auto-regressive generation paradigm that seamlessly integrates visual comprehension and generation by tokenizing images into discrete codes and learning these code embeddings alongside text tokens within a shared feature space for both vision and language. Unlike previous multimodal large language models (MLLMs), Liquid achieves this integration with a single large language model (LLM), eliminating the need for external pretrained visual embeddings such as CLIP. For the first time, Liquid uncovers a scaling law: the performance drop unavoidably caused by the unified training of visual and language tasks diminishes as the model size increases. Furthermore, the unified token space enables visual generation and comprehension tasks to mutually enhance each other, effectively removing the typical interference seen in earlier models. We show that existing LLMs can serve as strong foundations for Liquid, saving 100× in training costs while outperforming Chameleon in multimodal capabilities and maintaining language performance comparable to mainstream LLMs like LLAMA2. Liquid also outperforms models like SD v2.1 and SD-XL (FID of 5.47 on MJHQ-30K), excelling in both vision-language and text-only tasks. This work demonstrates that LLMs such as Qwen2.5 and GEMMA2 are powerful multimodal generators, offering a scalable solution for enhancing both vision-language understanding and generation.
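The core recipe can be sketched with off-the-shelf tooling. Below is a minimal, hypothetical illustration (not the released Liquid code) of how a pretrained LLM's vocabulary can be extended with discrete image codes so that text and image tokens share one embedding table and one next-token objective; the backbone name, codebook size, and token naming scheme are assumptions for illustration only.

```python
# Minimal sketch, not the released Liquid code: extend a pretrained LLM's vocabulary
# with discrete image codes so text and image tokens share one embedding table.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

llm_name = "google/gemma-2-2b"   # any pretrained LLM backbone (assumption)
num_image_codes = 8192           # VQ image-tokenizer codebook size (assumption)

tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModelForCausalLM.from_pretrained(llm_name)

# One new token per visual code; the codes live in the same embedding space as
# text tokens, so no external vision encoder (e.g. CLIP) is needed.
tokenizer.add_tokens([f"<img_{i}>" for i in range(num_image_codes)])
model.resize_token_embeddings(len(tokenizer))

# A text-to-image training sample is a single sequence: prompt tokens followed by
# the image's VQ codes, optimized with the ordinary next-token prediction loss.
prompt_ids = tokenizer("A photo of a red fox", return_tensors="pt").input_ids
vq_codes = torch.randint(0, num_image_codes, (1, 16))              # placeholder codes
image_ids = vq_codes + tokenizer.convert_tokens_to_ids("<img_0>")  # map codes to new ids
input_ids = torch.cat([prompt_ids, image_ids], dim=1)

loss = model(input_ids=input_ids, labels=input_ids).loss
print(loss)  # one cross-entropy objective over text and image tokens alike
```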
Compared with other auto-regressive methods, Liquid achieves a better overall score on GenAI-Bench under both basic and advanced prompts. This suggests that the images generated by Liquid align better semantically with the input text prompts. On MJHQ-30K, Liquid not only achieves a lower FID than all other auto-regressive methods but also surpasses most well-known diffusion models, indicating that LLMs are also capable of generating high-quality images.
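For reference, FID on MJHQ-30K is computed by comparing generated images against the benchmark's reference set. The sketch below shows one way to do this with torchmetrics; the directory layout, image count, and preprocessing are assumptions, and this is not necessarily the exact evaluation pipeline used in the paper.

```python
# Hedged sketch of an FID evaluation loop (assumed setup, not the paper's exact
# pipeline): compare generated images against MJHQ-30K reference images.
from pathlib import Path

import torch
from PIL import Image
from torchmetrics.image.fid import FrechetInceptionDistance
from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((299, 299)), transforms.PILToTensor()])

def load_batch(folder, limit=64):
    """Load up to `limit` images from a folder as a uint8 [N, 3, H, W] tensor."""
    paths = sorted(Path(folder).glob("*.png"))[:limit]
    return torch.stack([to_tensor(Image.open(p).convert("RGB")) for p in paths])

fid = FrechetInceptionDistance(feature=2048)
fid.update(load_batch("mjhq30k/reference"), real=True)   # hypothetical paths
fid.update(load_batch("outputs/liquid"), real=False)
print("FID:", fid.compute().item())
```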
“Visual Gen.” refers to the data used for training text-to-image generation, while “Visual Und.” refers to the data used for visual understanding. Compared to the baseline, adding more visual understanding data enhances the visual generation capability, improving the semantic consistency between the generated content and the prompt. Conversely, increasing the visual generation data similarly enhances the model's visual understanding ability. This indicates that when the tokens for visual generation and understanding are unified, the two tasks share a common optimization objective and can mutually enhance each other.
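One way to see why the two data types can share an objective is that both reduce to a (condition, target) token sequence trained with the same masked causal cross-entropy. The sketch below is illustrative only; the placeholder tensors and helper names are assumptions, not the paper's data pipeline.

```python
# Illustrative sketch (placeholder tensors, not the paper's pipeline): text-to-image
# and image-understanding samples both become (condition, target) token sequences
# and share the same masked next-token cross-entropy loss.
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # label value excluded from cross-entropy

def build_sample(condition_ids, target_ids):
    """Only target tokens contribute to the loss; the condition is pure context."""
    input_ids = torch.cat([condition_ids, target_ids])
    labels = torch.cat([torch.full_like(condition_ids, IGNORE_INDEX), target_ids])
    return input_ids, labels

prompt_ids = torch.tensor([11, 12, 13])         # placeholder text prompt tokens
image_code_ids = torch.tensor([901, 902, 903])  # placeholder VQ image codes
question_ids = torch.tensor([21, 22])           # placeholder question tokens
answer_ids = torch.tensor([31, 32])             # placeholder answer tokens

# Visual generation: text prompt -> image codes.
gen_inputs, gen_labels = build_sample(prompt_ids, image_code_ids)
# Visual understanding: image codes + question -> text answer.
und_inputs, und_labels = build_sample(torch.cat([image_code_ids, question_ids]), answer_ids)

def lm_loss(logits, labels):
    """Standard causal LM loss: predict token t+1 from tokens up to t."""
    return F.cross_entropy(logits[:-1], labels[1:], ignore_index=IGNORE_INDEX)
```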
We explore the visual generation performance of LLMs ranging from 0.5B to 32B parameters after mixed training on language data and text-to-image data. As shown in the figures below, as model size and training iterations increase, the validation loss decreases smoothly while token accuracy and VQA score consistently increase.
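The smooth trend of validation loss against model size is the kind of relationship typically summarized by a power-law fit. The sketch below shows how such a curve could be fitted; the loss values are made-up placeholders, not numbers from the paper.

```python
# Illustrative only: fit a power-law scaling curve L(N) = a * N^(-b) + c to
# validation loss versus model size. The loss values are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit

model_sizes = np.array([0.5e9, 1e9, 2e9, 9e9, 32e9])  # parameter counts
val_loss = np.array([3.10, 2.95, 2.83, 2.66, 2.55])   # placeholder losses

def power_law(n, a, b, c):
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, model_sizes, val_loss, p0=[20.0, 0.15, 2.0], maxfev=20000)
print(f"fitted exponent b = {b:.3f}; loss decreases smoothly as models grow")
```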
Larger models ultimately produce more robust visual generation results. Samples are drawn from Liquid models of four different sizes (0.5B, 1B, 2B, 9B) at three different training steps (5K, 15K, 40K).
To validate whether acquiring image understanding and generation capabilities has any impact on the original language abilities of the LLMs, we report overall zero-shot performance across a suite of popular benchmarks.
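Zero-shot language evaluation of this kind is commonly run with EleutherAI's lm-evaluation-harness; the sketch below shows one such setup. The checkpoint path and task list are illustrative assumptions, and this is not necessarily the harness or configuration used in the paper.

```python
# Hedged sketch of a zero-shot language evaluation with lm-evaluation-harness
# (illustrative checkpoint path and task list; not necessarily the paper's setup).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/liquid-checkpoint",  # hypothetical checkpoint path
    tasks=["hellaswag", "arc_easy", "piqa", "winogrande"],
    num_fewshot=0,  # zero-shot
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```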
Mixed training results in higher validation loss on visual generation tasks for models of every size. However, its impact on the VQA score diminishes as model size increases.
@article{wu2024liquid,
  title={Liquid: Language Models are Scalable and Unified Multi-modal Generators},
  author={Wu, Junfeng and Jiang, Yi and Ma, Chuofan and Liu, Yuliang and Zhao, Hengshuang and Yuan, Zehuan and Bai, Song and Bai, Xiang},
  journal={arXiv preprint arXiv:2412.04332},
  year={2024}
}