Text-driven image generation using diffusion models has recently gained significant attention. To enable more flexible image manipulation and editing, recent research has expanded from single-image generation to transparent layer generation and multi-layer composition. However, existing approaches often fail to model multi-layer structure thoroughly, leading to inconsistent inter-layer interactions such as occlusion relationships, spatial layout, and shadowing. In this paper, we introduce DreamLayer, a novel framework that enables coherent text-driven generation of multiple image layers by explicitly modeling the relationship between transparent foreground layers and the background layer. DreamLayer incorporates three key components: Context-Aware Cross-Attention (CACA) for global-local information exchange, Layer-Shared Self-Attention (LSSA) for establishing robust inter-layer connections, and Information Retained Harmonization (IRH) for refining fusion details at the latent level. By leveraging a coherent full-image context, DreamLayer builds inter-layer connections through attention mechanisms and applies a harmonization step to achieve seamless layer fusion. To facilitate research in multi-layer generation, we construct a high-quality, diverse multi-layer dataset of 400k samples. Extensive experiments and user studies demonstrate that DreamLayer generates more coherent and well-aligned layers, with broad applicability, including latent-space image editing and image-to-layer decomposition.
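The core idea behind layer-shared self-attention — letting every layer's tokens attend over a pooled token set drawn from all layers, so each layer sees a shared full-image context — can be illustrated with a minimal sketch. This is plain Python with toy token vectors and is only an assumption-laden illustration of the general mechanism, not the paper's implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of token vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        # each output token is a convex combination of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def layer_shared_self_attention(layer_tokens):
    """Each layer's queries attend over the keys/values of ALL layers,
    so foreground and background layers exchange information through a
    shared, full-image token context (a sketch of the LSSA idea)."""
    shared = [t for layer in layer_tokens for t in layer]  # pool K/V across layers
    return [attention(layer, shared, shared) for layer in layer_tokens]
```

The key design point the sketch shows is that queries stay per-layer while keys and values are concatenated across layers, which is what couples the layers' features during denoising.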
DreamLayer handles multiple tasks: (a) text-to-layer generation, (b) image-to-layer decomposition, and (c) latent-space editing.
The DreamLayer Framework for Multi-Layer Image Generation.
Overview of the Attention Mechanism in DreamLayer.
The pipeline of multi-layer data preparation. We use GPT-4 to process a randomly selected base prompt, structuring it into a background prompt and multiple foreground prompts. After generating the image with a diffusion model, we apply the open-set detection model GroundingDINO to locate the foreground objects and use the DepthAnything model to obtain a depth map. Based on the depth order, we sequentially extract the foreground layers and fill in the missing areas with an inpainting model.
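The depth-ordered extraction step of this pipeline can be sketched in a few lines. The prompt-splitting heuristic and the dictionary field names below are illustrative assumptions, not the actual GPT-4, GroundingDINO, or DepthAnything interfaces:

```python
def split_prompt(base_prompt):
    """Hypothetical stand-in for the GPT-4 structuring step: split a
    base prompt into a background prompt and foreground prompts.
    A naive ' with ' / ' and ' split, purely for illustration."""
    if " with " in base_prompt:
        background, rest = base_prompt.split(" with ", 1)
        return background, [p.strip() for p in rest.split(" and ")]
    return base_prompt, []

def peel_layers(detections):
    """Order detected foreground objects front-to-back by mean depth
    (smaller = nearer the camera) and peel them off one by one; in the
    real pipeline each removal leaves an occluded region that an
    inpainting model fills before the next extraction."""
    ordered = sorted(detections, key=lambda d: d["mean_depth"])
    return [d["label"] for d in ordered]
```

For example, `split_prompt("a beach with a surfboard and a dog")` yields the background prompt `"a beach"` and the foreground prompts `["a surfboard", "a dog"]`, and `peel_layers` then decides which detected object is extracted first.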
Multi-layer Dataset: Each image consists of a multi-layer structure, including a background and multiple foreground objects, with each foreground object represented as a transparent layer.
Visualization of two-layer images in our multi-layer dataset.
Visualization of three-layer images in our multi-layer dataset.
Visualization of four-layer images in our multi-layer dataset.
Qualitative Results of two-layer images generated by DreamLayer.
Qualitative Results of three-layer images generated by DreamLayer.
Qualitative Results of four-layer images generated by DreamLayer.
@article{huang_dreamlayer,
title = {DreamLayer: Simultaneous Multi-Layer Generation via Diffusion Model},
author = {Junjia Huang and Pengxiang Yan and Jinhang Cai and Jiyang Liu and Zhao Wang and Yitong Wang and Xinglong Wu and Guanbin Li},
journal = {arXiv preprint arXiv:2503.12838},
year = {2025}
}