Multimodal AI

SkyReels-V4: Multi-modal Video-Audio Generation, Inpainting and Editing model

Guibin Chen, Dixuan Lin, Jiangping Yang, Youqiang Zhang, Zhengcong Fei, Debang Li, Sheng Chen, Chaofeng Ao, Nuo Pang, Yiming Wang, Yikun Dou, Zheng Chen, Mingyuan Fan, Tuanhui Li, Mingshan Chang, Hao Zhang, Xiaopeng Sun, Jingtao Xu, Yuqiang Xie, Jiahua Wang, Zhiheng Xu, Weiming Xiong, Yuzhe Jin, Baoxuan Gu, Binjie Mao, Yunjie Yu, Jujie He, Yuhao Feng, Shiwen Tu, Chaojie Wang, Rui Yan, Wei Shen, Jingchen Wu, Peng Zhao, Xuanyue Zhong, Zhuangzhuang Liu, Kaifei Wang, Fuxiang Zhang, Weikai Xu, Wenyan Liu, Binglu Zhang, Yu Shen, Tianhui Xiong, Bin Peng, Liang Zeng, Xuchen Song, Haoxiang Guo, Peiyu Wang, Yahui Zhou
Published
February 25, 2026
Authors
49
Word Count
9,436

SkyReels-V4 generates perfectly synchronized video and audio with generation, inpainting, and editing capabilities.

Abstract

SkyReels-V4 is a unified multi-modal video foundation model for joint video-audio generation, inpainting, and editing. The model adopts a dual-stream Multimodal Diffusion Transformer (MMDiT) architecture, in which one branch synthesizes video and the other generates temporally aligned audio, while both share a powerful text encoder based on a Multimodal Large Language Model (MMLM). SkyReels-V4 accepts rich multi-modal instructions, including text, images, video clips, masks, and audio references. By combining the MMLM's multi-modal instruction-following capability with in-context learning in the video-branch MMDiT, the model can inject fine-grained visual guidance under complex conditioning, while the audio-branch MMDiT simultaneously leverages audio references to guide sound generation. On the video side, we adopt a channel-concatenation formulation that unifies a wide range of inpainting-style tasks, such as image-to-video, video extension, and video editing, under a single interface, and naturally extends to vision-referenced inpainting and editing via multi-modal prompts. SkyReels-V4 supports up to 1080p resolution, 32 FPS, and 15-second duration, enabling high-fidelity, multi-shot, cinema-level video generation with synchronized audio. To make such high-resolution, long-duration generation computationally feasible, we introduce an efficiency strategy: joint generation of low-resolution full sequences and high-resolution keyframes, followed by dedicated super-resolution and frame-interpolation models. To our knowledge, SkyReels-V4 is the first video foundation model that simultaneously supports multi-modal input, joint video-audio generation, and a unified treatment of generation, inpainting, and editing, while maintaining strong efficiency and quality at cinematic resolutions and durations.
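The channel-concatenation formulation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: all shapes, the channel layout, and the mask convention are assumptions. The idea is that tasks like image-to-video, extension, and editing differ only in which latent frames are conditioned, so one input format covers them all.

```python
import numpy as np

def build_inpainting_input(noisy_latent, cond_latent, mask):
    """Channel-concatenation conditioning (hypothetical shapes).

    noisy_latent: (T, C, H, W) latent sequence being denoised
    cond_latent:  (T, C, H, W) reference latent (zeros where no condition)
    mask:         (T, 1, H, W) 1 = conditioned region, 0 = to be generated
    Returns a (T, 2C+1, H, W) tensor fed to the video branch.
    """
    return np.concatenate([noisy_latent, cond_latent * mask, mask], axis=1)

# Image-to-video as a special case: only latent frame 0 is conditioned.
T, C, H, W = 4, 8, 16, 16
noisy = np.random.randn(T, C, H, W).astype(np.float32)
cond = np.zeros_like(noisy)
cond[0] = np.random.randn(C, H, W)           # encoded first frame
mask = np.zeros((T, 1, H, W), np.float32)
mask[0] = 1.0                                # condition on frame 0 only

x = build_inpainting_input(noisy, cond, mask)
print(x.shape)  # (4, 17, 16, 16)
```

Video extension or editing would use the same interface with a different mask pattern (e.g. conditioning on leading frames, or on unmasked spatial regions).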

Key Takeaways

  1. SkyReels-V4 unifies video-audio generation, inpainting, and editing in a single model framework.

  2. Bidirectional cross-attention between the video and audio streams produces synchronized multimodal outputs.

  3. Rotary Positional Embeddings with per-modality scaling factors align timestamps across the different token rates of the video and audio streams.
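The bidirectional cross-attention idea in the takeaways above can be sketched in a few lines. This is a simplified single-head illustration under assumed dimensions, omitting the projections, normalization, and gating a real MMDiT block would have: each stream queries the other and folds the result into its own residual path.

```python
import numpy as np

def attend(q, kv):
    """Single-head scaled dot-product attention (projections omitted for brevity)."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ kv

# Hypothetical token streams inside one joint block.
rng = np.random.default_rng(0)
video = rng.standard_normal((16, 64))   # 16 video tokens, dim 64
audio = rng.standard_normal((40, 64))   # 40 audio tokens, dim 64

# Bidirectional exchange: video attends to audio, then audio attends
# to the updated video, so information flows both ways per block.
video = video + attend(video, audio)
audio = audio + attend(audio, video)
print(video.shape, audio.shape)  # (16, 64) (40, 64)
```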
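The RoPE-scaling point above can be made concrete with a small sketch. The token rates (8 video latents/s, 32 audio tokens/s) and the rotary dimension are illustrative assumptions, not the paper's values: the mechanism is simply to rescale each modality's integer positions onto a shared time axis before computing rotary angles, so tokens at the same timestamp get the same phase.

```python
import numpy as np

def rope_angles(positions, dim=8, base=10000.0):
    """1D rotary angles for (possibly fractional) positions: (len, dim/2)."""
    freqs = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    return np.outer(positions, freqs)

# Hypothetical token rates: video latents at 8/s, audio tokens at 32/s.
video_rate, audio_rate = 8.0, 32.0
video_idx = np.arange(16)                 # 2 s of video tokens
audio_idx = np.arange(64)                 # 2 s of audio tokens

# Scale indices into seconds before applying RoPE, so positions from
# both modalities live on one temporal axis.
video_ang = rope_angles(video_idx / video_rate)
audio_ang = rope_angles(audio_idx / audio_rate)

# Video token 8 and audio token 32 both sit at t = 1.0 s:
assert np.allclose(video_ang[8], audio_ang[32])
```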

Limitations

  • Previous commercial models like Veo 3.1 and Sora 2 couldn't unify generation, inpainting, and editing tasks.

  • Earlier systems required separate models for different tasks or couldn't generate audio from all input types.

Keywords

Multimodal Diffusion Transformer, MMDiT, Multimodal Large Language Models, MMLM, video-audio generation, video inpainting, video editing, channel concatenation formulation, joint generation, super-resolution, frame interpolation
