MLA 027 AI Video End-to-End Workflow
How to maintain character consistency, style consistency, and other forms of continuity in an AI video. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content. Indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control, while professional studios gain full control with a layered ComfyUI pipeline that outputs multi-layer EXR files for standard VFX compositing.
Links
• Notes and resources at ocdevel.com/mlg/mla-27
• Try a walking desk - stay healthy & sharp while you learn & code
• Descript - my favorite AI audio/video editor
AI Audio Tool Selection
• Music: Use Suno for complete songs or Udio for high-quality components for professional editing.
• Sound Effects: Use ElevenLabs' SFX for integrated podcast production or SFX Engine for large, licensed asset libraries for games and film.
• Voice: ElevenLabs gives the most realistic voice output. Murf.ai offers an all-in-one studio for marketing, and Play.ht has a low-latency API for developers.
• Open-Source TTS: For local use, StyleTTS 2 generates human-level speech, Coqui's XTTS-v2 is best for voice cloning from minimal input, and Piper TTS is a fast, CPU-friendly option. (A minimal XTTS-v2 cloning sketch follows this list.)
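The open-source route can be scripted entirely locally. A minimal voice-cloning sketch with Coqui's XTTS-v2, assuming the Coqui `TTS` Python package is installed and that "reference.wav" is a placeholder path to a short clip of the target voice:

```python
# Minimal local voice-cloning sketch with Coqui XTTS-v2.
# Assumes `pip install TTS` and a GPU (or a patient CPU);
# "reference.wav" is a hypothetical 6-30 second sample of the target voice.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Welcome back to the show. Today we cover AI video workflows.",
    speaker_wav="reference.wav",   # voice to clone
    language="en",
    file_path="narration.wav",     # output file
)
```

The same object can be reused for many lines, which is why XTTS-v2 works well for batch-generating narration from a script.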
I. Prosumer Workflow: Viral Video
Goal: Rapidly produce branded, short-form video for social media. This method bypasses Veo 3's weaker native "Extend" feature.
Toolchain
• Image Concept: GPT-4o (API: GPT-Image-1) for its strong prompt adherence, text rendering, and conversational refinement.
• Video Generation: Google Veo 3 for high single-shot quality and integrated ambient audio.
• Soundtrack: Udio for creating unique, "viral-style" music.
• Assembly: CapCut for its standard short-form editing features.
Workflow
1. Create Character Sheet (GPT-4o): Generate a primary character image with a detailed "locking" prompt, then use conversational follow-ups to create variations (poses, expressions) for visual consistency. (A hedged API sketch follows this list.)
2. Generate Video (Veo 3): Use "High-Quality Chaining."
   • Clip 1: Generate an 8s clip from a character sheet image.
   • Extract Final Frame: Save the last frame of Clip 1 (see the frame-extraction sketch after this list).
   • Clip 2: Use the extracted frame as the image input for the next clip, with a "this then that" prompt to continue the action. Repeat as needed.
3. Create Music (Udio): Use Manual Mode with structured prompts ([Genre: ...], [Mood: ...]) to generate and extend a music track.
4. Final Edit (CapCut): Assemble clips, layer the Udio track over Veo's ambient audio, add text, and use "Auto Captions." Export in 9:16.
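Step 1 can be scripted once the "locking" prompt is settled. A minimal sketch against the OpenAI Images API, assuming the `openai` Python package, an `OPENAI_API_KEY` in the environment, and the gpt-image-1 model; the prompt text and file names are placeholders, and the conversational follow-ups for pose/expression variations would still happen in chat or via further API calls:

```python
# Hedged sketch: generate one character-sheet image with the OpenAI Images API.
# Prompt and file names are placeholders; OPENAI_API_KEY must be set.
import base64
from openai import OpenAI

client = OpenAI()

locking_prompt = (
    "Character sheet: a red-haired barista named Mo, round glasses, "
    "green apron, flat cartoon style, neutral gray background, front view"
)

result = client.images.generate(
    model="gpt-image-1",
    prompt=locking_prompt,
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("character_sheet_01.png", "wb") as f:
    f.write(image_bytes)
```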
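The "Extract Final Frame" step of High-Quality Chaining is easy to do locally. A small sketch using OpenCV; the file names are placeholders:

```python
# Hedged sketch: grab the last frame of a Veo clip so it can seed the next clip.
# Assumes opencv-python is installed; file names are placeholders.
import cv2

cap = cv2.VideoCapture("clip_01.mp4")
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Seek to the final frame and read it.
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("clip_01_last_frame.png", frame)
else:
    # Some codecs misreport the count; stepping back one frame usually works.
    raise RuntimeError("Could not read the last frame; try frame_count - 2.")
```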
II. Indie Filmmaker Workflow: Narrative Shorts
Goal: Create cinematic short films with consistent characters and storytelling focus, using a hybrid of specialized tools.
Toolchain
• Visual Foundation: Midjourney V7 to establish character and style with --cref and --sref parameters.
• Dialogue Scenes: Kling for its superior lip-sync and character realism.
• B-Roll/Action: Runway Gen-4 for its Director Mode camera controls and Multi-Motion Brush.
• Voice Generation: ElevenLabs for emotive, high-fidelity voices.
• Edit & Color: DaVinci Resolve for its integrated edit, color, and VFX suite and favorable cost model.
Workflow
1. Create Visual Foundation (Midjourney V7): Generate a "hero" character image. Use its URL with --cref --cw 100 to create consistent character poses, and with --sref to replicate the visual style in other shots. Assemble a reference set.
2. Create Dialogue Scenes (ElevenLabs -> Kling):
   • Generate the dialogue track in ElevenLabs and download the audio (see the API sketch after this list).
   • In Kling, generate a video of the character from a reference image with their mouth closed.
   • Use Kling's "Lip Sync" feature to apply the ElevenLabs audio to the neutral video for an accurate match.
3. Create B-Roll (Runway Gen-4): Use reference images from Midjourney. Apply precise camera moves with Director Mode, or add localized, layered motion to static scenes with the Multi-Motion Brush.
4. Assemble & Grade (DaVinci Resolve): Edit clips and audio on the Edit page. On the Color page, use node-based tools to match shots from Kling and Runway, then apply a final creative look.
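The dialogue track in step 2 can also be pulled from ElevenLabs' HTTP API rather than the web UI. A minimal sketch, assuming a valid API key in the environment; the voice ID, model name, and file names are placeholders, so check the current docs before relying on them:

```python
# Hedged sketch: fetch one dialogue line from ElevenLabs' text-to-speech endpoint.
# Voice ID, model ID, and settings are placeholders; verify against current docs.
import os
import requests

VOICE_ID = "YOUR_VOICE_ID"
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

resp = requests.post(
    url,
    headers={
        "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "text": "I told you we shouldn't have come back here.",
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {"stability": 0.4, "similarity_boost": 0.8},
    },
    timeout=120,
)
resp.raise_for_status()

# The response body is raw audio (MP3 by default); feed it to Kling's Lip Sync.
with open("dialogue_line_01.mp3", "wb") as f:
    f.write(resp.content)
```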
III. Professional Studio Workflow: Full Control
Goal: Achieve absolute pixel-level control, actor likeness, and integration into standard VFX pipelines using an open-source, modular approach.
Toolchain
• Core Engine: ComfyUI with Stable Diffusion models (e.g., SD3, FLUX).
• VFX Compositing: DaVinci Resolve (Fusion page) for node-based, multi-layer EXR compositing.
Control Stack & Workflow
1. Train Character LoRA: Train a custom LoRA on a 15-30 image dataset of the actor in ComfyUI to ensure true likeness.
2. Build ComfyUI Node Graph: Construct a generation pipeline in this order (a sketch for queuing the finished graph over ComfyUI's local API follows this list):
   • Loaders: Load the base model, custom character LoRA, and text prompts (with the LoRA trigger word).
   • ControlNet Stack: Chain multiple ControlNets to define structure (e.g., OpenPose for skeleton, Depth map for 3D layout).
   • IPAdapter-FaceID: Use the Plus v2 model as a final reinforcement layer to lock facial identity before animation.
   • AnimateDiff: Apply deterministic camera motion using Motion LoRAs (e.g., v2_lora_PanLeft.ckpt).
   • KSampler -> VAE Decode: Generate the image sequence.
3. Export Multi-Layer EXR: Use a node like mrv2SaveEXRImage to save the output as an EXR sequence (.exr). Configure for a professional pipeline: 32-bit float, linear color space, and PIZ/ZIP lossless compression. This preserves render passes (diffuse, specular, mattes) in a single file. (A hedged OpenEXR write sketch also follows this list.)
4. Composite in Fusion: In DaVinci Resolve, import the EXR sequence. Use Fusion's node graph to access individual layers, allowing separate adjustments to elements like color, highlights, and masks before integrating the AI asset into a final shot with a background plate.
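Once the node graph works interactively, ComfyUI can be driven headlessly for batch renders: export the graph with "Save (API Format)" and queue it against ComfyUI's local HTTP endpoint. A rough sketch, assuming a default local server on port 8188 and an exported workflow_api.json; the node IDs being edited are hypothetical and depend on your graph:

```python
# Hedged sketch: queue an exported ComfyUI workflow over its local /prompt endpoint.
# Assumes ComfyUI is running on 127.0.0.1:8188 and the graph was exported via
# "Save (API Format)" as workflow_api.json; node IDs "6" and "3" are hypothetical.
import json
import requests

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Example per-run tweaks: swap the positive prompt text and the sampler seed.
workflow["6"]["inputs"]["text"] = "ayla_actor woman walking through rain, cinematic"
workflow["3"]["inputs"]["seed"] = 42

resp = requests.post(
    "http://127.0.0.1:8188/prompt",
    json={"prompt": workflow},
    timeout=30,
)
resp.raise_for_status()
print("Queued:", resp.json().get("prompt_id"))
```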
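If a frame ever needs to be written outside ComfyUI's save nodes, the same 32-bit float / linear / lossless convention can be reproduced with the OpenEXR Python bindings. A rough sketch, assuming `pip install OpenEXR` (which provides the OpenEXR and Imath modules) and float32 NumPy arrays for each render pass; the pass names and dummy data are placeholders:

```python
# Hedged sketch: write one multi-layer EXR frame (diffuse + beauty + matte)
# as 32-bit float with ZIP lossless compression. Arrays here are dummy data
# standing in for real render passes.
import numpy as np
import OpenEXR
import Imath

H, W = 1080, 1920
passes = {
    "diffuse": np.random.rand(H, W, 3).astype(np.float32),
    "beauty":  np.random.rand(H, W, 3).astype(np.float32),
}
matte = np.random.rand(H, W).astype(np.float32)

FLOAT = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
header = OpenEXR.Header(W, H)
header["compression"] = Imath.Compression(Imath.Compression.ZIP_COMPRESSION)

channels, pixels = {}, {}
for name, img in passes.items():
    for i, c in enumerate("RGB"):
        chan = f"{name}.{c}"            # layered channel naming, e.g. "diffuse.R"
        channels[chan] = FLOAT
        pixels[chan] = img[:, :, i].tobytes()
channels["matte.A"] = FLOAT
pixels["matte.A"] = matte.tobytes()

header["channels"] = channels
out = OpenEXR.OutputFile("frame_0001.exr", header)
out.writePixels(pixels)
out.close()
```

Fusion then sees each "layer.channel" name as a selectable pass, which is what makes the separate color, highlight, and matte adjustments in step 4 possible.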