Using AI for Early Color Blocking in Pre-Viz
I recently experimented with using AI art generators like Midjourney and Stable Diffusion to inform early color blocking in my pre-visualization process, and while it had some surprising benefits, it also highlighted the indispensable role of a human eye. I'd typically draft rough mood boards and then jump into grading stills, but for a recent short film with a very specific, stylized palette, I fed key scene descriptions and desired emotional tones into Midjourney, asking for 'cinematic stills' with specific color keywords (e.g., 'nocturne, teal, amber glow').
What worked remarkably well was how quickly I could generate dozens of atmospheric options that, while not perfectly filmic, provided fantastic starting points for hue relationships and light quality. It offered a 'what if' playground that would have taken hours in a traditional grading suite, allowing me to quickly discard schemes that felt too cliché or ineffective. It was particularly strong for generating ambient light colors and general palette cohesiveness.
What didn't work was the nuance: the AI struggles with subtle shifts between tones, skin tone accuracy, and the organic imperfection that makes a scene feel natural. It often produced overly saturated or flat images, and character faces were consistently off. I ended up pulling the AI-generated images into DaVinci Resolve, isolating the color palettes with qualifiers, and applying them as LUTs or power windows to actual stills from the shot list, then refining heavily by hand. It saved time on broad strokes but required significant correction for finessed results.
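As an aside, if you want to sanity-check a palette before opening Resolve at all, you can cluster an AI still's pixels into a handful of dominant colors with a few lines of Python. This is just a minimal numpy-only k-means sketch (the function name and the synthetic "teal and amber" frame are my own illustration, not part of any tool above); in practice you'd load real frame pixels with Pillow or OpenCV instead:

```python
import numpy as np

def dominant_colors(pixels, k=4, iters=20):
    """Cluster an (N, 3) RGB array into k palette colors via plain k-means."""
    pts = pixels.astype(float)
    # Greedy farthest-point init keeps the starting centers spread out,
    # so well-separated color blobs each get their own palette entry.
    centers = [pts[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then move each center
        # to the mean of its assigned pixels.
        labels = np.linalg.norm(pts[:, None] - centers[None], axis=2).argmin(axis=1)
        for i in range(k):
            members = pts[labels == i]
            if len(members):
                centers[i] = members.mean(axis=0)
    return centers.round().astype(int)

# Stand-in for a loaded frame: two color blobs (teal-ish and amber-ish) plus noise.
rng = np.random.default_rng(1)
teal = rng.normal([20, 120, 130], 8, size=(500, 3))
amber = rng.normal([230, 160, 60], 8, size=(500, 3))
frame = np.clip(np.vstack([teal, amber]), 0, 255)

palette = dominant_colors(frame, k=2)
print(palette)  # two RGB rows, near the teal and amber means
```

It's crude next to a qualifier pull in Resolve, but it's a fast way to compare hue relationships across a batch of Midjourney outputs.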
Do any of you integrate AI into your pre-viz, and if so, how do you bridge the gap between its broad strokes and the fine details needed for a filmic look?