The most revealing AI conversation I’ve had recently wasn’t with a director. It was with a film producer who needed helicopters. In the old world, that request triggered a familiar chain. You budgeted for aerial plates, planned safety, scheduled a unit, or pushed it to post and let VFX (visual effects) carry the weight. What made this request different was timing. They hadn’t even started principal photography.
They were solving a high-risk, high-cost sequence before the show even existed in the physical world. For broadcasters, streamers, and distributors, that shift matters. It changes when costs hit the budget, where lives are at risk, and what “deliverable” means when a shot is generated rather than photographed.
Over the last year, Gennie Studio (the Los Angeles-based AI-powered production company where I work) has delivered fully AI-generated reenactments for multiple broadcast and streaming doc series, which forced us to confront deliverables, quality control (QC), and risk the hard way. Running this company day to day has demystified a lot for me. The productivity gains are real, including in production, but they arrive through process and iteration, not a magic button.
Here is the core misunderstanding. Executives treat AI like a camera or a crew. In practice, it is an options engine. It generates plausible versions quickly, then demands discipline to turn a promising version into something repeatable, spec compliant, and defensible from a provenance and liability standpoint.
Below are the misconceptions I hear most often, and what the work looks like when you actually have to ship.
Misconception number 1: “AI makes anything cheaper.”
AI can be cheaper, sometimes dramatically. But “cheaper” is not the same thing as “production ready.” The first pass is often fast. The hidden cost shows up later in iteration, QC, and compliance.
It is also not always true that AI wins on price. Many international factual producers already deliver broadcast-acceptable scenes on a shoestring with clever minimalism, like abstract, out-of-focus reenactments or stylized coverage. In those cases, generative AI can be slower and more expensive because refinement time eats the savings.
We learned the expensive version of that lesson working on a historical project. We were overconfident that we could deliver meticulous period detail and keep it consistent from shot to shot, including the right weapons, footwear, facial hair, and geographic cues. Because we promised that level of accuracy, getting there was costly.
Our costliest assumption was that the models could reliably generate period-appropriate, location-specific detail on demand. These systems recreate patterns from training data, and the data is not always labeled the way production needs. Ask for Revolutionary War soldiers and you can get pirates.
Where AI does win is when you want higher production value, more scale, or shots you simply cannot afford to shoot. Going back in time (if you have the budget to refine for detail accuracy), visualizing fantasy worlds, and building sequences before principal photography can flip the economics quickly.
Misconception number 2: “You can direct AI and get what you want.”
Prompting is not directing. The practical reality today is "prompt and pray," then iterate toward intent.
The craft involves constraints and workflow. References, composition, timing, negative controls, style controls, and hands-on human intervention matter more than clever wording.
Misconception number 3: “There’s a virtual camera.”
In many generative systems, camera language is an instruction, not a real camera moving through 3D space. That shows up the moment someone asks for precision. We once got a note on a car shot: "Move the camera higher above the car so we are looking down at the roof, not across it." The request implied roughly a 70-degree angle; the shot we had was closer to 25 or 30 degrees.
That level of angular specificity is hard to hit with pure generative video without a lot of revisions. You can bring in 3D tools and use AI as part of a hybrid pipeline, but costs start creeping upward.
Even with strong process guiding style and consistency, outputs stay unpredictable. A director with 80 percent of their vision is often better served by AI than one with 100 percent because you need room for flexibility and surprises, similar to what you allow on a real set.
Misconception number 4: “Upscaling will fix it.”
Upscaling can make an image sharper. It can’t manufacture coherence.
On the six-episode documentary series Killer Kings (produced by FirstLookTV and commissioned by Sky History), more than 80 percent of our AI deliverables for episode one were rejected by the online facility conducting QC ahead of network delivery.
It also exposed the limitations and misleading promises of AI-powered upscalers. Upscalers routinely invent detail that was never there, making assumptions about what is in frame. A person in the background wearing a silk dress near a window can suddenly be transformed into a curtain. We built a pre-QC refinement process and learned to favor close-ups and medium shots over dense wides, crowds, and complex background motion.
Misconception number 5: “No Generative AI was used in the making of this show.”
From my point of view, and this is subjective, this statement often functions as a values message dressed up as a supply chain message. AI already shows up across the process, whether decision makers realize it or not, including marketing and promo workflows, pitch materials, audio cleanup, localization, trailer versioning, and tools inside post.
If a company wants to make a sweeping claim that appeals to consumers and press, the responsible move is specificity. Put the boundary in writing, in agreements, delivery notes, and marketing approvals, so it is auditable and defensible.
What’s actually true:
AI is a gift for creators and small teams, especially those who lack formal training in design, animation, VFX, or post. It lowers the floor. I call this upward technical mobility.
It also pushes the industry toward leaner teams. The same output can increasingly come from fewer people, as long as the pipeline is disciplined. AI is neither a camera nor a crew. It is a probability engine that produces options quickly. The advantage goes to teams that can convert those options into deliverables without blowing schedule, budget, or brand trust.
(By Max Einhorn)