At first glance, Ancestra – the new short film by Eliza McNitt, produced by Darren Aronofsky in partnership with Google’s GenAI machine – may seem like a moving, experimental fusion of memory and technology.
Premiering at this year’s Tribeca Festival, it ‘melds’ live-action filmmaking with generative AI, drawing on the director’s personal story of birth and survival. It’s the first instalment in a larger series using the same tech, called Primordial Soup.
Is Ancestra a groundbreaking artistic work that blends science and art? Or is it yet another product of a creative pipeline that now leans heavily on technologies built from unethically sourced artistic labour?
Ancestra and GenAI in Hollywood
The use of generative AI in film and television is escalating rapidly, especially with major studios and high-profile creators like Aronofsky leading the charge. To the dismay of screen creatives everywhere, these tools can now generate scripts, storyboards, backdrops, voices – even characters.
What’s the appeal? Well, GenAI offers a tantalising promise of cheaper, faster content creation, especially at a time when entertainment sectors are struggling with shrinking budgets, audience fragmentation, and increasing demands for content that’s easy to digest – but whether the results are any good is a different story. Anyone else remember the story of the monkey’s paw?
There is an undeniable dark side to AI: the generative learning models are trained on massive datasets comprising copyrighted works (scripts, images, performances etc.) scraped from the internet without permission or payment. Much of that data belongs to the very artists and storytellers the technology threatens to replace.
In 2023, the Writers Guild of America (WGA) went on strike in the US for better conditions, protections and remuneration for screenwriters. One of the key negotiations centred on the use of generative AI in Hollywood. After 148 days on strike and a lot of back and forth, the union reached an agreement establishing new rules for the use – and restriction – of artificial intelligence.
Under the hard-won new terms, studios ‘cannot use AI to write scripts or to edit scripts that have already been written by a writer’. They also can’t treat AI-generated content as ‘source material’, as they might with a book or screenplay (as reported in The Guardian).
Ancestra does not break these rules – but not everyone is happy about the full-bodied embrace of AI-generated footage in cinema.
GenAI in Australia
In Australia, the backlash to GenAI is growing louder. A coalition of screen industry guilds – including the Australian Writers’ Guild (AWG), Australian Screen Editors (ASE), Australian Directors’ Guild (ADG), and several others – recently called on the federal government to take urgent action.
In a submission to the Productivity Commission, they denounced the current unchecked use of generative AI as ‘theft,’ demanding legal reform to protect copyright, ensure informed consent, and introduce proper remuneration schemes for creators whose work has been appropriated.
AWG and AWGACS CEO Claire Pullen puts it bluntly: ‘We’re not asking for unreasonable concessions – just that our existing laws be enforced. Without transparency and compensation, there can be no cultural or economic benefit from AI tools built on stolen IP.’
How the AI sausage is made
In 2024, Google DeepMind launched Veo, an AI video generator capable of producing 60-second videos at 1080p resolution. By the end of that year, Veo 2 arrived with support for 4K resolution and a noticeably improved grasp of physics, available to creators via Google’s VideoFX platform.
Now, in 2025, they’re already up to Veo 3: a system that can render a full scene – complete with characters, environment and motion – and seamlessly layer in audio that reflects the tone and setting. This is the AI system that was used to generate sequences in Aronofsky’s Ancestra.
If you’re wondering just how effective Veo 3 is at generating videos that look ‘real’ (i.e. made using human hands), let’s just say it’s good enough, and it’s probably already fooled your parents.
Aronofsky’s Primordial Soup project, of which Ancestra is only the first instalment, claims to put ‘artists in the driver’s seat of technological innovation’ – which sounds good, until you remember that this so-called empowerment of artists leans on non-human tools that cannibalise their original work.
Per IndieWire: In Ancestra, ‘[Eliza] McNitt trained the AI models on her own baby pictures and other photos taken by her late father in order to generate a newborn infant with a story that could be shaped by her own biography.’
But, given that Google DeepMind’s models are trained on immeasurable caches of data (basically, if you can Google it, Veo knows about it), there’s no way of confirming whether other copyrighted images were used in forming the AI-generated footage sequences.
US-based media scholar and industry veteran Jonathan Taplin, director emeritus of USC’s Annenberg Innovation Lab, warns that generative AI threatens to exacerbate the very problems already plaguing Hollywood: risk-aversion, reliance on formula, and the devaluation of originality.
Per MIT Sloan Management Review: ‘Entertainment relies on new ideas,’ he says. ‘And this technology can’t produce them.’
Instead, generative AI thrives on the past. It is most effective when reconstituting existing tropes, imagery, and structure – which is, of course, exactly what a cost-cutting executive wants. But it’s not really what the people want: more Australians agree that generative AI will harm Australian society (40%) than disagree (16%) (The Conversation). I’m inclined to agree.
In my opinion, when studios start treating storytelling as a derivative data science rather than an expressive art, the result isn’t innovation. It’s slop. And with AI slop already dominating our social media feeds, WhatsApp groups and Discord channels, it’s hard to see what artistic value there is in continuing to use it for screen content.
AI is rapidly on the rise
Of course, the use of AI in film and TV is already well documented, and it would be foolish to suggest it will ever cease. Generative AI was quietly used in the production of Everything Everywhere All At Once, and AI-generated VFX, backgrounds and intro sequences have started to flood mainstream films and TV shows.
While certainly groundbreaking for cost-cutting and time-saving reasons, we need to remember what’s hidden behind the shiny packaging: an acceleration of job losses in post-production departments, a shrinking pool of original IP, and an even smaller share of revenue flowing to working artists.
Proponents like Yves Bergquist of USC’s Entertainment Technology Center argue that AI is just a new tool of progress – a ‘creative assistant’ that won’t replace human writers but will streamline brainstorming or rendering.
But even he concedes that the technology is rolling out faster than institutions can respond, and that the lines around ownership, attribution, and creative credit remain dangerously blurry (MIT Sloan Review).
For example: if an AI is prompted to generate a character ‘inspired by Bluey/Studio Ghibli/SpongeBob SquarePants’, who owns the output? Should the original creators and/or their respective creative studios be compensated? Is the new content even copyrightable? These are the unresolved legal and ethical dilemmas now at the heart of modern content creation.
Meanwhile, the power imbalance continues to grow. While a handful of blockbuster actors and tech billionaires reap the profits of AI-enhanced projects, the majority of working creatives are being squeezed out of the process.
The funnel narrows, and the cultural landscape suffers. In an industry where the barrier to entry is already so high, this is extremely concerning – but even those who recognise the issue can’t keep up with the sheer speed of AI rollout.
The next few years will be critical for how the creative industries use, restrict and regulate generative AI. Fence sitting is simply not an option.
Until the industry reckons with how generative AI is built – and who pays the price – we shouldn’t call it progress. We should call it what it is: exploitation. Shiny, sloppy, exploitation.