Artificial intelligence (AI) has begun to transform television, film, and audiovisual entertainment in general. From automatic editing tools to real-time data analysis, AI is significantly changing the way professionals work and generate audiovisual content. Yet, as with any new technology, open questions remain. Will AI replace professionals in this industry, or will it complement and enhance them in their tasks?
BY Christian Johnson for RED SHARK AND THE INTERNATIONAL CHRONICLES
Advances in Artificial Intelligence have allowed this technology to evolve from simple support tools to almost autonomous assistants in the field of audiovisual production and post-production. Some of the key areas where AI has made a noticeable impact include:
- Automating repetitive tasks: AI makes it possible to automate tasks that previously required a lot of time and effort, such as tagging video footage and sorting clips. Editors can now organize content and search for keywords within audio, achieving a preliminary organization and edit of material in record time.
- Script assistants: With tools like ChatGPT, writers can receive script ideas, content suggestions, and even entire dialogues that they can then edit to fit their creative vision. All of this has begun to change the workflow of screenwriters by helping them come up with fresh ideas faster.
- Improvements in video and sound production: AI can now automatically correct audio issues, and even improve video quality, allowing editors to create and correct high-quality pieces in less time.
- Audience analysis and content personalization: AI helps producers better understand their audience. Through data collected from social media, ratings, and streaming platforms, AI algorithms can predict consumer trends, measure the impact of each scene, and make personalized content recommendations.
- Creating visual effects and generating 3D content: Artificial Intelligence is revolutionizing visual effects (VFX) and animation, generating 3D models in a matter of minutes.
Adobe and its Revolution in Editing with AI
One of the main players in this field is Adobe, which has incorporated several AI-powered tools into its editing software, Premiere Pro. These innovations are designed to speed up the workflow of video editors, allowing professionals to focus on storytelling and creativity.
- Adobe Sensei: This AI technology integrated into Adobe Premiere Pro and After Effects is capable of automating and improving various editing processes. Sensei can analyze images and video clips, intelligently apply effects, and improve the visual quality of audiovisual content with incredible precision.
- Automated subtitles and transcripts: Creating subtitles used to be a laborious, time-consuming task, but now subtitles can be generated with a single click. Adobe Sensei can also analyze dialogue and generate transcripts.
- AI editing for social media: Adobe is constantly improving and has launched features that automatically adapt videos to social media formats. The software detects the important elements in each shot and adjusts them to the appropriate size and style for each platform, so nothing essential is lost in the reframe.
- Automatic framing and effects adjustment: Adobe Premiere Pro analyzes the action in a scene and automatically crops it to focus on the protagonists. With this tool you could create multiple versions of the same clip.
Will AI replace professionals in the industry?
This is the question we all ask ourselves, but the answer is not so simple. On the one hand, Artificial Intelligence automates tasks that previously required human intervention; on the other, it creates new opportunities and challenges that still require a human's critical eye and artistic touch.
The value of creativity and human interpretation: Although Artificial Intelligence has become better at detecting patterns, human creativity will remain essential to creating impactful stories. AI cannot interpret our cultural context, nor can it grasp something innate to humans: emotions. Audiovisual production still requires editors, scriptwriters, and directors with the sensitivity needed to stage a film, for example.
Professionals as AI supervisors: We should think of AI as an assistant and act as its supervisors; that is, we are all responsible for the content we generate with AI. Not all the work should be done by AI; there is much left to do and much to explore.
Adaptation to new skills: The implementation of AI is also generating demand for new professional skills, and new job opportunities will be created in parallel. Professionals must understand how these AI-based tools work, and the labor market will open its doors to those trained in AI.
Ethics and credibility: The work is not done. In areas such as journalism, the use of AI poses ethical challenges. Imagine reading or watching a fabricated news story on television: this could happen if journalists do not supervise the information AI produces. The moral role of journalists is critical to avoiding this kind of problem.
The Future of AI in Industry: Collaboration or Displacement?
I would say that AI and audiovisual professionals are destined to collaborate. Technology is advancing, and we must all learn to use AI to shorten processes that take too much time; AI will allow us to improve the quality of our work.
A memorable piece, whether it is TV, film or journalism, requires human vision that interprets and shapes the emotions and cultural values of each society. The use of AI in the audiovisual industry is constantly growing, and its role will become even more established in the coming years.
Revolution on the Screen: The Impact of AI on the Television, Film and Broadcast Industry
Conclusion
Artificial Intelligence is transforming the television, film, and broadcast industry in many ways. From automating strenuous processes to generating original content, AI tools are making a difference in the broadcast industry. However, the human element remains crucial: AI and industry professionals must complement each other, joining forces to obtain the best of both and deliver high-quality products with a strong impact on society.
As we approach this exciting future, which is closer than we think, it is essential to embrace technology without losing our values, our morals, and the creativity that makes us human, something AI will never achieve. Humans and Artificial Intelligence will therefore complement each other, leading the industry to new challenges and offering new experiences to an increasingly demanding public.
THE FOLLOWING ARTICLES ARE BY AUDREY SCHOMER FOR VARIETY / EDITED BY THE INTERNATIONAL CHRONICLES
AI Entertainment Studios: The New Breed of Companies Pioneering Gen AI-Powered Production
A crop of independent entertainment content studios are prioritizing generative AI in their production processes. AI studios are guided by two key strategic philosophies: aggressive experimentation and production efficiency. Studios are building their teams to bring together traditional creative and tech talent with deep knowledge of gen AI.
A growing number of independent entertainment studios are emerging with a capability Hollywood has never seen: generative artificial intelligence at the core of their creative DNA.
These studios include several primarily focused on producing feature-length and short narrative film and TV content: Promise, the recently announced venture backed by Peter Chernin and Andreessen Horowitz; Asteria Film, owned by documentary studio XTR after its acquisition of AI animation studio Late Night Labs; TCL Studios, owned by the U.S. division of the Chinese electronics giant; and U.K.-based Pigeon Shrine. Others create original animated IP for YouTube and social media, including Toonstar and Invisible Universe.
Rather than taking a cautious or stymied “wait and see” approach that some perceived among legacy studios in Hollywood, AI studios are proactively bringing generative AI tools and models into their processes and designing production workflows and pipelines around them. They aim to deeply understand the technology and rigorously push the tools to discover what they’re capable of, how they’re limited and how best to use them to produce high-quality, compelling content, as opposed to more AI slop.
But exactly what it means to be an AI studio is less clear, and the label raises deeper questions about how generative AI is being used in professional content-production workflows.
VIP+ spoke to leaders at seven such studios to provide an in-depth exploration of studios’ different approaches to generative AI tools, production workflows, pipelines, and teams.
Two main strategic philosophies guide these companies…
Aggressive experimentation:
Multiple sources referred to the present moment as the “Wild West,” as no one knows yet how this technology will be applied to filmmaking; that has to be uncovered through purposeful trial and error, by actually working with the tools to try to create content.
Several view their teams as on the leading edge of the next wave of production technology in the industry, already far ahead of major studios and VFX studios in their facility with gen AI in production thanks to their serious efforts to experiment and stay on top of constant tool and feature changes. They described their teams as having “breakthroughs” during production that could only be achieved by creative storytelling and technical expert teams confronting and solving actual creative problems.
“Production breakthroughs really happen through being driven by creativity,” said Paul Trillo, director and filmmaker now collaborating with Asteria Film. “That’s why ILM is what it is. They had to brute-force figure out how. All of it is in service of some underlying story that is requesting something. And we’re like, all right, well, no one knows how to do that. Let’s figure that out.”
Several studios further referred to their nimbleness and adaptability as critical operating elements, given the rapid pace of AI advancement. With new models and tools constantly emerging, sources said their toolsets were being outmoded within weeks or months, followed by new features or tools launching that resolved yesterday’s production challenge.
That nimbleness has allowed them to evaluate and rapidly adjust toolsets in a workflow as needed, in some cases immediately integrating a new or updated tool if it improves on a different or earlier one or meets a specific creative need on a project. By contrast, legacy studio systems with established pipelines wouldn’t as easily permit sudden changes to tools.
Pursuit of efficiency:
AI studios are interested in building workflows and pipelines that make good on the ability of gen AI tools to drive down content budgets, shrink production timelines, get quicker feedback (“fail faster”) and supercharge the output of their creative teams and artists, in some cases to meet the pace of certain distribution channels (e.g., social media).
Sources argued that content produced in the traditional Hollywood pipeline quite simply costs too much and takes too long to make. “In Hollywood, it’s very difficult to produce something affordably. We’re looking to basically compress those unit economics, both in terms of budget and timeline for creation,” said Jonathan Lutzky, COO at EDGLRD, a digital IP-based studio led by filmmaker Harmony Korine.
Some further argued that efficiency, thanks to AI power tools, might revive fallow projects during a period of contraction in Hollywood. “We’re building software and workflows that provide opportunities to be more efficient and bring down the time or cost it takes to make something,” said Bryn Mooser, CEO at Asteria. “We talk about how these tools can help filmmakers either make something they couldn’t have made before because the budget was too high or they didn’t have access to those kinds of tools. Now the cost can come down to where they can independently finance.”
Perhaps most salient to the industry right now, AI studios are hiring but arguably doing so from a limited pool of people.
In addition to their facility with AI tools, studios view the strength of their talent teams as their true differentiator and X factor. Sources at AI studios described purposefully building their teams to bring together creative and tech expertise, including highly adept traditional artists, animators, and producers with deep knowledge of storytelling, senior VFX specialists (e.g., CG generalists and compositors) and tech talent with deep knowledge of generative AI. Most importantly, sources highlighted that creatives and tech work directly alongside each other to problem-solve.
For example, AI tech talent can consist of engineers, who can function as the studio's research and development arm, tracking new research and tools and showing them to creatives on the production side to assess how they might help solve a particular production issue. But it can also mean gen AI specialists who understand more advanced, technically “in the weeds” ways of using the tools.
“The kind of ‘dream team’ that we’re cobbling together as the core creative research team has people who have both [creative] backgrounds, whether directing, filmmaking, animation or VFX, and they know AI tools really intensely,” said Trillo. “You’d be surprised how few people have both sets of knowledge.”
“We’ve been trying to be home to some of the earliest and best makers in this space,” said Eric Shamlin, CEO at Secret Level. “We’re by no means alone. There are probably half a dozen shops that are in the competitive landscape now. All of us are trying to identify the best talent out there.”
AI Entertainment Studios: How Gen AI Toolsets Are Transforming Production Workflows
AI studios use off-the-shelf tools but are also developing in-house workflow solutions to streamline production. Studios are pursuing “hybrid” production paths that still purposefully incorporate human artists and their creative work. Still, they expect using generative AI in a workflow to diminish traditional previsualization and post-production stages.
A new crop of independent entertainment studios has emerged, as VIP+ examined last week, bringing generative AI tools and models into film, TV, and short-form video content production and designing professional workflows around them.
AI toolsets used on productions will change according to the needs of the project and as new tools or models become available. Several studios were agnostic to the tools they used and are integrating whatever performs best for the task.
For these studios, experimentation to discover production methods with generative AI takes priority over the ongoing legal murkiness and ethical qualms stemming from many AI models having been trained on copyrighted data, a reality of how the current generation of AI models was developed. However, AI film animation studio Asteria intends to transition to using Marey, a forthcoming image and video model developed by its partner AI research firm Moonvalley, trained exclusively on licensed data.
Overall, AI studios were well versed in image and video foundation models, referencing Midjourney, Flux, or Leonardo (a tool built on Stable Diffusion) and Runway, Kling, Minimax, Luma AI, Haiper, and others. Studios also referenced using AI tools that have been developed for markerless motion capture (e.g., Wonder Dynamics), lip-sync (e.g., Hedra, Flawless), style transfer (e.g., Runway’s Act One), and voice (e.g., ElevenLabs).
Several studios have custom-built their own internal workflow solutions, sometimes referred to as proprietary tech stacks. Often, this is software that effectively aggregates an array of the studio's preferred models via open-source integrations or APIs, with the main goal of streamlining workflows for internal creative teams.
This allows them to access multiple models within a single tool instead of inefficiently “bouncing around” between multiple web-based AI tools to output and modify content, which sources said was one of their key UX pain points. For example, Promise’s workflow solution MUSE intends to offer a “streamlined, collaborative, and secure production environment” for artists.
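The aggregation pattern these studios describe, one interface routing requests to whichever model backend currently performs best for a task, can be sketched as a simple registry. This is an illustrative sketch only: `ModelHub`, its method names, and the stub backends are all hypothetical and stand in for real vendor API clients, not the actual architecture of MUSE or any studio's stack.

```python
from typing import Callable, Dict


class ModelHub:
    """Minimal aggregator: one interface over several generation backends."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, backend: Callable[[str], str]) -> None:
        # e.g. task="image", backend=a wrapper around a vendor API client;
        # swapping in a newer model is a one-line re-registration
        self._backends[task] = backend

    def generate(self, task: str, prompt: str) -> str:
        if task not in self._backends:
            raise KeyError(f"no backend registered for task '{task}'")
        return self._backends[task](prompt)


# Stub backends stand in for real image/video model API calls
hub = ModelHub()
hub.register("image", lambda p: f"[image for: {p}]")
hub.register("video", lambda p: f"[video for: {p}]")
```

The point of the design is the one sources raised: artists call a single tool, and the studio can swap an outmoded backend for a new one without changing the creative team's workflow.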
For the same reason, multiple AI studio sources referenced using ComfyUI, which offers a single interface for accessing multiple AI tools to sequentially and specifically modify an AI output. “It’s quite powerful and adds levels of control that a professional artist would need,” said Eric Shamlin, CEO at Secret Level.
In some cases, AI studios developing their own workflow solutions also expect to license them out as enterprise creative and collaboration software, positioned for content production. Invisible Universe is in conversation with domestic and international studios and will be opening its cloud-based software, Invisible Studio, to a prosumer base, such as social media content creators. Likewise, Secret Level and Promise eventually intend to offer their respective tools, Liquid Studio and MUSE, under a SaaS model.
In theory, it’s possible to create a fully AI production from end to end, where every element is synthetic. But in practice, AI studio sources described “hybrid” production workflows: human artists and their creative work incorporated intentionally, AI automation where it provides value, and traditional production methods, including shooting with cameras, motion capture, and traditional CGI, where they are still needed or desired. Sources noted that the needs of any given project have still often meant hiring human writers, actors, artists, animators, and composers.
“We’re AI-first or AI-forward, but it‘s not AI only. AI is a primary tool for us, but we’re still doing live-action shoots, traditional CG, traditional animation. But those things are now accelerated or augmented by AI. It’s very much a hybrid production model,” said Shamlin.
Studio sources repeatedly said there is no such thing as a fully automated AI movie: raw AI outputs still require substantial manual edits to “look right,” including fixing hallucinated artifacts and the “uncanny valley” feel still apparent in AI imagery.
In short, studios are searching for ways to make AI outputs look not like AI. Right now, there is more confidence about the ability to do that for animation than live action, although the photorealism of AI imagery is fast improving. Some also expressed early confidence they would find methods to “crack” live action.
Sources contend gen AI tools are restructuring the conventional production pipeline, normally a stepwise progression from previsualization to production (shoots) to post-production (CG rendering).
For example, some described a newfound ability to visualize an entire film upfront by using AI image and video tools to generate visuals for the whole film based on the script. These AI frames could then act as a first cut of the film, with the visuals further edited and refined using generative AI tools or traditional CGI techniques to become the final frames.
Alternatively, the initial AI frames would be treated as a high-fidelity storyboard and used to guide a production shoot with actors on set or with greenscreen.
Some interpreted that restructuring as inverting the pipeline to “post-first” or “post-to-pre,” where upfront visualization allows them to see and iterate immediately on visuals that would otherwise have had to wait until production shoots or post-production VFX to CG-render. As a result, sources expected gen AI would diminish the duration and demands of both traditional pre- and post-production.
“As a studio, we’re approving the movie long before anybody steps on any form of set or traditional tool,” said Tom Paton, CEO at Pigeon Shrine.
“Conventional pre-production is kind of dead in our minds,” said Shamlin, who heads Secret Level, the studio that produced Coca-Cola’s controversial AI-generated commercial in November. “We had a lot of the final frames you see in the final spot in pre-production in lieu of storyboards because you can just go straight to the AI and start fully rendering, whereas in traditional production you’d have to wait until after you shoot. All that starts in pre-production now.”
Celebrity AI: Using Talent Digital Replicas
Use cases for celebrity digital replicas enabled by generative AI systems are beginning to emerge. Safely engaging will require a new set of data management processes to allow consent, control, credit, and compensation.
Digital replicas are becoming available for talent to exploit for creative and commercial use with AI.
Not all talent will. Even with the protections established under SAG-AFTRA, some actors are wholly rejecting the use of AI, stipulating “no AI” in contracts. Yet some transactions are beginning to happen, and more are expected. One VIP+ source anticipated talent digital replicas would at some point become “ubiquitous,” while another felt most talent would have digital replicas within the next decade.
Used correctly, with consent and compensation, AI conceivably scales opportunities for talent past ordinary limitations. As some argue, AI versions of talent allow a celebrity to be in many places at once or perform “impossible jobs,” such as personalized interactions with fans at scale through a chatbot, all without requiring talent to physically do the work.
Use case ideation is still early, as is the thinking about how talent should be compensated and valuations for certain utilizations of a talent digital replica. But the potential is broad. Projects could originate from film, TV, gaming or animation studios, sports leagues, and major or minor brands, but they could also be initiated by AI companies or talent themselves. New uses will emerge as talent reckons with their own options and interests amid evolving tech capabilities.
As talent digital replicas begin to be utilized, entertainment industries across film, TV, gaming, music, and more will need to build or integrate technical mechanisms into workflows to ensure that use provides the so-called 4 C’s for talent: consent, control, credit, and compensation.
The development of solutions that will accomplish these requirements is still in the early stages. However, there is general agreement that a standards-based approach gaining industrywide and cross-industry adoption will be needed, even as proprietary services are developed and used.
“It’s very important that individuals are able to identify where their name, image, likeness has been utilized. The first problem we need to solve is who owns the rights to the name, image, and likeness, and who has the right to actually utilize it and give permission for its use. If someone creates a scanned image of themselves, they should own and control that,” Renard Jenkins, president of the Society of Motion Picture and Television Engineers, told VIP+.
“Then we need to create a pathway to traceability, to be able to track and trace what’s happening to an asset you own from the point it’s created, to have the ability to audit every action and see who’s utilizing those assets,” he continued. “And then how do we provide a way for improper use to be swiftly taken down or payment requested for its usage? All of that is going to take a much larger effort across the industry, with everyone saying this is something important they want to work on together.”
From project origination to content distribution, several technical components are starting to come together to allow talent to make, own, control, and monetize their digital replica for authorized, employment-based use and content creation. How exactly different technologies and providers should be assembled is now being carefully considered.
Processes that can enable authenticated employment-based use of talent digital replicas would need to include methods for the following:
Consent: Ensuring talent has a way to approve or decline the creation and any use of their digital replica asset
Data Capture: Creating a digital asset of talent likeness
Data Storage and Management: Securely housing, transferring and/or tracking digital replica assets
Content Creation: Using talent data to create content or experiences
Provenance: Applying hard-to-break mechanisms to enable real-time traceability and verification of name, image, likeness, and voice assets and derivative content for their entire lifecycle, such as with embedded watermarks, cryptographic metadata, hashing, or blockchain records.
Compensation: Establishing payment models and triggers to ensure talent is fairly paid for the use of their digital replicas, including residuals for ongoing use or AI training.
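The provenance step above hinges on exactly what Jenkins describes: a stable identifier for the asset and an auditable trail of every use. As a rough illustration only (the function names are hypothetical and this is not any real provenance standard or blockchain system), a content hash plus a tamper-evident chained log is one minimal way to get traceability:

```python
import hashlib
import json
import time


def fingerprint_asset(asset_bytes: bytes) -> str:
    # A content hash acts as a stable identifier for the replica asset
    return hashlib.sha256(asset_bytes).hexdigest()


def record_use(log: list, asset_id: str, user: str,
               purpose: str, consent_ref: str) -> dict:
    # Each entry chains to the previous one, so later tampering is detectable
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "asset_id": asset_id,
        "user": user,
        "purpose": purpose,
        "consent_ref": consent_ref,  # e.g. a contract or consent-record ID
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry


def verify_log(log: list) -> bool:
    # Recompute every hash and check the chain links end to end
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

In this sketch the `consent_ref` field ties each use back to an approval record (consent), the log itself provides the audit trail (control and credit), and its entries could serve as payment triggers (compensation); production systems would layer watermarking, cryptographic signatures, or blockchain records on top.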
Will Generative AI Supplant or Supplement Hollywood’s Workforce?
Machine-created content is sparking concern over a showbiz labor crisis waiting in the wings.
The rapidly advancing creative capabilities of generative AI have led to questions about artificial intelligence becoming increasingly capable of replacing creative workers across film and TV production, game development, and music creation.
Talent might increasingly view and use generative AI in more straightforward ways as simply a new creative tool in their belt, just as other disruptive technologies through time have entered and changed how people make and distribute their creative work.
In effect, there will still — and always — be a need for people to be the primary agents in the creative development process.
“Talent will incorporate AI tools into their existing processes or to make certain aspects of their process more efficient and scalable,” said Brent Weinstein, chief development officer at Candle Media, who has worked extensively with content companies and creators in developing next-gen digital-media strategies and pioneering new businesses and models that sit at the intersection of content and technology.
The disruptive impact of generative AI will certainly be felt in numerous creative roles, but fears about a total machine takeover of creative professions are most likely overblown. Experts believe generative AI won’t be a direct substitute for artists, but it can be a tool that augments their capabilities.
“For the type of premium content that has always defined the entertainment industry, the starting point will continue to be extraordinarily and uniquely talented artists,” Weinstein continued. “Actors, writers, directors, producers, musicians, visual effects supervisors, editors, game creators, and more, along with a new generation of artists that — similar to the creators who figured out YouTube early on — learn to master these innovative new tools.”
Joanna Popper, chief metaverse officer at CAA, brings expertise on emerging technologies relevant to creative talent and their potential to impact content creation, distribution, and community engagement.
“Ideally, creatives use AI tools to collaborate and enhance our abilities, similar to creatives using technical tools since the beginning of filmmaking,” Popper said. “We’ve seen technology used throughout history to help filmmakers and content creators either produce stories in innovative ways, enable stories to reach new audiences and/or enable audiences to interact with those stories in different ways.”
A Goldman Sachs study of AI's impact on economic growth, released last month, estimated that 26% of work tasks would be automated within the “arts, design, sports, entertainment, and media” industries, roughly in line with the average across all industries.
In February, Netflix received backlash after releasing a short anime film that partly used AI-driven animation. Voice actors in Latin America who were replaced by automated software have also spoken out.
Julian Togelius, associate professor of computer science and engineering and director of the Game Innovation Lab at the NYU Tandon School of Engineering, has done extensive research in artificial intelligence and games. “Generative AI is more like a new toolset that people need to master within existing professions in the game industry,” he said. “In the end, someone still needs to use the tool. People will always supervise and initiate the process, so there’s no true replacement. Game developers now just have more powerful tools.”