
Frienemies: AI, VFX and Their Merging Paths


LBB’s Zoe Antonov asks the VFX community if AI is their new best friend or their biggest nightmare so far

AI is up-ending the creative industries, most recently causing panic among illustrators and copywriters. But the VFX industry is well ahead of the game when it comes to experimenting with AI and algorithmically-generated images, as well as successfully deploying them. Whether it’s using AI to simulate crowds, plant growth, or flesh and muscle on creatures, or using it to improve motion capture, masking and more, there are many areas ripe for artificial intelligence - and that’s before we even get to the growth of deepfakes. So, at LBB we wanted to find out how things have changed in the last few years and to explore how AI is really being used within the sphere of VFX.

How exactly do AI and VFX intermingle?

The visual effects industry has always been a hotbed of innovation, and the advent of AI is revolutionising the way movies and TV shows are made. But there are some particular ways in which the two intermingle to advance what we see on our screens. Let’s break down just a few with the help of No.8 London's VFX creative director Jim Allen, who puts it in a handy list:

• Text-to-image AI, also known as natural language image synthesis (NLIS), generates images from textual descriptions.
• Motion capture has been around for decades, but AI is rapidly transforming it, capturing cinematic-quality motion data without the need for markers or suits.
• Crowd simulations have become far more efficient and realistic with the introduction of AI tools like Golaem Crowd and Miarmy.
• Text-to-model is capable of generating 3D models, environments, animations or entire scenes from written descriptions.
• Facial reenactment is being used mostly to translate movies into various languages: it analyses the facial movements of actors and modifies them so the lip movements match the translated dialogue.
• Text-to-video is another emerging area in VFX, deploying AI algorithms to generate complex animations and video without the need for manual frame-by-frame animation.
• Finally, there are the controversial deepfakes, which also have legitimate uses - for example, creating digital doubles for actors.
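Tools like Golaem Crowd and Miarmy build their intelligence on top of classic agent-based steering, in which each crowd member reacts to its neighbours frame by frame. As a purely illustrative sketch of that underlying idea (not either tool's actual API, and with made-up parameter values), a minimal flocking-style crowd step might look like:

```python
import random

class Agent:
    """One crowd member with a 2D position and velocity."""
    def __init__(self, x, y):
        self.pos = [x, y]
        self.vel = [random.uniform(-1, 1), random.uniform(-1, 1)]

def step(agents, neighbour_radius=5.0, cohesion=0.01, separation=0.05):
    """Advance the crowd one frame: each agent steers towards the centre
    of its nearby neighbours while pushing away from ones that are close."""
    for a in agents:
        near = [b for b in agents if b is not a and
                (b.pos[0] - a.pos[0]) ** 2 +
                (b.pos[1] - a.pos[1]) ** 2 < neighbour_radius ** 2]
        if near:
            # cohesion: drift towards the local centre of mass
            cx = sum(b.pos[0] for b in near) / len(near)
            cy = sum(b.pos[1] for b in near) / len(near)
            a.vel[0] += cohesion * (cx - a.pos[0])
            a.vel[1] += cohesion * (cy - a.pos[1])
            # separation: push away from each nearby agent
            for b in near:
                a.vel[0] -= separation * (b.pos[0] - a.pos[0])
                a.vel[1] -= separation * (b.pos[1] - a.pos[1])
    for a in agents:
        a.pos[0] += a.vel[0]
        a.pos[1] += a.vel[1]

random.seed(1)
crowd = [Agent(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(50)]
start = [tuple(a.pos) for a in crowd]
for _ in range(10):
    step(crowd)
```

Production crowd tools layer far more on top - behaviour trees, motion-capture clips, terrain adaptation - but the per-agent local-rules loop above is the core pattern they scale up.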

This list, however, is by no means exhaustive. Jamie Watson, co-founder and executive creative director at Heckler, explains how at a recent Facebook developer conference in San Jose, one of the more interesting talks concerned the role of AI in the restoration of photos and, subsequently, movies. “The speed of the process even a few years ago was impressive. Denoise, rebuilding broken frames, increasing resolution, adding colour. That was 2019.” Today, Jamie says, there are multiple applications that do this successfully, which Heckler also uses to their advantage in VFX. “Building Matte Paintings at 12k+ requires a level of detail that can easily be achieved with these tools at our disposal, the ability to source degraded images and include them into the scene.”

He goes on to say that Heckler has also embedded text-to-image AI generation into their pipeline for concepting and pitching ideas. “Our current tools are adapting to new AI workflows,” Jamie adds. “Attempting to take the manual nature out of our work and automate. There have been mixed results with this so far, but we can only see it improving. At the rate the industry has taken to AI, it won’t be long before the tools are fully integrated.”

Similarly, while Airbag isn’t yet using AI in VFX, director and partner Travis Hogg says that they may well have started by the time you read this article. Keeping up with the tech developments in the sphere, and the novel ways to implement them, has almost become a second full-time job, but the truth is that incorporating AI directly into the software pipeline will be the turning point for post and production. “When you ask Houdini software to build a node tree, or Nuke to rotoscope ‘that tree in the corner’, is when it will be a real change to our day-to-day workflow,” says Travis. “We are at the tipping point right now, but don’t know how far this affects all of our industries. The one hurdle with AI is the lack of control over the final image. When you have clients that expect and require specific accurate control of an image, it’s very hard to trust that the AI will deliver.”

However, what Travis sees as a feasible option to this is not incorporating AI tech into creating the final image, but creating the assets that build it. “For example, if we are texturing a 3D building, we are asking for a brick wall image that fits the 3D geometry. Or, if we need a stormy night sky for the background plate. The main point is we need to still keep full control of the image and at the moment AI can’t deliver that… yet.”

For David Casey, creative director at Embassy, AI’s onward march is inevitable and it is here to stay, be it through mundane tasks like rotoscoping or more high-level initial concepting, and Embassy doesn’t shy away from playing with its many iterations. “The recent inclusion of AI tool sets directly in The Foundry’s Nuke opened up a lot of possibilities for our compositing team,” he says. “Initial first pass composites, often known as ‘temps’, previously required some time for rough rotoscoping and cleanup work, which can now be handled to some degree by these AI tools.” EDISEN’s creative director Jay Harwood also says that the team is getting to grips with the new tech. Recently, they used deepfake technology on a project that would historically have cost far more time and money and required a much larger team, making it very difficult to handle.

Why is it picking up now?

But, however you look at it, all of this is normal and comes naturally with technological advancement, especially when that advancement is non-linear. “The VFX community has always naturally adopted cutting-edge techniques, as we’re always looking for things to happen quicker, easier and better. Because there’s a lot of iterations to get something to a place of creative alignment, if that’s photorealism, abstract, or something more fun we’re chasing,” says Jay. So, if that is the case, why are these conversations gaining speed now? Many of the AI and machine learning tools the VFX community commonly uses today - for masking, tracking and in-painting - have been around for a number of years, albeit with limited usefulness and clunky platforms.

Embassy’s David Casey believes that the industry has found itself at a crossroads. “Hardware is relatively cheaper, the models and datasets have been refined and expanded to be production ready, and high-powered computing is more accessible than ever, as the world was pushed to remote and cloud-based computing over the last three years,” he says.

In his view, virtually anyone can now train a model on their home computer or rent time on high-powered cloud computing systems - it is the democratisation of AI and machine learning that is really responsible for the huge interest and growth over the last year and a half and ‘therein lies the rub’, as David puts it. 

So, should the VFX community be worried about robots taking over?

This is a complicated one to answer. Jay Harwood believes that the layers of control required in client-based work are where a human touch is still very much needed, as the technology doesn’t have the flexibility one needs for the ‘anything is possible’ mentality that the media world loves. Besides, “clients certainly don’t understand that they can’t make comments on everything, but when most of the work is being done by AI there are a lot of variables we don’t have full control over.” While he admits the technology is incredible and is set to become even better over the next few years, the industry should rest assured it’s not the silver bullet people might think it is.

“Historically, productions tend to go on the 80/20 rule, meaning that the first 80% takes the same time as the final 20%. Now that the needle is moving, AI helps us to get that 80% faster, but the final 20% still needs human input to apply the final polish required. That is still an incredible statement - the times are changing and, optimistically speaking, the AI evolution will help do more of the fun stuff, allowing more artists to own more of a shot - empowering them to have more ownership of the stories we tell.”

Jay explains that, even with that said, some are worried. “We’ve had a few individuals reach out after recent articles talking about their fears with AI,” he shares. “Which ultimately came down to them worrying that they will have to retrain on software that does it quicker. And after you’ve been in this industry a couple of decades, that can be a drain.”

David Casey is equally reassuring in his conviction that there is no merit in the worry that AI will replace the artist, because AI simply cannot do anything without the artist. “The real fear here is the democratisation of the skills that VFX artists have spent considerable time and money investing in,” he says. “That some random John Doe down the street can create acceptable results. I ask though, is that such a bad thing? The visual effects industry has a capacity challenge. The growth of the streamers, Netflix, Apple, and Disney, now finds us with too much work and not enough artists. If AI and machine learning can spread the load and allow human artists to focus on the more important tasks, then that’s a win.”

Those important tasks, in case you’re wondering, sit at the borders within VFX where science, technology and craft merge. And while AI and machine learning might help with the science and tech side, they will less easily replace art and craft - the sphere where decisions are made on a highly personal, subjective basis and are mixed with emotion, experience and context. Not to mention trial and error, something humans are pretty good at and something that won’t go away anytime soon. No.8 London’s Jim Allen thinks along similar lines: “While some may worry that AI will take jobs away from artists, it is more likely that AI will augment the work that artists do, allowing them to focus on more creative and challenging tasks. Ultimately, AI has the potential to make VFX more efficient, cost-effective, and visually stunning.”

Airbag’s producer Nick Venn’s prognosis is equally optimistic and relies on collaboration rather than picking an overlord in the VFX world: “It's hard to see AI taking over and being able to respond with as much accuracy, emotion and context as a team of human artists and we'll develop and evolve alongside it, rather than be taken over by it. Famous last words?”

LBB Editorial, Fri, 05 May 2023 16:21:15 GMT