Behind the Work in association with The Immortal Awards

How ArtClass Used AI to Animate GoFundMe’s 2022 Stories

24/01/2023
Production Company
Los Angeles, USA
ArtClass director Paul Trillo and creatives from AKQA share how they used DALL-E and Stable Diffusion to celebrate donors’ generosity, writes LBB’s Ben Conway


In 2022, artificial intelligence (AI) became one of the biggest trends, not just in the creative world, but in the mainstream too. There was an explosion of AI-generated artwork, stories, Twitter bots and ads - all created with a variety of machine-learning AI systems that have recently become more accessible than ever. Midjourney, DALL-E, Stable Diffusion - all of these names are now entering common parlance as more and more people are exposed to the products of these highly developed artificial artists.

By now, most people will have seen still images generated by AI. And using relatively simple prompts, many will also have created such images for themselves - after all, it’s fun to see what Paddington Bear would look like riding a dragon made entirely of marmalade sandwiches. However, far fewer people have seen AI used for animation. At first glance, that might seem an impossible task for even the most futuristic of technologies - but you would be wrong.

ArtClass director Paul Trillo had been experimenting with using AI for VFX and animation when AKQA and crowd-funding site GoFundMe approached him to create an AI-generated mural to remember some of the most heartwarming stories of 2022. What developed from this, however, was a fully animated film that combined the talents of actors with AI imagery techniques.

To explore how this project was created and developed into such a technically impressive film, LBB’s Ben Conway spoke with Paul, as well as the creatives at AKQA - Jon Phillips, senior client director; Emlyn Allen, creative director; and Heather Harlow, executive producer. Discussing the complex production process, they explain how the team ‘tamed the chaotic nature of AI animation’ and why artists will only be replaced by AI if they allow it to happen.



LBB> So, how did this project come about? What was the brief like?


Jon> In 2022, a donation was made via GoFundMe every second, so the brand simply wanted to say thank you to all the donors who had given generously to help others throughout the year. GoFundMe is used by a wide variety of people, ranging from those who have hit tough times personally, to those seeking to help others in need, and people looking to make a positive difference in their communities. The stories for the film were selected to show that breadth. They ranged from a veteran’s house being rebuilt after a fire to pets being rescued after Hurricane Ian, and Ukrainian refugees being reunited.

Emlyn> Given the sheer number of powerful GoFundMe stories worth telling, the idea of using AI came into play; not only would it serve our community mural concept well, it would also allow us to produce at speed and scale. From our own early experiments, the AI medium showed enormous flexibility, and therefore felt like the right tool for the brief. Enter Paul Trillo, who was immediately excited that we were using the tools for narrative purposes. As the work developed further in motion, we were excited by the styles and compositions the AI was creating, but we wanted to ensure an authentic representation of GoFundMe’s diverse community in the final product. So instead of focusing on one particular community mural aesthetic, we used the AI to aggregate a more universal artistic style that represented all types of communities across America.

Paul> AKQA reached out [to me] after seeing some of my experimental video work on Instagram. Since June, I’ve been experimenting with different ways of incorporating AI VFX into live-action video. Each experiment looks to use the tech in a way I haven’t seen before. These videos made their way around the internet and eventually to some agencies. There aren’t really many other people experimenting with AI VFX who also happen to be commercial directors.



LBB> How did the mural idea develop into the animation we see now, and what were some visuals or other creative ideas that immediately sprang to mind from the brief?


Paul> The original brief from AKQA was simple: create a side-scrolling mural with [AI image generation system] DALL-E and use inpainting as a metaphor for GoFundMe donors to show a positive change in scenery. I loved the idea of using AI for a more pointed storytelling approach but, as I tend to do, I wanted to make it more complicated. I had been keeping tabs on the latest advancements in AI (this was back in September) and I knew we could do so much more. I wanted to really push how cinematic this felt, create surprising transitions between the scenes and use real actors to portray the people in the paintings. It was about trying to tame the chaotic nature of AI animation and combine different animation and live-action elements to create something cohesive and grand.



LBB> The animation is far from traditional - among other tools, you used DALL-E 2 and Stable Diffusion’s image-to-image (img2img) mode to create the visuals, alongside AI rotoscoping. What did AI allow you to do that traditional animation couldn’t?


Paul> As controversial as AI is, it is undoubtedly a tool that allows us to create in ways we’ve never seen before. There is a lot of criticism on the 2D image side of AI generation, but I feel that as a video and animation tool it draws a little less flak. My hope with this project was to show people that AI could be used not just in a flashy, viral, experimental sense but as a legitimate tool for VFX and animation going forward.

I went to school for drawing and painting originally, and when I was even younger I thought I would be a Disney animator. I became more drawn to doing video work but always kept that stylistic tendency. This project enabled me to create the look of frame-by-frame animated paintings, something I never thought would be possible - it wasn’t even possible in July of 2022. While the AI was incredible at designing the background assets and other elements for each scene, it was the process of regenerating and recreating each frame to look like a unique painting that had me truly excited.



LBB> Many people have used an AI like DALL-E before, but creating a moving picture with one seems almost beyond belief for the average person! Can you walk us through the process of turning actors into AI-generated images, and AI-generated images into an animation like this? What is ‘AI rotoscoping’?


Paul> For sure! DALL-E is what showed me that AI image generation could be used as a formidable VFX tool. However, we primarily used Stable Diffusion, which was developed in collaboration with the AI/ML video company Runway. Stable Diffusion is open source, which makes it incredibly powerful and has allowed for the process of inputting an image as the source material for the generated image. This is called ‘img2img’ - available in the ‘Automatic1111’ web UI, which requires a beefy graphics card to run. I had been wanting to use this tool in a meaningful way, so when the project came around, I thought this aspect could add a lot of humanity to the piece, in contrast to some of the lifeless AI animations out there.
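
To make that concrete, here is a minimal sketch of a single img2img pass, written against the open-source diffusers library and the publicly released Stable Diffusion 1.5 checkpoint credited to Runway. The prompt, strength and file names are illustrative assumptions, not ArtClass’s production settings.

```python
# A hypothetical single-frame img2img pass with Hugging Face diffusers.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # the "beefy graphics card" Paul mentions

init = Image.open("actor_frame.png").convert("RGB").resize((768, 512))

painted = pipe(
    prompt="figures in a painted community mural, warm light",  # illustrative
    image=init,               # the live-action frame is the source material
    strength=0.5,             # how far the model may repaint the input
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(42),  # fixed seed
).images[0]
painted.save("actor_frame_painted.png")
```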

What was great is that we could change the actors' look through img2img, not just in terms of painterly style but their characteristics. We could also shoot them on a white cyclorama stage, rather than a green screen, which is more difficult to light. We used Runway’s AI-trained rotoscoping tool to easily isolate each actor from the white stage, then processed each frame at 10 frames per second through img2img to give them that hand-animated look. We then created a background using Stable Diffusion in Runway, which allows you to tweak the scene endlessly.

Once we landed on the look for each scene, we brought them into After Effects and broke them up into 3D layers so we had full control of the camera move and transitions. We then exported those animations once more through Stable Diffusion img2img so that it all felt cohesive and each frame felt like a unique painting.
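
As a rough illustration of that per-frame pass (again an assumption-laden sketch, not ArtClass’s actual pipeline), the loop below repaints a folder of frames extracted at 10fps, reusing one prompt and one seed per scene so successive frames stay coherent:

```python
# Hypothetical batch version of the pass above: repaint every frame of a
# scene with one shared prompt and seed, so the "painting" stays coherent
# while each frame still shimmers like a hand-painted cel.
import glob
import os

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

SCENE_PROMPT = "painted community mural, expressive brushwork"  # invented
os.makedirs("painted", exist_ok=True)

for path in sorted(glob.glob("frames/*.png")):  # frames exported at 10 fps
    frame = Image.open(path).convert("RGB").resize((768, 512))
    out = pipe(
        prompt=SCENE_PROMPT,
        image=frame,
        strength=0.45,  # low strength keeps poses and camera moves readable
        guidance_scale=7.0,
        generator=torch.Generator("cuda").manual_seed(1234),  # same seed per frame
    ).images[0]
    out.save(os.path.join("painted", os.path.basename(path)))
```

Keeping the seed and prompt fixed per scene is one common way to damp frame-to-frame flicker; the strength setting then controls how much of the live-action frame survives the repaint.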



LBB> The art style is very consistent throughout - was that a challenge to achieve?


Paul> It certainly was, given that each scene had so many different elements. We experimented with a lot of prompting to get it right. I wanted to create something that felt classic but also something that you had never seen before. Once we developed the aesthetic (not using any artist names), we applied a similar style prompt to each scene.
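
As a toy illustration of that shared-style-prompt idea (the scene descriptions and wording below are invented, not the production prompts):

```python
# Hypothetical: one shared style suffix appended to every scene prompt,
# so all scenes render in the same "community mural" aesthetic.
STYLE = ", painted community mural, muted palette, visible brushwork"

scene_prompts = {
    "veteran_house":  "a rebuilt house at golden hour" + STYLE,
    "hurricane_pets": "rescued pets on a flooded street" + STYLE,
    "reunited":       "a family embracing at a train station" + STYLE,
}
```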



LBB> Had the AKQA team seen AI technology being used in this way before? What was your reaction to seeing the amazing technological process underway?


Heather> Absolutely not. We had heard of AI technology being used in more gimmicky ways - for example, app photo filters. But after meeting with Paul, we started a deeper dive into what else was being done out there in the world in terms of collaborating with AI as a creative partner in the process of storytelling. We found there was a lot of experimentation happening across the multiple AI tools available, but nothing was being done on the creative level of this GoFundMe project.  

I was very captivated by the imagery after seeing our very first AI-animated clip, which was only a few seconds in length at that stage. It evoked such a profound and beautiful curiosity, like seeing movement in a way I had never seen before. Beyond being a source of inspiration - aggregating and manipulating found imagery based on human prompts - it was obvious in that moment that AI has the potential to be a tool that speeds up the more tedious stages of our production workflow, allowing more time for the human creative process.



LBB> What was the hardest challenge you faced on this campaign and how did you overcome it?


Paul> I’d say the hardest part was keeping the look consistent from frame to frame, especially for the people. We wanted to lean on the AI animation aesthetic to some degree, but we didn’t want to let it dominate. It was really about maintaining control, so that every decision was deliberate and nothing was left to the AI to completely invent at random. There was a lot of experimentation in how we did the img2img process to achieve a look that doesn’t feel too frenetic. And even since we finished the animation, there have been a few advancements that would make this more consistent still.



LBB> What did you want to express through this film, as a brand and a community? What messages and goals does this help GoFundMe lead into 2023 with?


Jon> The film shows two things. Firstly, that there is a vibrant community of people who understand the power of kindness and are willing to step up and help others. And secondly, that GoFundMe enables real, meaningful change at the community and national level. 

As for 2023, the world now is not much different from 2022, so that spirit of generosity, and the realisation that we can lift each other up, is needed as much as ever.



LBB> Where do you see the future of this AI animation technology in this industry? 


Paul> I fully believe that AI will be an undeniable VFX tool in the future. Stubborn people who resist it and want to keep doing VFX the hard way are only talking themselves into irrelevancy. What’s important to consider in the film industry is that AI is here to make our lives easier and it will only replace you if you let it replace you. That’s why you need to be on top of what’s coming. I believe we can significantly cut down the number of hours it takes to achieve VFX and animation. But it will still require using the same old techniques and tools, in combination with the AI. I hope in the future I can customise and develop these tools.


