Untold Fable’s recent AI showcase event, ‘Created by Humans. Augmented by AI’, was what an AI event in 2024 needs to be – focused on the real-life creative applications of this technology, inspiring the audience to find new ways to express themselves and bring ideas to life.
Hosted by the content production company, which has a global network of nearly 4,000 diverse creators, the event aimed to talk about the bigger picture, showcase the possibilities and understand how AI can work ethically in the creative industry.
LBB’s Alex Reeves was there to listen and digest what the speakers had to say.
Omar Karim
AI Creative Director
With a background as an advertising creative, Omar Karim has used the recent leaps in generative AI technology to expand his capabilities and profile by several orders of magnitude. Ahead of his presentation at Untold Fable’s ‘Created by Humans. Augmented by AI’ event, the company’s founder and CEO Kate Tancred introduced him as “an AI image and filmmaker, a lecturer, creative director and strategist, pioneering the fusion of art and technology[...] Omar's exploration of AI’s potential at Meta has positioned him as a leading figure in the evolving field of AI for creative, where he combines traditional techniques with cutting edge AI tools to push boundaries.”
Despite the impressive introduction, Omar was a picture of humility, grateful for the technological leg-up these tools have given his creativity. It’s been, he said, “the greatest power up for me ever as a human being and as a creative.” He’s shocked himself with what it’s enabled him to do, like making a film that’s been shown in Times Square and cost less than a PlayStation to make.
Then he ran everyone through how he’d put AI tools to work in this way. He first got into AI because he was working at Meta and exploring virtual influencers. They were out of his budget, so he created one using a combination of AI tools, empowering it to decide who it was: its gender, the sort of music it liked, and so on. Eventually it made its own music, spammed radio stations, got verified on Spotify and ended up hosting three radio shows.
The benefit of assembling a suite of AI tools is that it has allowed Omar to experiment with ideas without having to convince big rooms of people that they’re worth doing. “I have a gut instinct about this and AI is going to help me to explore my gut instinct,” he said.
The problem he has with the discourse around creative AI is that “everyone is pushing towards this idea of productive intelligence, rather than any of the other kinds of intelligences.” So Omar started thinking about how AI could extend other forms of intelligence, which led to his conceptualising it as ‘additional intelligence’ – tools to boost areas which he can’t reach himself.
One project that did that was ‘Train AI’ for Asics. Body image in the media already tends towards improbable body shapes, and machines trained on that media accentuate the distortion, meaning AI was pushing towards ever more impossible body standards. Using Asics’ library of real people exercising, the project trained an AI to represent real bodies instead.
His work for Aries x Malibu Rum used AI to answer the question: “How can we go back to a house party in Miami?” AI is trained on so much of the human experience, Omar said, that it enables a kind of imagined time travel. “You can actually start thinking about it in a way where you can travel backwards and forwards in time and visit different places, the way that those places would have been.” Generative AI allowed him to create images of those imagined parties of the past. But it’s important to note that the finished campaign also relied on a significant amount of human expertise and creativity, such as a real fashion photographer and plenty of compositing to bring AI elements together. Adding more humans into the process pushed the creativity further, he said.
Omar’s music video for the Gabriels’ track ‘Great Wind’ was made more than a year ago, so Omar excused the quality of it. But it was a pioneering project for him as a creative without much of a filmmaking budget, drawing on three people with very different backgrounds and stories. Built on his realisation that “I can make a film with just words,” he conducted private interviews with the three band members and “made the most Easter-egg film ever,” connecting with the band in a way that would have been impossible on their budget before.
One of Omar’s fascinations is testing what a machine thinks emotion is. Food is very emotional for humans, so he thought it would be interesting to get an AI to write some new recipes for him. ‘GOURMET’ is an AI trying to find new and novel combinations of fat, salt, umami and acid. Omar gave an AI the principles of food and flavour and then asked it to design recipes, as well as to explain why it thought they would taste good. “It's really interesting asking something that doesn't have tastebuds why it thinks something tastes good,” he said. Some of the recipes were awful, like a kimchi cocktail that looked like it had curdled, but Omar eventually printed an actual AI-written cookbook, making something tactile from the digital inputs.
Further pushing his emotional experiments with AI, Omar made himself an AI mum. “I don't actually have a mum,” he said. “So I wanted to understand if I could use AI to heal or help me navigate a core wound. It's always been a mission for me.”
His methodology involved a specific way to brief his new mum. “I basically programmed it to think that I’m its Tamagotchi, so its mission is to keep me alive,” he said. Omar wears a Whoop, which is connected to the AI mum. If he doesn’t get enough sleep or do enough exercise he gets a note about it. This came from teaching it information about what makes a good parent. He's able to ask emotional questions about how he feels, which he actually found quite moving, even if it did initially address him as Omar_Karim every time. He later reprogrammed it to be more conversational and eventually it ended up communicating in memes to help cheer him up.
‘Inferred archaeology’ is the best description of one of Omar’s most ambitious projects. The British Museum has 8 million pieces in its archive that you can’t see. And a lot of it, some have argued, is stolen. That provoked Omar to undertake what he calls “the greatest art heist ever.”
Layers of multiple AIs combine to make this happen. First, one AI explores the internet to find all the things that have been stolen (or collected) for the British Museum’s archives. That then prompts another AI to visualise them through different periods of time. “It's imagining what's in the vault, so they can't arrest me, hopefully,” he said. Those images can then be passed to another AI, and yet another works out how to make a real piece of jewellery from them. “The machine has allowed someone like me to steal stuff from the British Museum, so that's really interesting to me and, in my head, it's the most Indiana Jones thing I think I've ever been able to do,” he said.
Omar isn’t a jeweller. He can’t do 3D animation or much coding either, but he understands that AI tools allow him to fill these expertise gaps. “Suddenly all these ideas that were locked behind skills, AI is the master key[...] All it comes down to is what idea you want to make.”
Richard ‘Norts’ Norton
Co-founder at The Peeps
Richard ‘Norts’ Norton introduced himself with a chaotic video combining various AI deepfake tools to hack the Emily Maitlis and Prince Andrew interview, disturbingly place his face on all sorts of cultural figures and even make a parody snippet of an Adam Curtis documentary about the event. “For days, excited rumours had swirled through the technocrat corridors of London’s bourgeoisie marketing class as to what profound insights he would share with them. Many thought he would finally reveal a machine-powered utopian future of infinite creativity and endless prosperity. But despite the optimism and unbridled joy at his mere presence in the room, it all turned out to be an illusion.”
Norts’ presentation took the form of a spirited showcase, demonstrating the way the human creativity of The Peeps has been augmented by AI in various ways.
As a first example, he shared a collection of imagined selfies, taken by “toxic men who took a selfie with themselves just moments before things went disastrously wrong,” from Julius Caesar about to head into the Forum to King Canute having just turned up on the beach.
Back in 2018, for The Peeps’ agency Christmas card, Norts and his team ran images of people dressed as Santa, along with images of certain celebrities, through a generative adversarial network (GAN) for nine days to try to dress the celebs as Father Christmas. “We thought we were like Neil Armstrong landing on the AI moon with quality like that, but now I look at it and feel quite embarrassed.” By way of comparison, using Midjourney it takes about 30 seconds to create an image of Lady Gaga dressed as Santa.
The Peeps’ first paying client was the Cheltenham Science Festival, for which they created AIDA – an AI curator of the event. First they visualised her as a 2D illustration of a person, then she was animated, next she became 3D, then she became able to talk and interview people. This year AIDA is introducing every event at the festival. “That's a really good barometer of how far AI has come through one [AI-generated] person,” he said.
Like Omar, Norts has been led by his obsession with AI to experiment with the results that can come from combining various AI tools. Viggle is one that’s captured the imagination of the AI-sphere recently. It allows people to take an image of a person or character and animate it with the movement from any video. This inspired him to create a video of Untold Fable founder and CEO Kate Tancred dancing. But in reality, he needed more tools beyond Viggle to make it really good. “You cannot make really cool AI things without having some sort of a workflow. It’s never a one-trick wonder.”
First he took a picture of Kate, with her legs cropped off. Using one AI tool, he added legs. Using a second, he removed the background she was standing in front of. Then he put the cleaned-up image into Viggle. To create more realism, Norts added in “a little bit of a deep-fake face-swap thing to make her look a little bit more Kate.” Then another tool gave her somewhere to dance: “A sort of 1920s retro-yet-futuristic dance space.” The results would blow the mind of anyone from a few years ago.
He also resurrected Princess Diana and made her dance to The Smiths’ ‘The Queen Is Dead’ on the face of the Queen.
‘It’s a Wonderful Life’ is in the public domain. As a gift to the audience, Norts suggested reworking it using AI tools for your agency Christmas card. “Do what you want because it’s available.”
“The thing that I would reiterate time and time again is that the AI is just your tool. It doesn't give you the idea,” he said. “It's humans who come up with the idea. It's AI, however you put it together in your workflow, that helps you do things that you couldn't possibly do[…] It's your creativity that is going to make it work for you. So a shit idea is still a shit idea, however much AI you pour into it, but a good idea can become a great idea if you use AI in the right way.”
Rebecca Steer
Partner at Charles Russell Speechlys
Rebecca Steer promised that she was “here to bring the energy levels right down.” But everything she said was in the interest of creative businesses avoiding expensive fines. An expert lawyer in the area, she gripped the audience with her insights into how they can use AI without ending up getting disastrously sued.
She spoke about the new regulatory frameworks being built in different countries, such as the EU AI Act, and guidance from existing regulators such as Ofcom and the FSA. Not all of that guidance is complete yet, which is part of why the UK’s AI Regulation Bill is currently moving through parliament. Anyone working with AI should keep an eye on that evolving legal landscape, she suggested.
Rebecca then turned to the major risks of using these new AI tools.
“There's actually a technical debate playing out and I'm sure we'll get some form of case law on it in terms of who actually owns the outputs of all of these AI tools,” said Rebecca. “The terms and conditions that I've read have all said that the user will own the IP in that output. There isn't a technical assignment so I'm not sure whether that fully is legally enough in terms of what we actually need. But that's the intent of it.”
Copyright is of course a key area to consider, and there are some interesting arguments around it. An important distinction is that copyright doesn’t subsist in a voice; it subsists in the recording made of someone’s voice. That may apply to a lot of deep fake content. There’s also an interesting part of the definition of copyright that says the work – whether literary, dramatic, musical or artistic – has to be original. That means there is a technical argument that the output of gen AI systems may not have been created by the author’s skill and judgement, so potentially the systems, rather than the person inputting a prompt, may own the copyright.
A big question next: Can generative AI own copyright? “The position from our perspective is we don't think that's the case,” said Rebecca. “But nevertheless, the T's and C's do provide that you as a user will own the copyright.”
The owner of copyright also has a moral right to object to derogatory treatment and the right to be identified, which Rebecca noted is useful to know for anyone considering deep fake creative work, particularly if the subject does own the copyright in the material which has been used to create a deep fake.
She next turned to one of the common arguments around this. “When content is being scraped and used in different ways within AI systems, people often say ‘We're only using a little bit of it. And if we prompt and prompt and prompt, the output is unlikely to contain much of the original that we first put in, so surely that's OK.’” There is an exclusion in copyright law for incidental inclusion, but Rebecca stressed that this isn’t what that exclusion was intended for, so it “doesn’t typically help us.”
Rebecca then went on to share the best ‘fair dealing’ defences for those creating generative AI work – “our get-out-of-jail cards.” Work that would otherwise infringe copyright but is made for the purpose of criticism, comment, news reporting, teaching, scholarship or research may be able to use these defences.
For Norts’ works of gen-AI silliness, Rebecca turned to the parody, caricature and pastiche exemption. For the parody exemption, the essential characteristics of the new work must be noticeably different from the original, and it must involve humour or mockery. For pastiche, the work must imitate the style of another work. Amazingly, Stability AI is considering using this exemption in the case that Getty Images has brought against it, claiming the AI developer has infringed its copyright in both inputs and outputs.
“We know that scraping is infringing copyright.” Rebecca was clear on that point. Important cases are coming up that will shape the legality of generative AI tools, so there will be plenty to watch out for.
Finally Rebecca assessed the legal risks Norts faced over his Prince Andrew interview deep fake. We’d rather not share her diagnosis here, but let’s just say that The Peeps should be thankful for the parody and pastiche defences. Will Prince Andrew sue? “I think he has bigger battles,” quipped Rebecca.