An interactive experience that makes it possible to visualise any goal in seconds, AXA Bucket List AI – created in collaboration with Publicis Groupe in Thailand, the Philippines and Hong Kong – is designed to empower people to protect their dreams.
Inspired by the fact that one in two of us will have our lives unexpectedly interrupted by a critical illness, a risk that is especially prevalent in Asia, the medical plan provider hopes to encourage people to live out their bucket lists, spurred on by visualising them in the app, because you never know what's around the corner.
In this interview, Publicis Groupe’s head of creative technology APAC, Laurent Thevenet, discusses the tech that powered the campaign, pre-generating thousands of images, and how the team kept visuals on brand.
Laurent> Most people ignore critical illness; it feels too confronting, too far outside their control. Yet cancer and heart disease remain major threats across Asia.
AXA set out to change mindsets, turning fear into action, with a simple, human approach: a text-to-dream portal that reframed illness from a crisis to an opportunity – to take control, protect yourself, and embrace a future worth fighting for.
Laurent> Without AI, it would not have been possible to visualise people's dreams. But we didn't limit ourselves to an out-of-the-box use of AI. We went through a prototyping exercise in which different prompt formats and AI models were tested. The objective was to render high-quality images that are inspiring but also on brand. We also needed a spectrum of realism, where some images pass as real while others are clearly synthetic, because some people’s dreams are highly imaginative.
Laurent> OpenAI APIs are used to control and enhance the prompts entered by people. Generation is done exclusively by Imagen3, which, at the time we built this campaign, was the best enterprise-grade commercial model available for our needs; at Publicis Groupe, we pay close attention to ethics and liability when using models. So, we tapped into our partnership with Google to use Imagen3 via the Google Cloud Platform.
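To make that concrete, here is a minimal sketch of what calling Imagen 3 through Google Cloud's Vertex AI can look like in Python. It is an illustration rather than the campaign's actual code; the project ID and prompt are placeholders, and the model ID shown is Vertex AI's published Imagen 3 identifier rather than a confirmed detail of this build.

```python
# Illustrative only: generate one image with Imagen 3 via Vertex AI.
# Project ID and prompt are placeholders, not the campaign's actual values.
import vertexai
from vertexai.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")
images = model.generate_images(
    prompt="A family sailing at golden hour, warm colour tone, cinematic composition",
    number_of_images=1,
)
images[0].save(location="dream.png")  # keep the output for the pre-generated library
```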
The images are actually not rendered in real time, for a couple of reasons. First, we wanted to create the best possible outputs, and that meant controlling the pre-generation process. Second, the latency of the user experience matters: users cannot wait 20-30 seconds for an image. It had to appear almost immediately. To associate a pre-generated image with a prompt, we used a model called CLIP, which captures the meaning of the prompt and finds the best image for it by searching through the vector embeddings of the pre-generated images. That matching happens in less than a second, keeping the experience smooth.
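As a rough sketch of that kind of CLIP matching, the snippet below embeds a small library of images once, then embeds an incoming prompt and picks the closest image by cosine similarity. It uses the open-source openai/clip-vit-base-patch32 checkpoint via Hugging Face, with hypothetical file names; the campaign's own implementation has not been published, and in production the image vectors would sit in a vector database rather than in memory.

```python
# Illustrative sketch of CLIP-based prompt-to-image matching.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# One-off step: embed the library of pre-generated images.
image_paths = ["dreams/safari.png", "dreams/northern_lights.png"]  # hypothetical files
images = [Image.open(p) for p in image_paths]
with torch.no_grad():
    image_embeds = model.get_image_features(**processor(images=images, return_tensors="pt"))
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)

# Per request: embed the user's prompt and return the most similar image.
def best_match(prompt: str) -> str:
    with torch.no_grad():
        text_embed = model.get_text_features(
            **processor(text=[prompt], return_tensors="pt", padding=True)
        )
    text_embed = text_embed / text_embed.norm(dim=-1, keepdim=True)
    scores = (text_embed @ image_embeds.T).squeeze(0)  # cosine similarity per image
    return image_paths[int(scores.argmax())]

print(best_match("see the northern lights with my kids"))
```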
You will probably wonder how we could have an image for any prompt. We actually created more than 10,000 images, some very specific, others generic enough to cover rarer prompts.
Laurent> Definitely. We did a lot of research on the visual direction upfront and used a Large Language Model to craft the best possible prompts for Imagen3 in order to match that direction. The prompts are long and detailed, not to describe the scenery but to achieve a high level of craft (light, composition, colour tone, etc.). No post-production was done. What you see is what the machine generated.
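In practice, that prompt-crafting step might look something like the sketch below: an LLM call that turns a bucket-list idea into a long, craft-focused image prompt. The model name and the wording of the instruction are assumptions for illustration, not the team's actual brief.

```python
# Illustrative only: turn a bucket-list idea into a detailed, craft-focused Imagen prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRAFT_BRIEF = (
    "Rewrite the user's dream as a single detailed image-generation prompt. "
    "Keep the scene the user described, but specify the craft: lighting, lens and "
    "composition, colour tone and mood, so the result matches a warm, optimistic, "
    "photographic visual direction."
)

def craft_prompt(dream: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": CRAFT_BRIEF},
            {"role": "user", "content": dream},
        ],
    )
    return response.choices[0].message.content

print(craft_prompt("run a marathon on every continent"))
```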
Laurent> As indicated earlier, we had to achieve near-zero latency in associating prompts with images. So, we used OpenAI's CLIP model to match each prompt with one of the thousands of pre-generated images. The image embeddings are stored in Pinecone, a vector database. All of this sounds complex, but thanks to it the experience suffers no noticeable latency.
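Serving those lookups from a vector database could look roughly like this sketch with the Pinecone Python client. The index name, embedding dimension and vectors are placeholders; in a real setup the 512-dimensional vectors would come from CLIP, as in the earlier matching example, and the index would be created ahead of time.

```python
# Illustrative only: store CLIP image embeddings in Pinecone and query them with
# the CLIP embedding of a user's prompt. All names and vectors are placeholders.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")      # hypothetical key
index = pc.Index("bucket-list-dreams")     # hypothetical pre-created index (512 dims)

# One-off step: upsert the embeddings of the pre-generated images with their file names.
index.upsert(vectors=[
    {"id": "safari-001", "values": [0.01] * 512, "metadata": {"file": "safari.png"}},
    {"id": "aurora-001", "values": [0.02] * 512, "metadata": {"file": "aurora.png"}},
])

# Per request: nearest-neighbour query with the prompt embedding, typically well under a second.
prompt_embedding = [0.015] * 512  # placeholder for a real CLIP text embedding
result = index.query(vector=prompt_embedding, top_k=1, include_metadata=True)
print(result.matches[0].metadata["file"])
```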
Laurent> It shows that Generative AI can be used to support the creative expression of users through text and image. We’re on the cusp of a new era of intelligent experiences, and this is a glimpse into that future.
Laurent> First, we learned that AI-powered experiences require new UX patterns and interactions we didn’t need before. Traditional design approaches can make AI feel static or less intelligent, so it's essential to design specifically for the dynamic, responsive nature of AI interactions.
Second, we discovered that Imagen3 is an exceptionally powerful tool and that Google is an outstanding partner. Moving forward, we’re expanding our collaboration with Google beyond creativity into media and production as well.
Laurent> We would have loved for these dreams to come to life in motion, but video generation takes time and costs much more than image generation. We anticipate that moving assets will become much more common in interactive campaigns like this as the price and time to generate decrease. Another avenue could have been to make these dreams collective (across families and friends), with AI helping to combine and personalise the output.
Laurent> AI is out there to be used. Many are using it, but knowledge is limited when it comes to choosing the best systems. Publicis Groupe has a strong focus on ethics and security, which means we only tap into enterprise-grade AI systems and our own CoreAI platform. By contrast, much of the AI-generated content shared on LinkedIn is not immediately relevant to agencies and brands because of how those AI systems are trained. It’s important to work with partners who share your concern for ethics and security.