Inside the Box: NCAA March Madness with Coke Zero

28/03/2025

Thinkingbox's latest project with Coca-Cola, for NCAA March Madness, transformed the hype of the game into a personalised experience for fans. The team spoke to director of innovation Patrick Daggitt to lift the lid - or crack the proverbial can, if you will - and learn more about the project and the impressive tech that powered it.


Q> Hi Patrick! Can you tell us the initial goal of the project, and how technology brought that vision to life?

Patrick> Hey! The goal was for Coca-Cola to use AI to create a unique fan hype video that would capture each fan’s energy at the NBA and WNBA All-Star game. Fans would go through a personalised experience that reflected their passion for the game. We started with a QR code system that registered fans, but this activation took it further. Before entering the booth, fans completed a quiz that asked about their hype style and other personality-driven questions. Their answers were linked to their QR code, which stored all their personalised data.
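The plumbing between the quiz and the booth isn't spelled out beyond the QR code, but conceptually it is a simple lookup: the code carries a token, and the token keys the fan's stored answers. A minimal sketch, with hypothetical field names and an in-memory store standing in for whatever backend the activation actually used:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class FanProfile:
    """Quiz answers captured at registration (field names are illustrative)."""
    name: str
    hype_style: str            # e.g. "loud", "cool", "superstitious"
    answers: dict = field(default_factory=dict)

# Stand-in for the activation's real data store.
profiles: dict[str, FanProfile] = {}

def register_fan(name: str, hype_style: str, answers: dict) -> str:
    """Create a profile and return the token the QR code would encode."""
    token = uuid.uuid4().hex
    profiles[token] = FanProfile(name, hype_style, answers)
    return token

def lookup_fan(token: str) -> FanProfile | None:
    """Inside the booth: scan the QR code, resolve it to the fan's data."""
    return profiles.get(token)
```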

Inside the booth, fans scanned their QR code. The system recognised them by name and assigned them a unique animated AI-generated background that responded to their movements. There were also text prompts encouraging fans to strike different poses, and if they hit certain key poses - like flexing their arms - special effects like lightning or stylised outlines would appear.
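Patrick doesn't name the pose-tracking library, so treat this as an assumption: a sketch of a single-pose trigger using MediaPipe landmarks, where a "flex" is approximated by both wrists rising above the shoulders.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
L = mp_pose.PoseLandmark

def is_flex_pose(landmarks) -> bool:
    """Rough 'flex' check: both wrists raised above their shoulders.
    (Normalised image coords: smaller y means higher in the frame.)"""
    lm = landmarks.landmark
    return (lm[L.LEFT_WRIST].y < lm[L.LEFT_SHOULDER].y and
            lm[L.RIGHT_WRIST].y < lm[L.RIGHT_SHOULDER].y)

cap = cv2.VideoCapture(0)   # stand-in for the booth camera
with mp_pose.Pose(model_complexity=0) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks and is_flex_pose(results.pose_landmarks):
            print("trigger: lightning effect")   # hand off to the renderer
        if cv2.waitKey(1) & 0xFF == 27:          # Esc to quit
            break
cap.release()
```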

The experience was recorded, integrated into a polished motion graphics package, and then instantly shared with the fan via email and on the booth screen.


Q> Let’s talk AI! How did it power the fan experience?

Patrick> The AI system generated dynamic backgrounds in real time based on each fan’s quiz answers. We used Stable Diffusion XL Turbo, which allowed us to process image generation at 27 frames per second - which is almost unheard of for real-time AI image rendering. Additionally, we implemented pose recognition that detected key fan movements.
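Thinkingbox's pipeline ran inside its own tooling, so the code below is only a sketch of the underlying technique: SDXL Turbo image-to-image via the Hugging Face diffusers library, which can restyle an incoming camera frame in one or two denoising steps. At 27 fps each frame has roughly 37 ms of budget, hence the single effective step and zero guidance.

```python
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

# Load the distilled SDXL Turbo model once and keep it resident on the GPU.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

def stylise_frame(frame: Image.Image, prompt: str) -> Image.Image:
    """Restyle one camera frame. strength * num_inference_steps >= 1, so this
    runs a single denoising step - the trade that makes real-time rates possible."""
    return pipe(
        prompt=prompt,
        image=frame.resize((512, 512)),
        num_inference_steps=2,
        strength=0.5,
        guidance_scale=0.0,   # Turbo is trained to run without classifier-free guidance
    ).images[0]

out = stylise_frame(Image.open("fan_frame.png"),
                    "electric neon basketball arena, comic-book style")
out.save("styled_frame.png")
```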

The AI didn’t just randomly generate visuals - it was curated. We used a custom-built tool to create a vast library of AI-generated pre-sets, ensuring that each video had a consistent, high-quality look while still feeling unique to each fan.


Q> Everyone loves AI, but how did you ensure that AI wasn’t just a gimmick but enhanced the fan experience?

Patrick> We wanted the AI to feel natural, not forced. The key was in the prompting and customisation - we designed a system where the creative team could directly control AI pre-sets and settings without needing deep technical knowledge. We built a tool that allowed us to tweak everything - the AI prompts, Perlin noise settings, and motion-based effects - so that each fan’s quiz answers meaningfully influenced the final output. In total, there were about 7,500 different pre-set combinations, ensuring that no two videos were exactly the same.
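The interview doesn't break down how 7,500 combinations arise, but it is a straightforward product of the pre-set dimensions. Purely as an illustration (these categories and counts are made up, chosen only so the product lands on 7,500):

```python
from itertools import product

# Hypothetical pre-set dimensions - the real categories weren't published.
hype_styles      = [f"hype_{i}"    for i in range(5)]    # 5
colour_palettes  = [f"palette_{i}" for i in range(10)]   # 10
prompt_templates = [f"prompt_{i}"  for i in range(15)]   # 15
effect_sets      = [f"fx_{i}"      for i in range(10)]   # 10

presets = list(product(hype_styles, colour_palettes, prompt_templates, effect_sets))
print(len(presets))  # 5 * 10 * 15 * 10 = 7,500 distinct combinations
```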


Q> For the fellow nerds, what did the tech stack look like for this activation?

Patrick> The capture hardware was all Blackmagic - Blackmagic cameras and capture cards fed video into NVIDIA RTX graphics cards using CUDA for processing.
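The DeckLink ingest itself goes through Blackmagic's own SDK, which is out of scope here; as a stand-in, this sketch grabs frames with OpenCV's generic VideoCapture and stages them on an RTX card as a PyTorch CUDA tensor, which is where a diffusion or pose model would pick them up.

```python
import cv2
import torch

cap = cv2.VideoCapture(0)   # stand-in for a Blackmagic capture device
assert torch.cuda.is_available(), "this path assumes an NVIDIA GPU with CUDA"

while True:
    ok, frame_bgr = cap.read()          # HxWx3 uint8 frame on the CPU
    if not ok:
        break
    # Move the frame to the GPU and normalise to the 0-1 float range
    # most image models expect (CHW layout).
    frame = (torch.from_numpy(frame_bgr).cuda()
                  .permute(2, 0, 1)
                  .float() / 255.0)
    # ... hand `frame` to the AI / compositing stages here ...
    if cv2.waitKey(1) & 0xFF == 27:     # Esc to quit
        break
cap.release()
```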

The AI-generated visuals were powered by Stable Diffusion XL Turbo, which enabled us to generate real-time video backgrounds at a high frame rate.

The final output was created in TouchDesigner, where the recorded fan footage, AI-generated backgrounds, and motion graphics were all composited into a polished, shareable video.


Q> An impressive stack! But what was the most innovative or technically impressive part of this project?

Patrick> The real-time aspect was the biggest breakthrough. We managed to generate AI-enhanced visuals at 27 FPS, which is incredibly fast for live image processing.


Q> Let’s reflect on the unsung hero, what was a major challenge that no one would notice but was crucial to making the experience work?

Patrick> Pose detection. We had to program the system to track specific fan movements in a certain order. It required setting timers and defining precise motion patterns to trigger the right visual effects. It was difficult, but we made it work. Recognising a single pose in an image is easy, but detecting a sequence of movements - like recognising someone doing the Macarena - was a massive challenge.
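No particular sequencing library is named, and the usual pattern here is a small state machine that advances only when the next expected pose appears within a time window and resets otherwise. A hedged sketch, with illustrative pose labels and timeout:

```python
import time

class PoseSequenceDetector:
    """Fires when a list of poses is hit in order, each within `timeout` seconds
    of the previous one. Pose names and the timeout value are illustrative."""

    def __init__(self, sequence: list[str], timeout: float = 2.0):
        self.sequence = sequence
        self.timeout = timeout
        self.reset()

    def reset(self):
        self.index = 0
        self.last_hit = None

    def update(self, detected_pose: str) -> bool:
        now = time.monotonic()
        # Too long since the last matched pose: start the sequence over.
        if self.last_hit is not None and now - self.last_hit > self.timeout:
            self.reset()
        if detected_pose == self.sequence[self.index]:
            self.index += 1
            self.last_hit = now
            if self.index == len(self.sequence):
                self.reset()
                return True   # full sequence completed - trigger the effect
        return False

# e.g. a Macarena-ish ordering of single-pose labels from the pose model
detector = PoseSequenceDetector(["arms_out", "hands_on_head", "hands_on_hips"])
```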


Q> Could you please stop doing the Macarena?

Patrick> Right. Sorry.


Q> Okay, so how big was this activation compared to other experiential projects you’ve worked on?

Patrick> This was one of the largest experiential activations we’ve ever done. It spanned over 200 square feet, with six booths, a DJ station, and a massive interactive setup.


Q> Do you think fans understood the level of tech behind this experience?

Patrick> Not at all! And that’s a good thing. Fans weren’t thinking, “Oh wow, Stable Diffusion is running in real time.” They were just excited by the magic of the experience. That’s the goal - to create something surprising, fun, and memorable without making the technology the focal point.


Q> If you had to sum up the project in three words?

Patrick> Energy. Fandom. Art.


Excellent, thanks Patrick! Now we’re thirsty.
