
Teching Out On Computational Storytelling With R/GA Asia Pac

14/09/2015
Advertising Agency
Sydney, Australia
The agency recaps its Spikes session on how code, technology and experience design can create real-time content and experiences

At last week’s Spikes Asia, Jeff Donios, senior technology director at R/GA Sydney, Jonathan Han, art director at R/GA Singapore and NYC, and Laurent Thevenet, technical director at R/GA Singapore, took to the stage to co-present a session titled ‘Art, Code and Tech: Creating Connected, Real-Time Stories and Experiences’.

The session highlighted how code, technology and experience design can combine to make real-time content, ranging from visual storytelling to data-driven music and soundtracks.

It also demonstrated how R/GA’s systematic, connected approach to design, as discussed by Nick Law at Cannes Lions, sits at the heart of the agency’s daily work with brands: marrying creativity with technology in ways that are relevant and impactful for their audiences, rather than tech for tech’s sake.

Here, the trio catch up with LBB to dig deeper into their practice and process.

R/GA moves at the speed of culture. As brands compete, generating relevant content in the form of real-time storytelling becomes paramount.

To achieve this, we believe the future lies in Computational Storytelling: the intersection of artificial intelligence, the arts, cognitive psychology and philosophy. As Brené Brown puts it, “Stories are just data with a soul.”

At R/GA, we experimented with algorithms and the data within user-generated content to produce ‘creative’ outputs in the form of audio and video. We began with the written word and prototyped the first iteration of a software tool that could potentially automate the generation of an audiovisual experience.

We approached this information design challenge through the creation of a prototype. The prototype would take a sentence, break it down, and generate related audiovisual content. This involved the creation of an algorithm, the design of an experience and a visual system.

In creating the algorithm, it was important to formulate logic that could extract the essence of a written sentence. To do so, we started by systematically breaking down the components of a sentence. We identified each noun within the sentence and created a database of synonyms and descriptors for each noun. For example, the noun ‘fate’ had related synonyms such as ‘drive’ and ‘journey’, and secondary descriptors such as ‘atmospheric’ and ‘calmly’. Using these synonyms and descriptors, the algorithm could identify related visual content and dictate the sound patterns of the audio.
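To make that logic concrete, here is a minimal sketch of the breakdown step in Python. The noun database, its entries and the function names are illustrative assumptions, not the prototype's actual implementation.

```python
# Minimal sketch of the sentence-breakdown step. NOUN_DB stands in for the
# hand-built database of synonyms and descriptors described above; its entries
# and all names here are hypothetical.
NOUN_DB = {
    "fate": {
        "synonyms": ["drive", "journey"],
        "descriptors": ["atmospheric", "calmly"],
    },
    "city": {
        "synonyms": ["metropolis", "skyline"],
        "descriptors": ["restless", "neon"],
    },
}


def extract_nouns(sentence: str) -> list[str]:
    """Return the words in the sentence that the database recognises as nouns.

    A fuller implementation would use a part-of-speech tagger rather than a
    plain dictionary lookup.
    """
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    return [w for w in words if w in NOUN_DB]


def expand_noun(noun: str) -> dict:
    """Map a noun to the synonyms and descriptors that later drive the
    selection of visual content and the sound patterns of the audio."""
    return {"noun": noun, **NOUN_DB[noun]}


if __name__ == "__main__":
    for noun in extract_nouns("Fate led us to the city."):
        print(expand_noun(noun))
```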

Secondly, the design of the experience took a modular, systematic approach: the logic of the algorithm translated directly into the design of the prototype.

We have provided a walkthrough of how the prototype works:

When we enter a sentence into the prototype, the nouns are identified. For each noun, the prototype surfaces different iterations of what it could mean, and the tags within each iteration are displayed to give the noun a more detailed meaning. The user can drag and drop an iteration of each noun into the editor at the bottom of the prototype, and these selections are then compiled to generate an audiovisual experience.
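As a rough illustration of the data flowing through that walkthrough, the sketch below models nouns, their iterations and the editor in Python. The class names, fields and compile step are assumptions made for illustration; the prototype's actual internals have not been published.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the walkthrough above: each noun resolves to
# several "iterations" (candidate meanings), each carrying tags; iterations
# dropped into the editor are compiled into an ordered list of audiovisual cues.

@dataclass
class Iteration:
    noun: str
    meaning: str                               # one possible reading of the noun
    tags: list = field(default_factory=list)   # detail tags shown with the iteration
    clip_url: str = ""                         # related visual content
    sound_pattern: str = ""                    # pattern handed to the audio engine


@dataclass
class Editor:
    timeline: list = field(default_factory=list)

    def drop(self, iteration: Iteration) -> None:
        """Drag and drop an iteration into the editor at the bottom of the prototype."""
        self.timeline.append(iteration)

    def compile(self) -> list:
        """Compile the chosen iterations into the cue list that drives the experience."""
        return [
            {"visual": it.clip_url, "audio": it.sound_pattern, "tags": it.tags}
            for it in self.timeline
        ]


editor = Editor()
editor.drop(Iteration("fate", "journey", ["atmospheric"], "clips/road.mp4", "slow-pad"))
print(editor.compile())
```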



Systematic thinking was crucial to various aspects of this prototype. Throughout the build, we considered its application across multiple platforms and its ability to scale for different uses: as an art installation, a digital platform, or a service tool. This is key to designing a unified experience when the prototype is scaled and applied across multiple scenarios.

The engine that generates these stories currently lives on the web, but that is incidental: it is a proof of concept flexible enough to be applied to a range of scenarios. The prototype could even become an API, working in the background, learning and evolving as the algorithm is used in a variety of contexts.
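A minimal sketch of what wrapping the engine as an API could look like, using Flask purely for illustration; the route, payload shape and generate_story stub are hypothetical.

```python
# Hypothetical sketch of the engine exposed as a small web API (Flask used
# only for illustration; endpoint and payload shape are assumptions).
from flask import Flask, jsonify, request

app = Flask(__name__)


def generate_story(sentence: str) -> dict:
    """Stub for the sentence-to-audiovisual pipeline sketched earlier."""
    return {"sentence": sentence, "cues": []}


@app.route("/stories", methods=["POST"])
def create_story():
    payload = request.get_json(force=True)
    return jsonify(generate_story(payload.get("sentence", "")))


if __name__ == "__main__":
    app.run(port=5000)
```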

By utilising audio synthesis, we were able to generate music on the fly. As we continue to iterate, we may even be able to generate the perfect audio score for any given video in real time. While there is still work to be done, we see the potential to create a new vertical in the music industry. Electronic music could be the starting point for bots to generate music, eventually leading to music generation for live bands and DJs.

In building the prototype, we were fortunate to have a team that could work across disciplines to take on the complexities of the build. For a coder who has played with vintage synthesizers, moving to code-driven oscillators was a natural transition.
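For a sense of what handling oscillators in code can look like, here is a small, self-contained sketch that renders a few sine-wave notes to a WAV file with NumPy and Python's standard wave module. The frequencies, note lengths and descriptor mapping are invented; this is not the prototype's synthesis engine.

```python
import wave

import numpy as np

SAMPLE_RATE = 44100  # samples per second


def sine_oscillator(freq: float, seconds: float, amplitude: float = 0.4) -> np.ndarray:
    """Render one sine-wave tone as 16-bit PCM samples."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    return (amplitude * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)


# A descriptor such as 'calmly' could map to slower, lower notes; this mapping is invented.
notes = [sine_oscillator(f, 0.5) for f in (220.0, 277.18, 329.63)]

with wave.open("sketch.wav", "wb") as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(np.concatenate(notes).tobytes())
```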

As we experimented, we began to realise the potential of the prototype we were creating. We imagined replacing existing storylines with tweets. We hypothesised what could happen if we replaced a highly curated database of video and audio with open content sources like YouTube, Google Images, Spotify and Soundcloud. As consumers tell their stories in the form of tweets, a real-time story could be generated representing an aggregate of how consumers view a brand. Or imagine the different permutations stemming from a single brand belief, each contextualised to a user’s life experience.

Image: L-R: Jeff Donios, senior technology director R/GA Sydney, Jonathan Han, art director, R/GA Singapore and NYC, and Laurent Thevenet, technical director, R/GA Singapore.
