Sam Wray is a creative technology director at George P. Johnson (GPJ) UK, a musician and a visualist, based in London.
He has a persistent interest in interactive arts and emerging technologies, and their place in the everyday and beyond. Sam thrives on figuring out the unknown, collaborating across disciplines, and creating for the digital, the physical and the in-between.
Sam> Using AI as a creative extension, a new creative partner. I’ve been able to use it to riff on ideas for clients more quickly than working solo. We all know large language models (LLMs) are pretty good at summarising, so I’ve also been using them to make sure write-ups are concise and to the point without losing too much detail along the way.
Sam> We use Google Gemini at GPJ; it’s part of Google Workspace. When I’m using it to bounce ideas off, I can also ask Gemini to create some inspirational images showing how the design of the creative might look – it’s really great to be able to create all in one place and not bounce between tools.
Sam> Generating images can occasionally become difficult, especially when you’re trying to blend interactive concepts like human interaction and show how client messaging can come through in the visuals. It often takes rethinking the prompt entirely or editing the resulting image heavily.
Mostly though, I’ve been able to get what I want out of AI – especially when using storyboarding tools.
Sam> I bring the ideas to the AI; I try not to use it as a starting point. If I don’t start the thought myself, I can easily get lost a few prompts down the line. Somebody needs to be driving the process, and currently that isn’t the AI.
Sam> I bring the idea to the table and I transcribe from the AI’s response, to allow my own tone and personality to come through. AI is excellent at forming sparse ideas into something more sophisticated – especially when you can feed it information about clients or ask it to find more – but I’ve found that without a human behind the idea at the start, the creative can become dry.
Sam> I believe the hype around AI was too strong at the start, as is usual with new technologies. It’s not brand new, as a lot of people seem to think. We’ve actually all been using a form of AI for many years – just under the less fashionable name of ‘machine learning’.
Your phone’s keyboard has been learning how you type for years – Apple implemented a neural network to correct users’ individual typing on the iPhone back in 2014! In fact, it was interesting to see companies and products transition in the past few years from ‘machine learning’ to ‘AI’ – we really do love a brand name.
Sam> My background in music and the arts left me quite conflicted when generative AI models for image generation launched in the past few years. When it comes to creator attribution for the source materials, I believe AI models should be trained on known data – not just blindly scraped from the web. At GPJ we often use Gemini and Adobe Firefly, which are both trained on non-copyrighted data sources.
Sam> My own attitude towards AI has shifted in the past few years. I’ve been able to use LLMs to help me solve some complex computer programming problems I’d been investigating for many years before – but when LLMs first came on the scene for coding, I really didn’t see the benefit, as they only saved a little time on repetitive tasks.
Now that text generation has become more sophisticated and you can build huge prompts grounded in trusted sources, the use cases for AI have become even more widespread.
Midjourney-inspired build for Salesforce at GPJ. Pinterest moodboards were becoming repetitive, so we used Midjourney to spice things up. The image to the right is the final build.
Sam> It’s a real mix of feelings. I believe that, in the right hands, creativity will ultimately be expanded by AI and benefit from it. It’s becoming more ingrained in every creative tool, but too much reliance on AI generation could lead to mediocre ideas and visuals, where taking longer over a well-positioned moodboard or sketch could explain the ideas better.
There’s still a balance to be found.
Sam> Generative code art has been around since the earliest computers. Generative art (before 2020) was mostly mathematical code running on graphics processing units (GPUs). AI does run on a GPU, so could it be classed the same as generative art? I’m not sure.
Generative AI has certainly augmented existing forms of media and art, but unless we find a new way to consume media I don’t believe it will change existing channels.
However, it may contribute to a focus on physical or solely human-created artworks or performances.
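The kind of pre-2020 generative art Sam describes really was just mathematics producing an image. A minimal sketch of the idea (a hypothetical example, not from GPJ’s work) – the whole picture comes from a deterministic formula, with no training data involved:

```python
import math

# Pre-AI generative art in miniature: a deterministic image computed
# purely from mathematics -- here, the interference pattern of two
# radial sine waves, rendered as ASCII. The same formula in a GPU
# shader would produce the same picture; no training data is involved.
WIDTH, HEIGHT = 60, 20
PALETTE = " .:-=+*#%@"  # darkest to brightest

def brightness(x: float, y: float) -> float:
    """Interference of two radial sine waves, normalised to 0..1."""
    d1 = math.hypot(x - 0.3, y - 0.5)  # distance to first wave source
    d2 = math.hypot(x - 0.7, y - 0.5)  # distance to second wave source
    v = math.sin(d1 * 20) + math.sin(d2 * 20)  # range -2..2
    return (v + 2) / 4

rows = []
for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        b = brightness(i / WIDTH, j / HEIGHT)
        row += PALETTE[min(int(b * len(PALETTE)), len(PALETTE) - 1)]
    rows.append(row)

print("\n".join(rows))
```

Tweaking the constants (wave frequency, source positions, palette) yields endless variations – which is exactly the appeal of this style of work.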
Sam> AI has already transformed my role and discipline. I’ve found I can come to conclusions and format ideas and documents so much quicker than I could previously.
As models continue to become more complex, I can see myself using AI to ingest more documents as sources of truth to help expedite processes even further – and maybe, if a rabbit-style “large action model” becomes available to control existing software, using AI to automate other tasks like timesheets, haha.