Is Generative AI Proving to be *Too* Creative?

12/02/2024
London, UK
Experts at Dentsu Creative, SmartAssets, Publicis Groupe Australia, M&C Saatchi Performance, Whalar and VML share how they’ve been working with and honing the output of AI tools, writes LBB’s Nisna Mahtani
I’m sure that while we’re all up to our ears in AI chat, we can acknowledge that it is already one of the biggest trends of the year. With ChatGPT, Midjourney and DALL·E delivering information and images to suit almost every kind of brief, many creatives are already integrating generative AI into their processes. But how reliable is this in the long run?

Misinformation is rife, and AI can occasionally spit out made-up facts and cite unreliable sources, so deciphering fact from fiction can often be a grey area. With fictitious quotes and references circulating in what some would describe as a world of pure imagination, we wanted to ask AI experts, creatives, strategists and prompt writers how they discern what’s real from what isn’t, and how that plays out when looking for creative truth.

To hear more about the prompting process and how they see the future of gen AI transpiring, LBB’s Nisna Mahtani speaks to experts from Dentsu Creative, SmartAssets, Publicis Groupe Australia, M&C Saatchi Performance, Whalar and VML.

 

Alex Hamilton 

Head of innovation at Dentsu Creative UK


In a world of AI-generated content, 3D worlds and gaming engines, it’s difficult to determine what’s real and what’s fantasy.

Critical thinking, verification, and a healthy dose of scepticism are therefore essential for anyone dealing with AI-generated content. The challenge is that humans nearly always look for the shortcut, meaning that, to date, the rigour applied to evaluating anything AI-generated has been less than it should be.

At Dentsu, we’ve put in place an evaluation framework that our teams adhere to when assessing content generated by a large language model. 

A crucial part of the framework is ‘context’. Generative AI models often lack contextual awareness and therefore occasionally produce information that is technically accurate but irrelevant or misleading in a specific context. We therefore always encourage our creatives to consider the context when generating prompts and assessing the output. 

The quality and specificity of the prompt provided to the AI model are also key. A well-crafted prompt can guide the AI towards more accurate and relevant responses. Sharper prompt writing comes with training, and we are in the process of upskilling clients in this regard.
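As a hypothetical illustration of that difference (the brand, product facts and word count below are invented for the example, not taken from Dentsu’s framework), compare a vague brief with a context-rich one:

```python
# A vague prompt invites the model to improvise and, potentially, hallucinate:
vague_prompt = "Write about our new trainers."

# A specific, context-rich prompt steers the model towards accurate, relevant
# output by supplying the facts and explicitly forbidding invention:
specific_prompt = (
    "You are writing for a UK running brand. Using only the product facts "
    "below, draft a 50-word product description. Do not invent features.\n\n"
    "Facts: mesh upper, 28mm heel stack, recycled-foam midsole, 260g."
)
```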

It’s also important to note that not all AI models are made equal, with some more adept at generating factual information and others, in our experience, displaying tendencies to generate fantastical content. Any agency worth its salt will have these nuances mapped, so that teams are aware of what they should be looking for if certain models are used to create certain outputs. 

Ultimately, we can’t currently remove the hallucinatory nature of generative AI models, but we can put checks and balances in place, with a splash of common sense, to ensure these complex brains don’t wander too far from the path of reality. The challenge is whether humans are willing or skilled enough to be the shepherds that AI needs.


Lindsay Hong

CEO and co-founder of SmartAssets


It is entirely understandable that there are concerns about the quality of content being generated by AI. The technology has advanced extremely fast, and many people are still only now learning how it works and what it can do. As with any technology developed by humans and fuelled by human-generated data, it is bound to produce errors or ‘hallucinations’ that some may deem ‘too creative’. However, the question we should actually be asking is: do we care?

The truth is that not all content is equally important, and we have a way to go to better segment content types based on their need for accuracy, quality and realism.

For example, it is probably of less concern if a short-lived piece of social media content, designed to amuse and entertain, contains some crazy images. In fact, this can add to the entertainment, as we have seen with the Instagram trend of prompting AI to make images increasingly extreme. We asked DALL·E to make Ken look like a Yorkshireman, then more Yorkshire, then extremely Yorkshire.


This is completely different from creating informational content such as summaries of regulations, court rulings or medical diagnoses, where accuracy is key. What people need to understand is that gen AI produces outputs which it identifies as statistically likely based on the input data it has been trained on. This is not the same as a search function which, for example, checks lots of documents to find the ‘right’ answer.
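A toy sketch of that distinction (the probabilities and documents below are invented purely for illustration, not drawn from any real model): generation samples from a likelihood distribution, so a plausible-but-wrong answer can surface, while search only returns what is actually in the corpus.

```python
import random

# Hypothetical next-word distribution for "The Eiffel Tower was completed in ..."
next_word_probs = {"1889": 0.55, "1898": 0.30, "1989": 0.15}

def generate() -> str:
    # Samples a statistically likely continuation; correctness isn't guaranteed
    words, weights = zip(*next_word_probs.items())
    return random.choices(words, weights=weights)[0]

def search(documents: list[str], query: str) -> list[str]:
    # Returns only what actually exists in the corpus, or nothing at all
    return [d for d in documents if query in d]

print("generated:", generate())  # may be any of the three candidates
print("searched:", search(["The Eiffel Tower was completed in 1889."], "1889"))
```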

The key thing here is for us as a society to identify how concerned with accuracy we are for different types of content and, rather than unnecessarily constraining gen AI’s creative power, to put in guardrails for high-risk content while leaving the door open for gen AI to create unique content where we want it.


Ilinca Barsan

Director of data science at VML US


Large language models (LLMs) are very good at predicting what answer you’d like based on your input. Most of the time, these predictions are pretty accurate. And almost all the time, they sound extremely convincing – which can be dangerously misleading. LLMs have happily made up entire books, songs and research papers on request; they have, with no remorse, even fabricated quotes and events for unsuspecting college students to copy and paste into their essays.
 
Practically, there are a few solutions to minimise hallucinations when it comes to text generation. The quickest one might be to play around with model parameters. For example, you can dial down the ‘temperature’ of a model – a parameter that, in a nutshell, controls randomness (temperature = 0 will get you more predictable, though rather boring, answers). You can also use embeddings, which is especially helpful if you want the model to refer to relevant external information before responding. You might choose to fine-tune a model for your particular use case – and, of course, you can engineer your prompt to prioritise factual accuracy.
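Here is a minimal sketch of the first two of those levers, assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment; the model names, question and document are illustrative, not VML’s actual setup:

```python
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Dial down temperature for more predictable, less 'creative' answers.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,        # 0 = near-deterministic; higher = more random
    messages=[{"role": "user", "content": "When was the Eiffel Tower completed?"}],
)
print(resp.choices[0].message.content)

# 2) Use embeddings to ground the model: retrieve the most relevant external
#    document and include it in the prompt so the model answers from real text.
docs = ["The Eiffel Tower was completed in March 1889."]

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = "When was the Eiffel Tower completed?"
best_doc = max(docs, key=lambda d: cosine(embed(d), embed(query)))
grounded = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{best_doc}\n\nQuestion: {query}",
    }],
)
print(grounded.choices[0].message.content)
```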
 
We have used all these methods in the past when working with LLMs – and will most likely continue to do so for the foreseeable future. While the companies behind some of your favorite generative models are working hard to minimise hallucinations (and have had quite a bit of success – getting an LLM to blatantly lie to your face has become much more difficult), they are unlikely to go away completely. After all, hallucinations are embedded into their probabilistic nature.

What this means is that we must still be cautious and deliberate in our use of the technology. High-stakes and sensitive applications need extensive collaboration with human experts if using LLMs truly cannot be avoided (spoiler alert: it probably can). Real-time text generation should be reserved for carefully selected cases and integrate any – or all – of the safety measures mentioned above.
 
Meanwhile, we can try to cautiously embrace hallucinations as a feature rather than a bug, in the right context. If used deliberately and behind the scenes, they can help us reframe questions, think about problems in unusual ways, and make connections that we as humans might not have thought of before.


Kiki Lauda

Technical director of production at Publicis Groupe Australia


In my opinion, the ‘dreaming’ or imaginative output of AI can be viewed as a genuine form of creativity, similar to human creative flair. This capability might not only be a step towards more advanced general AI, but could also be seen as a rudimentary form of artificial consciousness. Like humans, who have spent thousands of years distinguishing fact from fiction, AI also requires continuous teaching and refinement of its knowledge base. By constantly updating and correcting AI's understanding, we can guide it towards a level of accuracy and discernment comparable to human intelligence. 

This process is not just about feeding data to AI; it involves a deeper engagement where we continually refine its interpretive frameworks, much like nurturing human understanding over time. This approach might lead to a future where AI's creativity and consciousness evolve in parallel with its factual accuracy.
 
Embracing this evolution, much like we did with past technological advancements, calls for a paradigm shift in our legal, ethical and societal frameworks, ensuring AI’s responsible integration into our lives.


Francis Kuttivelil

Head of technology at M&C Saatchi Performance


At M&C Saatchi Performance, we’ve been on a deep dive into the world of large language models (LLMs) from OpenAI, AWS and Facebook (Meta), and one constant stands out: they all hallucinate – the industry’s nicer way of saying they make stuff up.

Before the whole ChatGPT craze, if an AI or ML model predicted the wrong outcome, it was just an ‘error’. Now it’s a ‘hallucination’, making these AI models seem almost human. However, it’s essential to remember that, at their core, LLMs are merely sophisticated algorithms designed to predict the most appropriate response based on the inputs they’re given, without any genuine reasoning or thought process.

So, why use these ‘hallucinating’ LLMs? Well, the adage that ‘all models are wrong, but some are useful’ holds true. While there is a good chance that the systems will hallucinate, the main use case we have found is ideation. Whether it’s generating initial ideas for a new campaign, drafting a document, or offering a fresh perspective on a piece of content or coding task, large language models provide an invaluable starting point. It’s like editing a first draft – yes, it might be rough, but it’s easier to refine something than to face a blank page.

We’ve discovered that the likelihood of hallucinations can be reduced by experimenting with various versions of your prompt. If the LLM gives a similar answer across different prompts, it’s less likely to be hallucinating; in essence, it’s a simple form of cross-validation.

For those more technically inclined, tweaking the ‘temperature’ setting of your LLM can adjust its creativity level. Lowering the temperature decreases creativity, thereby reducing hallucinations but potentially making the results less exciting.
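A minimal sketch of that consistency check, assuming the OpenAI Python SDK; the question, paraphrases, model name and scoring method are invented for illustration, not M&C Saatchi Performance’s actual tooling:

```python
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical paraphrases of the same underlying question
prompts = [
    "Who designed the Sydney Opera House?",
    "Name the architect of the Sydney Opera House.",
    "The Sydney Opera House was designed by whom?",
]

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0.7,      # leave some randomness so inconsistency can show
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

answers = [ask(p) for p in prompts]

# Crude agreement score: average pairwise string similarity. Stable answers
# across rephrasings suggest the model isn't simply improvising.
pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
agreement = sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)
print(f"agreement = {agreement:.2f} (low scores warrant a manual fact-check)")
```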

Reducing hallucinations is a hot topic for researchers, and some solutions, such as SelfCheckGPT or Microsoft’s LASER, are already in use. Alongside these, there are also tools like WhyLabs, AWS and TruLens that check for consistency and accuracy.

Hallucinations aren’t a reason to stop using LLMs but simply a factor to be mindful of. Just as we continue to rely on weather forecasts despite their occasional inaccuracies, we recognise the value of the models amidst their imperfections. Ongoing research is expected to reduce the occurrence of hallucinations. Our advice for those tackling niche topics is to consider deploying a large language model in-house or training a GPT (a custom version of ChatGPT) specifically on your topic. As we all become more adept at leveraging models and as the technology itself continues to evolve, the impact of hallucinations is likely to decrease, and the utility of LLMs to increase.


Harry O'Grady

Associate director of creative strategy at Whalar


Generative AI, like most previous tech advancements, faces scepticism and fear reminiscent of the reaction to Photoshop’s initial launch and its potential for photo manipulation. Despite criticisms over its use of copyrighted content and occasional unsettling outputs, it’s proving to be transformative and is becoming indispensable to many, myself included. Of course, what would an opinion piece about AI be without using it? You’re reading work shaped by generative AI right now: not written by it, but with ChatGPT serving as my personal first-pass editor, and for good reason.

It took until I was 18 years old for my dyslexia to be diagnosed. Consequently, I struggled through school at a time when computers weren’t commonplace in the home, let alone in the classroom, and tools like spell-check weren’t available to help me. Now, ChatGPT has become a saving grace for me and other dyslexics I speak to. It helps with summarising overwhelmingly large quantities of information, grammar checking, proofreading, and even distilling hours of desk research. Yet the necessity for a critical eye remains paramount. Just as we question the reliability of the information we find online every day, we must scrutinise AI-generated content. This critical approach ensures that we leverage AI to our benefit without being misled by its outputs.

Generative AI is poised to become an essential part of our lives, enhancing our ability to write, think, and create. As it evolves, it will provide not only basic benefits, such as grammar checking for dyslexics like myself but also act as an extension of our brain. It will learn from the way we think, write, and act, making suggestions without prompts. The future promises a landscape where generative AI becomes so commonplace that future generations will regard it as we do spell-check: always there, always used, and rarely thought about.
