
Are You Ready to Dance the Trust Tango in 2024?

11/12/2023
London, UK
Generative AI, misinformation, and declining trust in traditional media and institutions are creating a confusing environment for consumers and brands alike – ‘How can marketers navigate these shifting sands?’ asks LBB’s Zoe Antonov

Trust is a funny old thing. Elusive and fragile, yet the bedrock of a functioning society. It “comes on foot, but leaves on horseback,” as the Dutch saying goes. 

These days, trust in institutions, from the police and government to media and, yes, brands, looks like it isn’t so much leaving on horseback as flying away on Concorde. There are surely a number of reasons for this, but as AI enables the generation of increasingly realistic images and voices (with video just around the corner), what will it mean for trust as it gets easier to fake real people? With 2024 taking us into a year of several major elections, from the USA to India, the interplay between trust and misinformation is sure to be worth watching.

“Comes on foot…” but disintegrates into the abyss


To start this conversation we need to dig a little deeper into what ‘trust’ means to us, today. It’s a “deeply philosophical concept tied to managing uncertainty,” says Konrad Shek, director of policy research at the Advertising Association. But also “a fickle beast,” as Dane Buchanan, global director of data, analytics and tech at M&C Saatchi Performance puts it.

It involves processing information from our environment to assess whether our interactions with it meet our expectations. As Konrad says, it’s “profoundly human.” 

“When we trust something, we relax our vigilance and expend less cognitive effort,” he continues. “Trust is complex and precious. Earning it and maintaining it are continual challenges.” 

Indeed, trust is one of the core foundations of what it means to be that most social, communicative of creatures – a human being. Our capacity and willingness to trust is, argues psychologist Roderick Kramer, something that we’ve evolved. We wouldn’t be the species we are without it – it’s hard to send a man to the moon if you’re starting from a place of mistrust towards every other specialist involved. Writing in the aftermath of the Bernie Madoff scandal, Kramer points to the body of research showing that while we are largely predisposed to trust, we’re not terribly good at figuring out who to trust at an interpersonal level.

“If it’s human to trust, perhaps it’s just as human to err. Indeed, a lot of research confirms it. Our exquisitely adapted, cue-driven brains may help us forge trust connections in the first place, but they also make us vulnerable to exploitation,” writes Kramer.

Low trust isn’t particularly good for us; it creates stress, and in individuals it’s associated with vulnerability to depression and anxiety.

Both ‘trust’ and ‘truth’, though, get distorted when looked at through the prism of business and capitalism – and the short-term pressure to boost profits means that the temptation to undermine both is always present.

But it’s worth investing that effort. “Truth…it’s good for business,” says chief strategy officer for McCann Central, Ringo Moss. In 1912, the H.K. McCann Co. advertising agency coined a phrase that has become more relevant today than ever before: “Truth well told.”

104 years later, in 2016, the year when, Ringo reminds us, Brexit and Trump were encircled by a spectre of misinformation, ‘post-truth’ became the Oxford Dictionary’s Word of the Year. “This was the year we saw appeals to raw emotion and personal belief shape public opinion, blurring the lines between hard facts and disputable viewpoints. It marked the start of a trend towards the erosion of truth. Since then, we have never looked back.”

That environment of misinformation and fake news is bad for brands. Just this month, the journal Current Opinion in Psychology published a literature review showing that misinformation about brands – fake reviews, for example – can increase consumer scepticism.

But that’s fairly obvious. Marketers should, however, take note that the broader context of “an information ecosystem polluted with misleading content” can have an indirect impact on consumers’ perceptions and decision making too. It fosters confusion and feelings of vulnerability, decreases trust in traditional media outlets, and can shape decision making and our willingness to assess brands and products positively.

"2016 marked the start of a trend towards the erosion of truth. Since then, we have never looked back.”


As the authors Giandomenico Di Domenico and Yu Ding put it, this environment can “prime mistrust mindsets, making consumers more reluctant to process positively the marketing stimuli coming from the broader environment and possibly affecting brands’ ability to establish trust relationships with consumers in the long run. Consumers may experience confusion, doubt, and a general sense of vulnerability to the external world.”

So, truth and trust never really stopped being good for business. Amy Gilmore, head of strategy at Accenture, couldn’t agree more. According to her, trust sits at the intersection of competence and values in action for brands and institutions. “It is how they deliver successfully against their remit, while also being able to authentically demonstrate what they stand for in the real world.” That sells.

Trust in specific brands as well as trust in advertising generally has been linked to purchasing decisions and consumer behaviour. That’s why in the UK, the Advertising Association and Credos have been tracking public trust in the advertising industry – which had dropped to such an extent that they teamed up with the Advertising Standards Authority to create a campaign educating the public on the role of the ASA as a regulator, and empowering audiences to turn to it. As a result, those who were exposed to the ads were 80% more likely to trust the advertising industry and 50% more likely to trust ‘most ads’. 

It’s not all bad news for brands, however. While people may be rejecting the institutions that they have traditionally trusted, from newspapers to governments, Ringo at McCann points out that Edelman’s Trust Barometer suggests that “61% of people are turning towards businesses over institutions for trusted guidance and support.” He adds that brands today have an opportunity to “capitalise” on this emergent culture of distrust – to be the outliers. “In a world where facts continuously blend with opinion, brands that commit to openness and honesty can become beacons of confidence, building ever-scarcer trust with audiences, increasing preference and, ultimately, their share of the market.”

But isn’t ‘capitalising’ on things what started all this? Can people really trust anybody that tries to sell something to them, especially when it’s not a single person but an omnipresent corporation?


Social media – the 21st century’s major paradigm shift


For Konrad, social media created a major paradigm shift in the 21st century, allowing individuals (and companies) to bypass traditional media gatekeepers, like producers and editors. “The nature of social media made it possible to create individual platforms and reach audiences directly,” he reflects. This proved to be a double-edged sword at best – while brands and businesses were able to create personalities to relate to their consumers (as the word ‘relatable’ jumped in usage by 200% between the early noughties and 2018), it also enabled those intent on spreading misinformation to “reach bigger audiences with velocity.”

According to Sarah Jardine, social strategy lead at Wunderman Thompson UK, social media has indeed suffered its fair share of bad press for being a place to “show off”. She explains: “It’s an unrealistic extension of reality, whether it be from the brands you follow, or even your friends. A place where the ultimate goal is the personal brand – to be known and stand out, usually at the expense of trust.”

She believes, though, that this mentality has shifted greatly, as more brands have tapped into “their own humanity” through social, but also have experimented with the types of content they create and the influencers they collaborate with. TikTok is the most recent example of this, with brands like Ryanair and Duolingo going viral with their presence on the platform.

Strategist and author Kevin Chesters argues that the rise of social media and the proliferation of technology over the past decade or so have actually “helped elections, politics and politicians to be more scrutinised than ever before – same goes for brands and their actions/messaging.” It’s true, the gen-z overlords won’t let anything slide, and it’s probably for the better.

When radioactive rain poured on parts of Eastern Europe after the Chernobyl disaster, people were told by their governments to stay inside and take an iodine tablet, with little other information. By contrast, social media became its own form of watchdog when Russia waged another wave of war on Ukraine. In 2020, when George Floyd was murdered by Derek Chauvin in Minneapolis, the entire world rallied through social media channels. And today, that momentum continues as the Israeli-Palestinian war rages on.

“Citizen journalism has shone a light into areas that historically might have been ignored by main media outlets. Social media platforms and the ‘dark socials’ have given people brilliant ways to hold brands to account, from Clean Creatives to Led by Donkeys. I don’t think politicians are dodgier in 2023 than they were in 1983, but I think they’re a lot more wary of being found out,” explains Kevin.

But while technology has become a way to hold governments accountable, hasn’t it also become a weapon in power’s hands? After all, cyber warfare is what transformed ‘war’ into ‘hybrid war’.

"I don’t think politicians are dodgier in 2023 than they were in 1983, but I think they’re a lot more wary of being found out.” 


Consumers have far more avenues to share their feedback publicly – and according to a recent study in the Journal of Business Research, user-generated content that features negative emotions (contempt, sadness, fear) has a far more potent impact on our trust in a given brand than content featuring positive emotions. So it really is much easier to lose trust than to gain it – which means that, thanks to the democratisation of social media, it is in brands’ interests to maintain the highest standards in their products and organisational behaviour. However, that doesn’t seem to stop brands from engaging in risky behaviours like green-, pink-, white- and rainbow-washing.

But let’s get back to personalising brands online.

“The focus on UGC, influencer and creator-led content has developed a human-to-human approach to the connections that brands build on social, making them more relatable and trustworthy,” says Sarah. That was surely true at the beginning of the YouTuber era when ‘iJustine’ made an entire career out of trying out the newest Apple products (and her viewers stretched well beyond just tech bros), but have influencer-led campaigns also suffered from the era of mistrust?

Not only are influencers increasingly being questioned about the authenticity of their relationships with brands, but they’re also being burned at the stake if found in a questionable predicament. So tying your brand to a young gen-z content creator online, especially if you’re not sure what controversy will jump out of the bag, is a risky move.

Mistrust in influencers online has also been fuelled by the age of new tech. Miquela Sousa, also known as Lil Miquela, became the first CGI character to sell us stuff. This wasn’t necessarily a shock – by the time Miquela came to prominence, Gorillaz had already taken the world by storm as an animated band. A crucial difference, though, was that Miquela’s whole point was to work solely with brands and to preserve the elusive nature of her existence online – ‘is she real?’, ‘how is she made?’, ‘why won’t she talk about it?’ were all questions that powered her mystique and value on sales-driven platforms.

Today, CGI creators are a dime a dozen. Inevitably, AI creators are now galloping onto the scene too. Meta just created an AI chatbot on Instagram called Billie that is the spitting image of Kendall Jenner. Billie’s Instagram account currently has a following of more than 240,000 people, adorned by a telling tag in the description: “AI managed by Meta.” It’s not just her, either – Snoop Dogg and Paris Hilton, among others, have lent their likenesses to similar AI characters. They’re not selling anything yet, but they certainly bring us to the next major conundrum in our conversation about trust.


In comes AI


Kevin Chesters says: “There’s a tendency whenever any kind of new technology comes along for people to get a little bit hyperbolic about its potentially apocalyptic impact on society, in all ways.” To him, this has remained true from the printing press to the telegraph, television to rock and roll, video nasties and social media, the mobile phone to the metaverse; and it occurs because “humans just aren’t good with change – we’re dreadful when it comes to new things. And now it’s AI’s turn to drive the Skynet-style dystopian nightmares.”

Others are more concerned about the potential for AI to erode trust in brands. For example, Sarah Jardine worries that AI could halt the progress that brands have made when it comes to authenticity on social media.

Critically, the blessing and curse of generative AI is that it has democratised the creation of increasingly lifelike images or fake audio recordings, with video right around the corner. For some, this amounts to unleashing our creativity – but it also makes it even easier for bad actors to spread misinformation. “Mass personalisation at scale” might be the promised land for marketers, but it doesn’t take much to imagine what propagandists might be able to stir up with hyper-targeted ‘fake news’.

“Now it’s AI’s turn to drive the Skynet-style dystopian nightmares.”


“The output of these tools is becoming increasingly realistic and we’ve not yet hit peak capability – the viral image of the Pope wearing a puffer jacket was a classic example, as was the AI-generated Drake and The Weeknd collaboration song. Not only must we contend with these challenges, but we also need to grapple with the risk that generative AI can inadvertently create persuasive yet potentially false content,” explains the Advertising Association’s Konrad.

Amy Gilmore says that the ‘always-on’ nature of media in today’s world has shown us that there is no greater route to showcasing the good face of businesses than through acts of transparency, integrity and inclusivity. This, in turn, shows who brands are, what they stand for and why we should trust them. But, with the growing adoption of AI across so many interfaces, the notion of transparency and integrity “needs to be prioritised more than ever.”

She continues: “By its very definition, AI takes existing information and rewrites it to create something new, whether the source material is accurate or not.” 

Still, in its most powerful state, she explains, AI offers an enormous opportunity for insight and creativity. The latest Accenture Life Trends research on ‘The Great Interface Shift’ shows that AI has finally hit mass awareness, with 42% of consumers saying they feel comfortable using conversational AI like ChatGPT for product recommendations, 44% for completing tasks at work, and 33% for wellness and healthcare advice.

Dane Buchanan also points out that the highly personalised engagement AI can provide brings a myriad of benefits, and for the most part, she thinks that customers “want and appreciate this level of engagement.”

Executive creative director at Media.Monks Jon Biggs thinks it’s totally understandable that we’re scared of technology – we always have been. But on a bigger scale, the opportunity AI creates for misinformation is absolutely terrifying.

“Imagine the Cambridge Analytica scandal powered up by a million,” he says. “But what about the day-to-day? What about the simple things that we take for granted as people working with other people: trust from a client that we can responsibly look after their brand, trust that we spend their marketing budget effectively, and trust that the work we do together shows a fair representation of people and society?”

For Ringo, the logical outcome of living in a post-truth world is a post-trust world: “Uncertainty is now second only to personal finances as the highest source of anxiety.”


Regaining trust is a two-way street


Across the range of conversations we’ve had, two ways to regain (or gain) trust have emerged: brands should stay accountable for what they put out and remain as transparent as possible, for their own good; and consumers should strive towards a higher level of media literacy.

Here’s an example of the first – recently, the CGI clip of a giant Maybelline mascara wand brushing lashes fitted to a London Tube train went viral online. I’ll be the first one to admit I thought it was real – as did many people. Sarah speaks more about the OOH CGI trend: “Jacquemus’ ad with the handbags hurtling down Parisian streets, or Wilkinson Sword’s razor fountain. On face value, these look like impressive real-world activations, but are often followed by momentary disappointment when you realise it’s not real. It’s still important to invest in those genuine connections with your followers outside of AI.

“Brands should not shy away from disclosing what is ‘fake’ and what’s not. You don’t want people thinking ‘is this real? Can I trust this?’,” says Sarah.

Amy Gilmore agrees and thinks that ‘trust building’ principles should be king in a post-truth era. “We need to be totally transparent with our consumers about the role AI plays in our businesses and communications. We also need to be clear-eyed about the motivation to do so! If objectively you can say that it is being used in service of the customer or public, instinctively you will make trust-building choices.”

Parallel to this, governments are working hard to draw up rules and regulations to cope with the new technology, which, to Dane, shapes the way people perceive AI systems. Ultimately, though, it will be companies and brands that implement these regulations, which can “either erode or strengthen the trust the general public has in AI.”

To avoid erosion, Amy, too, believes that transparency and accountability are necessary: clarity on how and why brands are using AI, disclosure of the types of data being used and how they are processed and, importantly, keeping channels for criticism open by establishing responsible feedback mechanisms where customers can report issues or ask questions.

From a media strategy perspective, the trustworthiness of the very platforms and channels that brands choose to advertise on is also something that marketers will have to navigate. When it comes to brand safety, for example, with AI at everyone’s fingertips, there’s now an added risk that brands could appear next to dangerous misinformation and fake images. 

Moreover, even legacy media brands face the challenge of sorting AI fakes from real evidence in their reporting. All of this heightens the dilemmas facing brands around the kind of content they want to appear next to – and it may influence how much people trust the overall media environment. On the other hand, some AI experts argue that rather than creating a Wild West of misinformation, AI tools can be used to bolster trust, improve fact checking and assess the veracity of sources. Whether AI will erode or boost the trustworthiness of media – or indeed whether the benefits and problems will cancel each other out – is unclear for now.

One could argue that the onus doesn’t just fall on brands, though – and that people should take responsibility for keeping up with tech literacy. Gen z came of age in parallel with social media platforms gaining prominence, so in a way, those platforms are tailored to their understanding. Later, as data about gen z’s rapidly shortening attention span (of only eight seconds) emerged, they became the main target of apps like Vine, YouTube and, of course, TikTok. Will we be willing or able to slow down our hyperfast media consumption to interrogate what we’re seeing?

Konrad raises another important point – gen z saw the proliferation of technology in real time, but in a world where AI becomes the norm, the generations after them might start to lack the critical thinking needed to tell not only fake from real, but an ad from a non-ad altogether. He explains that for the Advertising Association, media literacy is a key intervention that aims to empower citizens’ critical thinking skills. That’s why their Media Smart work delivers media literacy skills to young people growing up in a world of commercial messaging that touches most areas of their lives.

“It’s more important than ever that people understand exactly what is being suggested, promised and sold to them. The same case could be made for AI. Understanding how it works is half the challenge and learning to interpret media critically is the other.”


Dane echoes this: “Learning about AI, its applications, and potential risks is key. Understanding its basics can help consumers make informed decisions.”

McCann’s Ringo says that today consumers are savvier – they’re increasingly interested in sourcing, manufacturing, and the environmental commitments of the brands they buy from. For responsible brands, he says, this can be a compelling area for telling the truth “and telling it well.” 

Some brands, though, sell across an incredible range of demographics – are all of those consumers savvy enough to catch them out, and is that kind of education widespread enough for experts to expect it on such a wide scale? How fair is it to put the onus on ordinary people to spot the real from the fake?

For now, there may be obvious tell-tale signs like mutated hands and odd lighting, but as AI platforms develop and learn, those will get ever more subtle. Moreover, given the volume of news, ads and social media posts that we are exposed to at any given time, it’s perhaps an unrealistic ask that people should have the time and mental bandwidth to scrutinise every asset they come across. This year we’ve seen a cat-and-mouse game between sophisticated generative AIs and AI-detecting AIs, which are less than perfect.

Some publishers and tech platforms have come together to form the Content Authenticity Initiative, and are trying to formalise a set of standardised and easily traceable certifications to be attached to digital work, to help audiences understand the provenance of digital images. 


Level heads and empathy


“Everyone always thinks they’re living in the most unprecedented and challenging times – from Seneca to Pepys. We’re not,” says Kevin. Most advancements in communications technology come with the yin-yang of the increased sharing of ideas and the ability to spread misinformation and mistrust. Just take the printing press. It helped end the Dark Ages and kick start the Renaissance – but it also made propaganda much easier. Ultimately, though, the sky clearly did not fall down and we made it work.

Whether the AI revolution proves to be just more of the same, or a qualitatively different phenomenon remains to be seen. However, trust is often built on emotion and cognitive shortcuts rather than carefully considered facts.  

“Everyone always thinks they’re living in the most unprecedented and challenging times – from Seneca to Pepys. We’re not.”


That means that if people feel more vulnerable, or confused, or unsure of what to believe, if they feel like they’re in an environment of misinformation – whatever the facts are – it’s still something that brands will need to explore and consider. If they’re going to create their own AI content, how do they intend to protect trust and transparency? How bothered are people, really, about AI-generated fake news? How will levels of trust in specific platforms and channels influence where brands choose to advertise? How might a wider context of AI-generated misinformation (or simply the perception that we’re surrounded by AI-generated misinformation) change consumer behaviours or make it harder to build trust? Are there ways that brands can step up and take advantage of declining levels of trust in traditional institutions as they come under attack? These are questions worth asking with a level head and empathy.

In 2024, with multiple major elections taking place around the world, it doesn’t seem a huge leap of imagination to predict that we might well see an uptick of polarising, culture war-y, AI-generated misinformation, which in turn whips up uncertainty and mistrust. It won’t directly impact every brand and, in the long run, it may not prove to be the end of the world, but it is something that brands and agencies are going to have to navigate sooner or later.
