Holding Up the Mirror to AI Beauty: Attractive Investment or Ethical Eyesore?

04/02/2025
London, UK

Hogarth’s head of AI, Priti Mhatre; TBWA\New Zealand’s chief executive, Catherine Harris; Reckitt’s diversity and inclusion director, Efrain Ayala; and LBB’s Zara Naseer discuss AI-generated models, their ethical use, and how to avoid a PR crisis

Image by Алекс Бон from Pixabay

Fashion and beauty’s rocky relationship with altered images of models is enough to give anyone whiplash. A few decades ago, even if you were the paragon of ’90s beauty (white, slim, and preferably blonde), your images would be retouched to within an inch of their lives – a practice that advertising was pivotal in normalising. Attractiveness was aspirational, and ‘perfect as you are’ was a pipe dream.

After a collective awakening to the fact that, one, this was making people miserable, and two, things didn’t have to be this way, unfiltered and inclusive representations of beauty began to proliferate. Authenticity was in, with Dove’s iconic ‘Real Beauty’ campaign as one of its main ambassadors.

But with a new tech revolution to grapple with, the sector finds itself once again at a crossroads. At the same time as Dove is pledging never to use AI to create or alter images of women, brands like Levi’s and Mango are piloting fully artificial fashion models. Given that fashion and beauty marketing and advertising directly impact the way people feel about themselves – empowered or inadequate – the sector must now think hard about which route to take, and about the impact that choice will have on consumers’ perceptions not just of themselves, but of brands too.

So how have the first AI model brand pioneers fared? Spanish fashion retailer, Mango, has twice rolled out AI-generated campaigns to promote its teen lines, and both times, its models have been uniformly hyper-perfect: all are fair-skinned, full-lipped, and fat-free. Diversity and inclusivity don’t appear to have been on the brand’s list of priorities.

Above: Image from the first round of Mango's AI models

“AI models frequently reinforce stereotypes because they are trained on biased datasets,” comments Priti Mhatre, head of AI at Hogarth. “So, as brands adopt AI for marketing, they should look at this as an opportunity to engage with a diverse set of audiences instead of purely replacing their current human models with lookalike AI models. The Mango campaign is an example where the brand could have intentionally opted for a broader representation in their AI-generated human models without incurring additional costs.”

The brand’s specific targeting of teens has also been the subject of debate. According to a report from Getty Images, ‘Building Trust in the Age of AI’, younger consumers are more open to gen AI images being used in brand communications; but teens going through puberty – especially girls – are also some of the most vulnerable to body confidence issues as a direct result of online media. “Young people are particularly impressionable, growing up in a digital world where filters and AI are pervasive,” notes Catherine Harris, chief executive at TBWA\New Zealand, whose Bodyright.me initiative to end unethical retouching and misrepresentation was recognised three times in Fast Company’s World Changing Ideas Awards. “Brands, media platforms, advertising agencies, talent managers, and influencers all have a collective responsibility to address this.”

A read of Mango’s own press release around the launch makes it clear why the brand has jumped head-first into this new terrain – its “commitment to innovation.” But when uncharted terrain is also unregulated, critical mistakes are bound to be made. Priti urges advertisers to establish fundamental brand principles and safety standards before charging ahead, but Reckitt’s diversity and inclusion director, Efrain Ayala, worries that not enough are taking the time to do so.

“I don’t think as an industry we’ve figured out what’s appropriate, and there’s also very little governance and legislation to help with that,” Efrain reflects. “So when I see brands running head-first into the creation of synthetic people for their communications to consumers, I worry whether those pieces are in place, whether there’s rigour around creating a standard for themselves. That’s the balance of innovation and governance.”

The lack of regulation is why Reckitt has taken it upon itself to establish internal standards and ethics. Efrain shares, “We had to really ask ourselves, ‘What are our principles in this space? How do we feel about synthetic people? What is our take on communicating the use of AI in communications?’ Some platforms and some brands are starting to take a stance because, at the end of the day, as brands and brand builders, we're talking about trust. How are we building it?”

Above: AI models advertising Mango’s teen sport line

Trust is like gold dust, driving loyalty, resilience, and growth – so brands should be wary of how AI can jeopardise the way consumers perceive them. A 2024 YouGov study across 17 markets found roughly half of consumers were “not very comfortable” or “not comfortable at all” with the use of AI to create brand ambassadors and to generate or edit advertising images. Part of this relates to what the everyday consumer will deem false advertising, so brands must tread carefully. “For instance, companies like L’Oréal have implemented explicit policies prohibiting the use of AI to depict product efficacy or fit,” Priti recalls.

“Brands need clear, upfront disclosure about AI-generated content, treating consumers with intellectual respect by explicitly stating when models, images, or experiences are artificially created to build trust,” Priti continues. Last year’s ‘Building Trust in the Age of AI’ report from Getty Images corroborates the point, finding that almost 90% of consumers globally want to know whether an image has been created using AI.

It’s a good first step, and one that Mango took during its second iteration of AI images: a small disclaimer can be found at the bottom of the images, stating “These images have been generated by AI”. But is transparency enough? Several analyses indicate that it’s not, finding disclaimers to be ineffective at mitigating the harmful effects of modified images. More recently, Dove reported that “one in three women feel pressure to alter their appearance because of what they see online, even when they know it’s fake or AI generated.” To go even further, in response to Norway’s 2022 policy requiring paid social posts to flag image modifications, TBWA\New Zealand’s Catherine Harris and Shane Bradnick hypothesised at the time: “there’s every reason to fear that a warning label will essentially act as a kind of permission slip for advertisers on social media – when the viewer knows an image has been edited, what incentive is there to even pretend it’s realistic?”

There is another option – we can stop generating homogeneous models that promote damaging ideals. “AI models are trained on data which can be biased,” Priti warns, but through better programming, “brands can break away from traditional beauty norms and create content that embraces a broader spectrum of ethnicities, body types, ages, and abilities.” Among her principles for responsible AI use, Priti advocates for bias mitigation techniques like adversarial debiasing “where two models – one as a classifier to predict the task and the other as an adversary to exploit a bias – can help program the bias out of the AI-generated content,” as well as content reviews to detect the biases that slip through.
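
For readers curious what that two-model setup looks like in practice, below is a minimal, illustrative sketch of one common gradient-reversal variant of adversarial debiasing, written in PyTorch. It is not Hogarth’s implementation: the DebiasedClassifier class, the network sizes, and the placeholder task and protected-attribute labels are all hypothetical stand-ins for real training data.

```python
# Illustrative sketch of adversarial debiasing (gradient-reversal variant).
# Assumes feature vectors x, task labels y, and a protected attribute z
# (e.g. a skin-tone or body-type grouping) encoded as class indices.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient on the backward pass,
    pushing the encoder to remove information the adversary can exploit."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DebiasedClassifier(nn.Module):
    def __init__(self, n_features, n_classes, n_protected, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)     # predicts the task
        self.adversary = nn.Linear(hidden, n_protected)   # tries to recover the protected attribute

    def forward(self, x, lam=1.0):
        h = self.encoder(x)
        return self.task_head(h), self.adversary(grad_reverse(h, lam))

# One hypothetical training step: the task loss is minimised as usual, while the
# reversed gradient from the adversary penalises representations that leak bias.
model = DebiasedClassifier(n_features=32, n_classes=2, n_protected=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(128, 32)          # placeholder batch of features
y = torch.randint(0, 2, (128,))   # placeholder task labels
z = torch.randint(0, 4, (128,))   # placeholder protected-attribute labels

task_logits, adv_logits = model(x, lam=1.0)
loss = ce(task_logits, y) + ce(adv_logits, z)  # adversary trains normally; the reversal handles the encoder
opt.zero_grad()
loss.backward()
opt.step()
```

The reversal strength (lam) is typically tuned so that task accuracy holds up while the adversary’s ability to predict the protected attribute falls towards chance – the signal that the learned representation is no longer leaking the bias.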


Above: Image from the Levi's AI model press release

Advertisers who generate a more diverse vision of beauty on screen mustn’t forget to mirror those DEI values behind the scenes. When Levi’s piloted custom AI models with plans to “supplement human models, increasing the number and diversity of our models for our products in a sustainable way,” the brand faced backlash for appearing to chase a synthetic facade of representation over actual equity. 

It’s a crucial lesson: consumers will not respond well to companies appearing to profit off a diverse aesthetic without actually improving the livelihoods of marginalised communities – the ‘university brochure’ or ‘window dressing’ approach, as Efrain puts it. Levi’s quickly amended its press release with an editor’s note to clarify that it did not see the pilot as a “substitute for the real action that must be taken to deliver on [its] diversity, equity and inclusion goals and it should not have been portrayed as such.” The brand also emphasised its “commitment to support multicultural creatives behind and in front of the camera,” and that it is not scaling back its plans for “live photo shoots, the use of live models, or [its] commitment to working with diverse models.”

Take care of meaningful representation behind the scenes, and the visible representation will take care of itself. It’s a diversity domino effect that applies even beyond the creative teams. Drawing from her experience at Hogarth, Priti shares, “We have observed that having truly diverse talent across AI-practitioners, developers and data scientists naturally neutralises the biases stemming from model training, algorithms and user prompting.” 

Improved visible representation then has its own happy knock-on effect – a more personalised and compelling user experience where customers can see products on bodies like their own. With AI digital twins working hand-in-hand with diverse human models, that possibility is supercharged, and Levi’s seems keen to embrace that. The brand has stated that, while industry standards typically limit the number of models photographed per product to one or two, soon, AI and the creation of digital twins will enable it to publish more product images on a broader range of bodies more quickly. Virtual try-ons and personalised recommendations are two opportunities that may now be within reach, according to Priti, with AI helping to overcome traditional resource and budget constraints.

Profiting off digital twins in such a way must be done ethically, and advertisers must both fairly compensate the human models involved and ensure their likenesses are protected. That’s how you keep talent excited to work with you. Efrain uses the subject of Hollywood’s SAG-AFTRA strikes as a cautionary tale: “Day actors would be hired and paid for a one-day shoot, but during it, they’d be captured with cameras and later inserted via AI into subsequent scenes. That would mean that a day-rate actor who would have normally booked seven days for that shoot is now maybe only booking half a day, and getting paid less while appearing in the same amount of content in the final product. What is the advertising industry’s remuneration programme when using AI to supplement diversity and inclusion in our communications?” It’s an important final point.

“To me, that’s how you deliver the equity along with the diversity and inclusion when using AI.”


There’s definitely room for ethical gen-AI innovation in the fashion and beauty sphere. If advertisers uphold the right principles – diverse datasets, representation in front of and behind the camera, complementing (not replacing) humans, and fair remuneration – they can foster positive feelings towards brands among both consumers and collaborators. Priti puts it best: “The authenticity of the intention behind utilising AI will significantly influence how the content is received.”
