Thought Leaders, in association with Partners in Crime

How Creative Industries Can Train AI to Tackle Mental Health Stereotypes

29/08/2025
Ogilvy Consulting’s head of consumer equality, Shelina Janmohamed, on tackling the existing stereotypes that have been fed into AI, the importance of balancing AI prompts with an understanding of real lived experience, and making sure the future creative work that AI learns from authentically reflects mental health experiences

AI has become part of our inner circle. We use it to navigate messy break‑ups, coach us through work dilemmas and manage our grief. TikTokers are even posting: 'I told ChatGPT about you.'

The real-world consequences can be dangerous, even fatal: triggering mental health breakdowns, fostering parasocial relationships, and even prompting people to bring professional misconduct cases against real people on the basis of AI advice. It’s so worrying that Microsoft’s head of artificial intelligence, Mustafa Suleyman, has given the phenomenon a name, ‘AI psychosis’, and says it’s keeping him up at night.

It’s a creative problem as well as a tech one. Which means all of us in the creative economy who are involved in creating culture are to some extent accountable and need to step up.

AI learns about mental health from what we’ve banked in our existing cultural archive: films, stock photography, news stories, books, art, music, images, ad campaigns, even memes. Together they represent the range of mental health conditions, from anxiety and depression to psychosis and schizophrenia. If our cultural archive is saturated with reductive and stigmatising portrayals (which we know historically it is), AI will only amplify them. And that means the faces, the tones and the clichés of times past are now baked into the tools people are turning to for guidance.

This generates a feedback loop: creative industries shape the archive, AI learns from it, AI outputs influence new creative work, and stereotypes persist. The heavy weight of past stigma and stereotypes is given new life, reviving the toxicity that we’re working so hard to erase and magnifying its reach.

News stories are currently flooded with problems emerging from AI-human conversations, everything from people falling in love with their AI to it triggering OCD.

The recently released ChatGPT-5 was built with input from psychiatrists, physicians and human-computer-interaction experts to improve its impact on mental health, reduce triggering and avoid giving unqualified advice.

But this all focuses on tackling the dangers that lie in the interface between AI and the human user.

The challenge that still needs addressing is what goes into the black box: the cultural archive that shapes the outputs and the narrative, the ‘stuff’ that feeds the algorithm.

We are the ones creating that ‘stuff’. And not only do we need to think harder about what ‘stuff’ we create, we should also be thinking creatively about how our output can be used to game the system and dilute the influence of past toxicity in the cultural bank.

Because in a creative economy increasingly leaning on AI for inputs, this is what happens: we use AI tools for research, to uncover insights and to test hypotheses, but if the AI is digging into biased data and then testing those hypotheses against the same biases, we fall short as an industry.

What is needed is lived experience experts and expert human oversight: to refine the questions put to the AI, to review, challenge and sometimes completely override its outputs using real-world knowledge of bias and mental health narratives. AI can then supplement this work by being trained explicitly on what is biased and toxic.

Knowing the biases

But here’s where we can show we have really understood the assignment: by identifying where AI throws up biases. This way we can create cultural output that doesn’t just dilute the old stuff but actively counters it, proactively targeting the toxic stereotypes and narratives. A kind of kryptonite for the toxicity currently polluting the existing cultural archive.

The added benefit is that every condition can be addressed with its own counterpoint. For example, the ‘stuff’ needed to inform interactions with those suffering from OCD, which can be made worse by AI’s constant feedback about risk, will be very different from what’s needed to counter prevailing unhealthy narratives about other conditions.

That said, as we move forward on this journey of creating better cultural output around mental health, we need to be aware of our own biases in what we are creating. The lack of broad societal representation across the creative economy is well documented, whether that’s age, gender, socioeconomics, ethnicity, religion, regionality or other factors, and it’s these same factors that negatively impact mental health in ways we may not realise. We also risk excluding those who live with these conditions and could help inform us with their realities.

Without care or due diligence, we can quickly amplify existing problems. Nobody wants to accelerate problems for those already facing inequality and disadvantage.

Thankfully, the problems of AI bias when it comes to racism or sexism are at least on our radar. We are far from solving them, but at least awareness is growing.

Now we need to raise the alarm on tackling mental health and AI. Creative outputs are key. For those of us in the business of making and shaping culture, this is a wake‑up call: every campaign, every storyline, every image or script we put into the world today is shaping the mental health narratives AI will repeat back to millions tomorrow.
