
Humanising AI - The Race to Replace Ourselves

22/03/2019
Advertising Agency
London, UK
INFLUENCERS: This should be a world of augmented intelligence rather than artificial intelligence, writes Cheil UK's David Coombs

The humanisation of technology was a strong theme at this year’s SXSW. Sessions with names like ‘Empathetic technology and the end of the poker face’, ‘Will machines be able to feel?’ and ‘Expeditions into the uncanny valley’ show it was one of the big topics of the week.

So, can – and more importantly should – AI be more like us? 

There’s no doubt it can, usually in one of two ways. The first is simulation, where we teach AI, through learned responses, to emulate us. This started in 1966 with a chatbot called Eliza, which could only rephrase what it was asked, and leads all the way to Sophia, the humanoid robot that is now a real-life citizen of Saudi Arabia.
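Eliza's trick is worth seeing to appreciate how shallow simulation can be. The toy sketch below (not Weizenbaum's original program, and with made-up rules) does what Eliza did: it matches a pattern and reflects the words back, with no understanding at all.

```python
import re

# A minimal Eliza-style rephraser: it does not understand the input,
# it only pattern-matches and reflects the user's own words back.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"i need (.*)", "Why do you need {0}?"),
]

def respond(text):
    text = text.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # fallback when nothing matches

print(respond("I feel anxious about AI"))  # How long have you felt anxious about ai?
print(respond("Hello"))                    # Tell me more.
```

A handful of such rules was enough to convince some 1960s users they were talking to a therapist, which is exactly the point: simulation can be persuasive without any intelligence behind it.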

The second approach to making AI more like us is to teach it semantics and natural language processing, so that it can understand more about the intent behind the words. Amazon’s Alexa is the best-known example of this, but there are many more: IBM’s ‘Project Debater’ is capable of taking on humans in arguing complex topics, and ToMnets – that's 'theory of mind networks' – are currently pushing AI’s understanding of intent to the next level.

This advanced AI belongs in what has become known as the ‘uncanny valley’: research shows that chatbots which look and behave too much like a human unsettle participants, who want the experience to finish as quickly as possible. With text-based chatbots, by contrast, people are much more comfortable and happier to engage.

So, on to the ‘should’. Is it right to make AI as human as possible, or should we focus instead on using these technological advances as a force for good? Take voice, for example, where the technology can help people enormously, especially the elderly.

AlterEgo allows users to communicate silently with a device by vocalising internally. The subtle internal movements are understood by the machines, which then send an aural output through bone conduction, enabling the user to hear the response without obstructing any of their physical senses.

Often it is the small things that have the biggest impact. Take a person with a muscle-wasting disease: by installing a voice-activated bed-movement system, you not only give them the ability to physically move and turn themselves, but also give them back a feeling of control. It also helps the carer, who no longer has to get up every hour throughout the night to do the bed turning.

As the AI that powers these systems gets increasingly sophisticated, can we simply replace humans in the process? If AI can create original artwork that sells at Christie’s for $432,500, are we becoming the architects of our own redundancy?

That artwork was created using a generative adversarial network (GAN), a technique that makes it possible for AI to originate as well as to emulate. The premise of a GAN is pretty simple. Take two neural networks and ask one (the generative network) to create a new version of something. You then ask the second (the adversarial network) to try to distinguish the AI versions from the human ones. Each time the adversarial network rejects something, the generative network learns and improves; each time the generative network improves, so does the adversarial one. This goes on until the AI version is undetectable. Told you it was simple. It is GANs that are currently impressing and scaring people with lifelike generated faces online.
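The tug-of-war described above can be sketched in a few dozen lines. This is a deliberately tiny illustration, not the Christie’s system: the "generator" is just an affine map trying to fake samples from a target distribution, the "discriminator" is logistic regression, and all the numbers (target mean 4.0, learning rate, step counts) are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: G(z) = a*z + b, turning noise z into a candidate sample.
a, b = 1.0, 0.0
# Discriminator (the adversarial network): D(x) = sigmoid(w*x + c),
# its output is the probability that x is real rather than generated.
w, c = 0.1, 0.0

lr, n = 0.01, 64
for step in range(5000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    z = rng.normal(0.0, 1.0, n)
    fake, real = a * z + b, real_batch(n)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Hand-derived gradients of the binary cross-entropy loss.
    gw = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    gc = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Generator step: adjust (a, b) so the discriminator is fooled.
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # Gradient of -log D(fake) with respect to a and b, via the chain rule.
    ga = np.mean((d_fake - 1) * w * z)
    gb = np.mean((d_fake - 1) * w)
    a -= lr * ga
    b -= lr * gb

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(samples.mean())  # should have drifted from 0 toward the real mean of 4.0
```

Each rejection by the discriminator nudges the generator’s parameters, and each generator improvement forces the discriminator to sharpen its test: the same adversarial loop, just with millions of parameters, produces the lifelike faces and the auctioned artwork.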

But in real-world scenarios it’s not that simple to turn on AI and turn off people. Take DoorDash, the US-based food delivery company, which uses AI in many areas of the business, from calculating delivery times to suggesting new restaurants that should be added to the platform. When trying to identify the best image for each restaurant to use in order to attract customers, the AI recommended that every restaurant use a picture of a bottle of Coca-Cola. It may well have been the most commonly ordered item, but the AI wasn’t capable of understanding that Coca-Cola is only a tiny factor in the complex decision-making that goes into choosing a restaurant.

Entertainment AI is a company that automates content creation and storytelling, but its founder, Anne Greenberg, also involves a human in the process – otherwise the machines are simply creating things that only machines are interested in, and who wants that?

What does all this tell us about the future of AI? Well, for me it is very simple. We shouldn’t be trying to emulate and replace humans with technology; we should be using it to enhance them. Not in a ‘Terminator’ style, but by helping us to better ourselves and to lead more fulfilling, rewarding lives.

Perhaps we need to redefine AI. This should be a world of augmented intelligence rather than artificial intelligence.



David Coombs is head of strategic services at Cheil UK
