Emotive Computing – What Does It Mean for the Future of Advertising?

J. Walter Thompson London, 3 weeks, 3 days ago

INFLUENCER: Elizabeth Cherian, Director of the Innovation Group at JWT Intelligence London, on a hot topic that dominated Web Summit 2016

This year’s Web Summit welcomed a host of colourful characters to the MEO Arena in Lisbon, which housed 21 standalone tech conferences from November 7th to 10th. While many of the 50,000-plus international attendees might cite highlights such as the opening-night presentation on creativity and commerce by actor and HitRECord founder Joseph Gordon-Levitt, few demonstrations drew a horde of journalists and rubbernecking delegates quite like Hanson Robotics Chief Scientist Ben Goertzel in conversation with his humanoid robot, Sophia. The cross between human and machine, and the field of emotive computing that grows out of it, were hot topics at what’s fast becoming the SXSW of Europe.

Emotive computing encompasses the systems and devices that can recognise, process and simulate elements of human behaviour. When Siri understands voice commands, responds effectively and shows a glimmer of personality by cracking simple jokes at silly questions, she’s demonstrating emotive computing. She relies on a host of technologies, such as natural language processing and machine learning, all of which fall under the umbrella term Artificial Intelligence, or “AI”.

Some applications of AI, such as IBM Watson, process enormous sets of data in search of anomalies that help experts like oncologists pinpoint the cause of cancer. This strand of AI displays very few human-like qualities. But digital assistants like Siri, as well as humanoid robots like Sophia, do, and it is this area of AI that had many people at Web Summit talking.

“Robots Should Look and Act like People” was the title of Goertzel’s headline talk, which put Sophia centre stage. He’s part of a growing movement of roboticists and software developers who believe the line between man and machine is quickly blurring. People today implant RFID chips in their hands to avoid carrying office passes, as is the case for employees of the Sweden-based company Epicenter.

It’s also possible for anyone to seek out tailored medical solutions via ingestible sensors produced by Proteus Digital Health. It’s easy to imagine a future where less of the human body is restricted to the confines of traditional biology. While Goertzel acknowledges there is a difference between a human brain and machine intelligence, he believes there is no reason we should not treat them with the same degree of dignity. After all, “we are all biological robots,” he enthused. His company strives to produce low-cost, emotionally expressive humanoid robots like Sophia that will be accessible to all.

But critics cry foul. Andra Keay, Managing Director of Silicon Valley Robotics, has called for ethical design guidelines. Indeed, Rodolphe Gelin of France-based SoftBank Robotics asserted that developers face a serious responsibility, citing the Microsoft debacle that saw its chatbot Tay spewing racial slurs over social media earlier this year. Their side of the argument evokes visions of a future akin to the American HBO series Westworld, which calls into question the moral nature of a society that integrates humanoid robots.

Irrespective of our treatment of social robots or digital assistants, they possess an incredible power to acutely sense the world around them, including how we think and behave. Rana el Kaliouby of American facial-recognition startup Affectiva cited the example of Brain Power, a US research project serving autistic children who struggle to interpret facial expressions naturally. The research team provides kids with Google Glass that overlays emotional cues on people’s faces to guide better social interaction. El Kaliouby showed a film of a tearful mother who told her participating son she felt as though he had never truly seen her before he used the device in this way.

While AI promises incredibly personal, insightful solutions, it only works when it deciphers the information we present it, responds in a way we can understand, and delivers an experience seamless and intuitive enough to promote further engagement. Brands and businesses are asking how these devices can better treat their users. How can the technology be more empathetic, so that it behaves in a human-like way and eventually shows compassion, especially should artificial intelligence ever surpass that of humans and reach what Ray Kurzweil calls the “singularity”?

Either way, emotive computing is exposing some of the shortcomings in humans themselves, highlighting the success of AI doctors like Babylon that are programmed to communicate diagnoses more empathetically, or diet coaches like Lark that hold no biases and do not judge failed attempts to lose weight. In many cases, services built on AI produce better results, even in contexts where one might assume the natural human touch would prevail. As we teach technology to be kinder and more ethical, we remind ourselves of the same lessons we often forget.