The power of artificial intelligence has dominated tech hype conversations for months now. However, some tech leaders and researchers are concerned about the potential risks of AI, especially as it becomes more advanced and, potentially, more capable than humans.
That's why a group of figures in tech and business, including Elon Musk, Steve Wozniak, Gary Marcus and engineers from Amazon, DeepMind, Google, Meta and Microsoft, have signed an open letter calling for a temporary halt to the development of advanced AI. The letter, issued last week by the non-profit Future of Life Institute, called on AI labs to pause for at least six months the training of any systems more powerful than OpenAI's GPT-4, which launched last month. According to the Institute's website, almost 6,000 people have signed the letter, although the validity of some of these signatures has been questioned.
GPT-4 is an AI system that can generate natural-language text on almost any topic. It is based on a massive neural network trained on billions of words from the internet. The letter argues that AI systems with "human-competitive intelligence" pose profound risks to humanity, from ethical dilemmas and social impacts to economic disruption and existential threats.
The letter proposes that AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
However much this debate might seem like an overhyped tech-bro soap opera, advertising professionals should consider the implications of moments such as this open letter's publication. From targeting and personalisation to optimisation, content creation and measurement, AI is already woven into the fabric of the ad industry. Understanding the challenges, ethics and responsibilities that come with this rapidly advancing technology is therefore crucial to navigating its growth.
We asked leaders from across the industry for their thoughts on the open letter's implications. Here's what they told us.
Alex Steer
Global chief data officer at Wunderman Thompson
Sometimes you need old wisdom to think about new things. In the eleventh century, King Cnut set his throne on the seashore and commanded the tide to stop. Spoiler: it did not. He was demonstrating the futility of believing you can stop the unstoppable. It's a good, if unusual, starting point for thinking about how to take ethical action in the AI era.
There are good reasons to be suspicious of those AI enthusiasts who argue that there should be no checks and balances on the development of this new technology. This is the ‘move fast and break things’ mentality of Silicon Valley, hyperscaled to a scenario where the speed of movement is incredible, and the scale of breakage potentially immense. We urgently need applied AI ethics, and this should not be left to technology companies (many of whom have laid off their AI ethicists recently).
But I have little sympathy for this open letter. Demanding a halt to the development of new technology, and using crude scaremongering language to do so, is not a credible ethical position. We need AI ethics that can deal with the world as it is and as it will be. When change accelerates, that matters even more.
As King Cnut knew, the tide won’t stop because you want it to.
Wesley ter Haar
Co-founder of Media.Monks
ChatGPT reached one million users just five days after its launch; Netflix took three and a half years to do the same. That explosive growth shows the incredible value people see in AI, a technology that is helping them overcome limitations in significant ways, whether that means democratising creativity or becoming more autonomous (GPT-4 can help visually impaired people navigate and understand their surroundings, for example).
That rapid adoption and continual sophistication can be cause for pause. It's true that regulators are unlikely to move at the speed of AI innovation, but the proposal offered by Future of Life isn't a realistic one, which means there needs to be a level of self-regulation in how AI is built and deployed. In our role as a digital advertising and marketing services partner, we've honed that critical eye through the everyday work of buying tools and evaluating partners.
One area that needs immediate attention is more transparency in training data sets. We believe it is possible to share where your data comes from and how you’re accounting for legality, representation, bias, and other factors without revealing competitive secrets, but we’ve found this to be a major challenge in the evaluation process.
As an industry, we need shared principles around this, so that we can demand such transparency as a condition of sale or partnership. I don't agree that postponing further AI development will solve these problems; we need to start working through them and building solutions now. At Media.Monks we have well-established strategies and principles to account for these issues in all facets of our business, and they will continue to shape our response to the societal issues AI raises.
Brian Yamada
Chief innovation officer at VMLY&R
I absolutely think it’s healthy for this industry to consider what it’s unleashing, but I likewise think this is critically important tech to advance, and to do so responsibly. That said, this letter won’t dramatically affect our AI strategy. We (and our parent company, WPP) strongly believe in responsible AI. Strive for transparency and ethics in what we do for our clients and ourselves. So this letter is a good reminder and reinforcement of those beliefs.