Above: Rob asked Dall-E to render a photorealistic image of a living being with a computer for a mind
Like half the world recently, I’ve been experimenting with the new crop of AI tools. These have ranged from image and text generators to video-editing automation and realistic speech synthesisers, with a few experiments in creating virtual influencers thrown in for good measure.
And what have I discovered?
Something between a creative nirvana, the ultimate cheat code and Pandora's box.
Or to put it another way – I’m massively conflicted about the whole thing.
Anyone who has used OpenAI’s ChatGPT will know how brilliantly addictive it can become: very human-like conversations, and thorough, eloquent answers to pretty much any question you have – albeit based on data that is, in some cases, 18 months old. Or even, and whisper it quietly, generating intelligent-sounding copy for presentations when you are short of time. (This article is 100% human generated, I hasten to add…)
Similarly, anyone who’s used Midjourney or OpenAI’s other big title, DALL·E, to generate imagery knows it’s an amazing experience. Type what you want and within 30 seconds you are served a batch of AI-generated images you would otherwise never have found using a Google image search – or any other kind of image search, for that matter. With a little practice you can even develop your own ‘visual style’ creating images this way. That’s genuinely astonishing when you think about it.
After only a few months of noodling around and being amazed at what was achievable, I find myself thinking “I could get AI to write that and save me a job” or “I won’t bother to scamp that idea – I’ll get AI to generate some stuff and see how it looks.”
But then I’m suddenly consumed with this kind of guilty feeling. Like I am cheating on a test, or passing someone else’s work off as my own. And when I look around at the mounting and wholly understandable consternation coursing through the creator community about IP theft, I see that I’m not alone.
I also see massive potential for artists and creators to use AI as a tool to do better work themselves – not as a replacement for their services, but as a complementary one. Now I’m starting to think of AI as a kind of creative partner: someone whose skill and opinion you admire, and with whom you bounce thoughts and ideas back and forth to create something better than you would on your own. You ideate, AI iterates, then you create better than before.
By way of example, a design ‘crush’ of mine – Daniel Simon (designer of futuristic vehicles for Hollywood and the motor racing industry) – is starting to use AI to help iterate his initial concept designs. By feeding the AI his original visuals and getting it to iterate from there, he can review potentially hundreds of variations of his original design. He then incorporates elements of the AI ‘suggestions’ into his final design artwork.
In my opinion, the potential for problems comes not from the AIs themselves, but from the ethics of those who build them and, ultimately, those who use them. As I was writing this piece, I was horrified to read an article in Time that exposed how OpenAI has used very low-paid workers in the southern hemisphere to train toxic content out of its conversational AI. The intention is noble – to keep AI ‘clean’ and free of toxic content. However, the methods used to achieve this – and all automatic content moderation, for that matter – are far from ideal.
You see, an AI first has to be ‘trained’ to perform its task, and at the moment the only device capable of doing that training is the human brain. In the case of OpenAI’s content moderation partner, the owners of those brains – paid around $2 per hour – had to tag thousands of pieces of human-created text and imagery that were, frankly, horrific: the very worst kind of content humanity can create. Many reported PTSD as a result.
So to allow the rest of us to have ‘safe’ and ‘moderate’ experiences with AI, we first have to account for the worst kinds of human nature – but at what price?
And I think that’s where my thoughts on the subject currently net out. AI is only as good – or as bad – as the humans who use it. In the end, we, the users, write the prompts and curate the output to be used and shared. We decide what to use. We decide if we plagiarise. I would argue, then, that the eventual verdict on AI’s role as hero or villain rests on human nature, and algorithms can’t fully control that. Yet.