Music Supervisors of the Future, Your Jobs Are Safe
With discussions of A.I., machine learning and automation now ubiquitous, what naturally follows is a conversation about the decline of certain roles across all industries. Some you would expect in a digital age where automation and its promise of increased efficiency are essential to a more nimble start-up culture (bookkeeping, high-level computation, manual labour, etc.), but the conversation is now bleeding over to include the possibility of creative roles being replaced as well. This is intriguing to us.
You have to look back to the late 1960s to find the first examples of robots replacing humans on automotive production lines, but the development of sophisticated algorithms to perform far more complicated and creative tasks has only really gathered speed in recent years. Ambitious futurists that many of us humans are, there is inevitably talk of what robots and A.I. might be able to do in the creative industries in future. What could we automate? How can we cut humans out of the loop to save money and time?
The Key Question
So, are music supervisors going to be out of a job? Could A.I. eventually replace us altogether, leaving us all scrapping for the last remaining bartender’s position at the O2? There are several aspects of the music supervisor’s day-to-day that could, with more development, potentially be performed by A.I. or sophisticated algorithms. The key question, then, is whether the music supervisor as we know her could be out of a job as machine learning becomes more sophisticated, or whether there is still a space for us in this dystopian technological future. The three key areas to focus on are creative (commercial) music research; the production of bespoke compositions and scores; and the more nebulous areas of licensing, consultancy and project management.
The horses are very much out of the proverbial gate when it comes to music composition. AIVA, founded in Luxembourg by Pierre Barreau, is one of many examples of future music making, alongside projects funded by Google and IBM, among others. AIVA is even registered as a composer with PROs and has released an album (Genesis)… but this is still a set of painstakingly programmed algorithms, and Genesis, although composed by a computer, was recorded by real living humans in a studio. Google’s NSynth project is a machine learning algorithm that “uses a deep neural network to learn the characteristics of sounds, and then create a completely new sound based on these characteristics”. As with Genesis above, at this stage it still relies on human musicians using the tool and turning it into music, but how long until this is no longer necessary?
When it comes to sourcing the perfect commercial track for a film or campaign, we are already able to make use of the rapid and groundbreaking change in the way music streaming platforms enable the public to interact with and discover music. When we first started picking songs for brand films, commercials and TV shows we were still looking through our music collections: our CDs, our hard drives full of thousands of meticulously organised (ahem) folders, even our ‘vinyls’. Now many researchers’ first port of call is Spotify or Apple Music, used to get an initial palette of sounds and artists. While this may work for a very simple lyric search or broad genre search, it is very narrow in its creative scope: you are necessarily limited by what the platform’s algorithm sends back, which relies on the connections it makes between artists based on collected user data, not on an intuitive understanding of what will work on the film. There is little in this process (barring a rare serendipitous moment) that can replace the spark of inspiration which reveals a track that is completely ‘off brief’ but just works, and wins the job. Also, for the experienced and trusted supervisor there are many occasions on which you are brought in at the beginning of the process, when your client is saying, explicitly or not, ‘we don’t know what we want!’. It’s your experience, knowledge and creativity that can help construct the music brief, or narrow the focus through guidance, exploration and even putting forward tracks you know won’t work to help close certain doors. You can look at Spotify’s ‘Fans Also Like’ section to get tracks that sound similar to your reference, but where do you find something completely different that tells the story in the same way?
So where can current A.I. add value?
Among other examples we could cite, there has been interesting progress with IBM’s Watson, which analyses ‘enormous amounts of unstructured data like articles, social media, interviews and fan sentiment’ to discover the right artists for collaborations: https://www.ibm.com/watson/music/
This, with the right focus, would be a very powerful tool for brands looking to align with a particular musical personality for a partnership.
The problem remains that current A.I. is unable to engage with the subtleties of communication between the stakeholders in an audiovisual production. Directors and creatives are looking for music that supports the storytelling of their visual content, which often takes many rounds of feedback and revision… for this you need intelligent communication.
So, onto licensing. At Future Music Forum this year we discussed at length with Imogen Heap her vision of a future where each professional musician has an online ‘creative passport’, globally accessible and detailing exactly their contribution to each song they have played on, co-written or produced. There is potential here to develop automated licensing capabilities, but this would involve either unprecedented communication between international PROs (Performing Rights Organisations), or the creation of a global PRO, before all global information could be successfully and incontrovertibly assimilated (smirk). There would also still have to be an approval process, which has always been managed by human beings at rights holders and artist managers, and it’s hard to imagine this changing without a global shift in attitudes towards synchronisation.
The problem with automating licensing is that the true value of a music supervisor representing the client’s needs at the quoting stage would be lost, as a large part of the job involves drilling down into the media plan and preferred media and suggesting ways to streamline or cut out parts of the request that are unnecessary. Is it really global online, or are you targeting your paid media at specific territories? Can you geo-lock to your specific markets to help reduce the fee? Do you need an archive term? Is the client likely to want to go to TV at any point? Can we build in options? These questions (and many, many more) can help to guide a client, and also help them work out what questions to ask their own client. Questions can of course be automated, but people only want to use automated services if they make their lives easier. Having to work through a hundred ‘yes or no’ questions on an automated service is less desirable for a producer than dealing with a professional supervisor who can look at the media plan and type of content and know instinctively which questions to ask, and quickly. You can tell instantly by looking at a piece of creative whether it’s for social media rather than cinema (for example), but an automated service won’t make this distinction, so the user might have to navigate hundreds of pointless questions.
So, will A.I. replace music supervisors? We don’t think so. Will it change the role? Probably. Eventually. We think there are too many important areas within a production that unquestionably require an experienced supervisor for it to get rid of us entirely. Whatever happens, though, we’re rolling with the punches and excited to see where it takes us.
This article was co-written by Thirty Music's Founder/Director Toby Slade-Baker and Director Alex Lodge.
Genre: Creative technology, Digital, Music & Sound Design, Music performance, Strategy/Insight