Last week I was interviewed about music and artificial intelligence (AI). This led to several stories on radio, on TV, and in print. The reason for the sudden media interest in this topic was a story by The Guardian on the use of deep learning for creating music, featuring an example of Sinatra-inspired music made with a deep learning algorithm.

After these stories were published, I was asked to participate in a talk show on Friday evening. I said yes, of course. After all, music technology research rarely hits the news here in Norway. Unfortunately, my participation in the talk show was cancelled on short notice after one of the other participants fell ill. I had already written up some notes on what I would have liked to convey to the general public about music and AI, so I thought that I could at least publish them here on the blog.

The use of AI in music is not new

While deep learning is the current state of the art, AI has been used in music-making for decades. Previously, this was mainly done on symbolic music representations: a computer algorithm is fed musical scores and tries to create, for example, new melodies. There are lots of examples of this in the computer music literature, such as in the proceedings of the International Computer Music Conference.
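To make the symbolic approach concrete, here is a minimal Python sketch of one classic technique: a first-order Markov chain over MIDI pitches. The toy corpus and the generation procedure are my own illustration, not any particular system from the literature:

```python
import random
from collections import defaultdict

# Toy corpus of melodies as MIDI pitch sequences (hypothetical training data).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [62, 64, 65, 67, 69, 67, 65, 64],
]

# Build a first-order Markov model: which pitch tends to follow which.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start=60, length=16):
    """Generate a new melody by walking the transition table."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: restart from the opening pitch
            options = [start]
        melody.append(random.choice(options))
    return melody

print(generate())
```

The output is a new melody that inherits the note-to-note tendencies of the corpus without copying any one piece, which is the core idea behind much score-based generation.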

The Sinatra example shows that it is now possible to create convincing music at the sample level as well. This is neatly done, but timbre-based AI has also been around for a while. It was actually something I was very excited about during my master’s thesis around 20 years ago. Inspired by David Wessel at UC Berkeley, I trained a set of artificial neural networks on saxophone sounds. Much has happened since then, but the basic principles are the same: you feed a computer real sound and ask it to come up with something new that is somewhat similar. We have several projects that explore this in the Interaction and Robotics cluster at RITMO.
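As an illustration of that basic principle, here is a hedged sketch of a tiny autoencoder in PyTorch. A synthetic harmonic tone stands in for the saxophone recordings, and the network is far smaller than anything used in practice; the point is only the loop of feeding sound in and getting somewhat-similar sound out:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for recorded saxophone sound: a decaying harmonic tone.
sr, dur = 16000, 2.0
t = torch.linspace(0, dur, int(sr * dur))
audio = sum(a * torch.sin(2 * torch.pi * f * t)
            for f, a in [(220, 1.0), (440, 0.5), (660, 0.25)]) * torch.exp(-t)

# Slice the signal into fixed-size frames; each frame is one training example.
frame = 256
frames = audio[: len(audio) // frame * frame].reshape(-1, frame)

# A tiny autoencoder: compress each frame to 32 numbers, then reconstruct it.
model = nn.Sequential(nn.Linear(frame, 32), nn.Tanh(), nn.Linear(32, frame))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(frames), frames)
    loss.backward()
    opt.step()

# "Something new that is somewhat similar": perturb the learned codes a little
# before decoding, so the output deviates from, but resembles, the input.
with torch.no_grad():
    codes = model[:2](frames)                       # encoder part
    new_audio = model[2](codes + 0.1 * torch.randn_like(codes)).flatten()
```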

If one also considers algorithmic music a type of AI, the approach has been around for centuries. Here the idea is to create music by formulating an algorithm that can make musical choices. There are rumours of Mozart’s dice-based procedural music in the 18th century, and I am sure that others thought about this too (does anyone know of a good overview of pre-20th-century algorithmic music?).
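A dice game of that kind is easy to express as a program. The following sketch is hypothetical (the measure bank is invented, and the historical Würfelspiel tables map each bar position to its own set of measures), but it captures the core idea of letting dice make the musical choices:

```python
import random

# Hypothetical bank of precomposed one-bar fragments, indexed by dice roll.
# Short pitch lists stand in for actual written-out measures.
measure_bank = {
    roll: [[60 + roll, 64, 67], [67, 64, 60 + roll]] for roll in range(2, 13)
}

def roll_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

def compose(bars=8):
    """Assemble a piece by letting dice choose among precomposed measures."""
    piece = []
    for _ in range(bars):
        piece.append(random.choice(measure_bank[roll_two_dice()]))
    return piece

print(compose())
```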

There will be changes in the music industry

As the news stories this week showed, many people in the music industry are scared of what is happening with the introduction of more AI in music. And, yes, things are surely going to change. But this is, again, nothing new. The music industry has always been changing. The development of professional concert halls in the 19th century was a major change, and the recording industry changed music forever in the 20th century. That doesn’t necessarily mean that everyone will lose their jobs. Even though everyone can listen to music everywhere these days, many people still enjoy going to concerts to experience live music (and even more so now that corona has deprived us of the possibility).

Sound engineers, music producers, and others involved in the production of recorded music exemplify the new professions that emerged with the new music industry in the 20th century. AI will lead to new music jobs being created in the 21st century. We have already seen that streaming has changed the music industry radically. Although most people don’t think much about it in daily life, there are lots of algorithms and AI involved under the hood of streaming services.

There will surely be some people in the industry who lose their jobs, but many new jobs will also be created. Music technologists with AI competency will be sought after. This is something we have known for a long time, and it is why we teach courses on music and machine learning and interactive music systems in the Music, Communication and Technology master’s programme at UiO.

Another topic that has emerged from this week’s discussions is how to handle copyright and licensing. And, yes, AI challenges the current systems. I would argue, however, that AI is not the problem as such; rather, it exposes some basic flaws in the current system. Today’s copyright system is built around the song as the primary copyright “unit”. This probably made sense in a pre-technological age in which composers wrote a song that musicians performed. Things are quite different in a technology-driven world, where music comes in many different forms and formats.

The Sinatra example can be seen as a type of sampling, just at a microscopic scale. Composers have “borrowed” from others throughout music history, usually in the form of melody, harmony, and rhythm. Producers have for decades used sound fragments (“samples”) from others’ recordings, which has led to lots of interesting music and several lawsuits. There are numerous challenges here, and we have two ongoing research projects at UiO that explore the usage of sampling and copyright in different ways (MASHED and MUSEC).

What is new now is that AI makes it possible to also “sample” at the sample level, that is, at the level of the individual values that make up a digital sound wave. If you don’t know what that means, take a look at a zoomed-in view of a complex waveform, like the one below (coded by Jan Van Balen):

[Figure: a zoomed-in view of a complex waveform, showing the individual samples]
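For the code-inclined, here is a small numpy sketch of what “sample level” means: a complex waveform is, in the end, just a long array of numbers, and each of those numbers is one sample. This is my own illustration, unrelated to Van Balen’s code:

```python
import numpy as np

# A complex waveform: a few sine partials mixed together (illustrative only).
sr = 44100                      # samples per second (CD quality)
t = np.arange(sr) / sr          # time stamps for one second of audio
wave = (np.sin(2 * np.pi * 220 * t)
        + 0.4 * np.sin(2 * np.pi * 550 * t)
        + 0.2 * np.sin(2 * np.pi * 1234 * t))

# Zooming all the way in: individual samples, the atoms of digital audio.
print(wave[:8])                               # the first eight sample values
print(len(wave), "samples in one second at", sr, "Hz")
```

Sample-level “sampling” means generating or borrowing material at the scale of these individual numbers rather than at the scale of recognizable melodies or sound fragments.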

Splicing together sound samples with AI opens up some interesting technological, philosophical, and legal questions. For example, how short can a sample be and still be covered by copyright? What types of representations of such samples should be considered? Waveforms? Equations? Software with an API? Clearly, there is a need to think more carefully about how this should be implemented.

Possibilities with music and AI

The above-mentioned challenges (a changing music industry and copyright discussions) are not trivial, and I understand that many people are scared by the current changes. However, this fear of new technology may get in the way of many of the positive aspects of using AI in music.

It should be mentioned that many of the new methods have been developed and explored by composers, producers, and other music technologists. The intention has been to use machine learning, evolutionary algorithms, and other types of AI to generate new sound and music that would otherwise not be possible. There are some extreme cases of completely computer-generated music; check out, for example, the autonomous instruments of my former colleague Risto Holopainen. In most cases, however, AI has been (and still is) used as part of a composition/production process together with humans.

Personally, I am particularly interested in how AI can help create better interactive music systems. Over the last century, we have seen a shift towards music becoming a “hermetic” product. Up until the 20th century, music was never fixed; it changed according to who played it. To experience music, you either had to play yourself or be in the vicinity of someone else who played. Nowadays, we listen to recorded music that never changes. This has led to an increased perfection of the final product. At the same time, it has removed many people from the experience of participating in musicking themselves.

New technologies, such as AI, allow for creating more personalized music. The procedural audio found in computer games is one example of music that can be “stretched” in, for example, its duration. New music apps allow users to modify parts of a song, such as adding or removing instruments from a mix. There are also examples of interactive music systems made for work, relaxation, or training (check, for example, the MCT student blog). They all have in common that they respond to some input from the user and modify the music accordingly. This allows for a more engaging musical experience than fixed recordings. I am sure we will see many more such examples in the future, and they will undoubtedly benefit from better AI models.
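As a minimal sketch of the underlying idea, assume a song is stored as separate stems (layers) and re-rendered from whatever the listener enables. The stems here are synthesized stand-ins, and real systems are of course far more sophisticated:

```python
import numpy as np

sr = 44100
t = np.arange(sr * 4) / sr  # four seconds of time stamps

# Hypothetical "stems": simple synthesized layers standing in for real tracks.
stems = {
    "bass":   0.5 * np.sin(2 * np.pi * 110 * t),
    "chords": 0.3 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 277 * t),
    "lead":   0.4 * np.sin(2 * np.pi * 440 * t),
}

def render_mix(user_choices):
    """Re-render the song from whichever layers the listener has enabled."""
    active = [stems[name] for name, on in user_choices.items() if on]
    return sum(active) if active else np.zeros_like(t)

# The same piece, personalized: this listener drops the lead and keeps the groove.
mix = render_mix({"bass": True, "chords": True, "lead": False})
```

The interesting part for AI is deciding *how* the music should respond to input (movement, activity, mood), not just toggling layers; but the respond-and-re-render loop is the common skeleton.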

Music and AI is the future

AI is coming to music as it is coming to everything else these days. This may seem disruptive to parts of the music industry, but it could also be seen as an opportunity. New technologies will lead to new music and new forms of musicking.

Some believe that computers will take over everything. I am sure that is not the case. What has become very clear from our corona home-office lives is that humans are made of flesh and blood: we like to move around, and we are social. The music of the future will continue to be based on our need to move, to run, to dance, and to engage with others musically. New technologies may even help us do that better. I am also quite sure that we will continue to enjoy playing music ourselves on acoustic instruments.