
thisimpetus t1_j0m01ku wrote

I think you're vastly, vastly underestimating musical composition and our relationships with it. I say relationships because what we hear and enjoy, and why, varies enormously from person to person.

I'll just give myself as an example: I studied audio engineering and also play music. I also have ADHD. My absolute favourite music is my favourite in part because of my tastes in melodies and rhythms, in part because I have training in the actual recording process and can hear things the untrained ear simply cannot (no conceit here; it's just the consequence of knowing what you're listening for, plus a very great deal of practice), and finally because I have neurological mechanisms that affect my tastes. My favourite music sounds like cacophony to some of my friends, who don't at all enjoy listening to sixty-four simultaneous tracks while lying perfectly still in the dark, concentrating as hard as you can.

Meanwhile, I also enjoy dancing, and everything I just said goes straight out the window into irrelevance as soon as moving my body is involved; the criteria for what I enjoy become fundamentally different. A dirty beat that makes my feet work is just repetitive and boring if all I'm doing is listening; a pop-orchestral synthesis of myriad musical styles, all engineered to precision, sounds great on good monitors and almost like noise as background.

Any AI that sought to do what you're proposing would have a much, much greater task in front of it than simply assigning acoustic data to a particular pattern of brain function. The degree to which such an AI would have to meaningfully grasp subjectivity is way, way, way more complicated than you're imagining. I don't mean that the AI would have to understand what it's doing, exactly, but rather that it would have to train on such a staggeringly vast corpus of data that it is, for the moment, unimaginable.

Consider training an AI on images. It takes thousands of them, and the training usually has to be done on a rented supercomputer or cloud cluster to have enough computing power to get it done in a reasonable period of time. And that's just pixels. The gap between a corpus of images and the entire operation of brains in all their myriad complexity is so vast I can't really express it: gigabytes versus petabytes at the very least.
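
To put rough numbers on that, here's a back-of-envelope sketch in Python. Every figure (corpus size, channel count, sample rate, listener hours) is an assumption for illustration, not a measurement:

```python
# Back-of-envelope scale comparison. Every number here is an
# illustrative assumption, not a measurement.

# A typical image corpus: say one million images at ~100 KB each.
image_corpus_bytes = 1_000_000 * 100 * 1024  # ~0.1 TB

# Neural data: assume a hypothetical rig with 100,000 channels
# sampled at 1 kHz, 2 bytes per sample, recorded from 1,000
# listeners for 100 hours each.
channels = 100_000
sample_rate_hz = 1_000
bytes_per_sample = 2
total_seconds = 1_000 * 100 * 3600  # listeners * hours * seconds/hour

brain_corpus_bytes = channels * sample_rate_hz * bytes_per_sample * total_seconds

print(f"image corpus: {image_corpus_bytes / 1e9:.0f} GB")   # ~102 GB
print(f"brain corpus: {brain_corpus_bytes / 1e15:.0f} PB")  # ~72 PB
print(f"ratio: ~{brain_corpus_bytes / image_corpus_bytes:,.0f}x")
```

Even under these assumptions, the neural corpus comes out hundreds of thousands of times larger than the image corpus.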

So we can't currently collect that data, never mind process it, never mind train on it. And even if we had all that, we don't have hardware that can monitor your brain activity at anything like the real-time fidelity such an AI would need to make intelligent choices about what to feed you, and we definitely don't have that kind of hardware at the consumer level.
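
For a sense of the real-time gap, compare the raw data rate of a consumer EEG headset with the kind of hypothetical high-fidelity rig that argument implies (again, all figures are assumptions for illustration):

```python
# Raw data-rate comparison for real-time brain monitoring.
# All figures are assumptions for illustration.

def stream_rate_mb_s(channels: int, sample_rate_hz: int,
                     bytes_per_sample: int = 2) -> float:
    """Raw throughput of a neural recording stream, in MB/s."""
    return channels * sample_rate_hz * bytes_per_sample / 1e6

# A consumer EEG headset: on the order of 10 channels at 256 Hz.
consumer = stream_rate_mb_s(channels=10, sample_rate_hz=256)

# The fidelity the argument implies: say 100,000 channels at 1 kHz
# (hypothetical; nothing like this exists as wearable hardware).
high_fidelity = stream_rate_mb_s(channels=100_000, sample_rate_hz=1_000)

print(f"consumer headset: {consumer:.3f} MB/s")      # ~0.005 MB/s
print(f"hypothetical rig: {high_fidelity:.0f} MB/s")  # 200 MB/s
```

That's a roughly 40,000x difference in raw bandwidth, before you even get to signal quality.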

What you might see, in the next decade or two, is a much, much, much simpler service that can match some very basic musical choices to mood and attention. That's vastly, vastly different from an AI authoring music we'd enjoy as much as our favourite artists' work today.
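
For a feel of how crude that near-term version would be, here's a toy sketch. The signal names, thresholds, and playlist buckets are all invented; a real service would learn these mappings rather than hard-code them:

```python
# Toy sketch of a near-term "match music to mood/attention" service.
# Signal names, thresholds, and playlist buckets are invented here;
# a real service would learn the mappings rather than hard-code them.

def pick_playlist(arousal: float, attention: float) -> str:
    """arousal/attention are normalized 0..1 estimates, e.g. from a
    wearable's heart-rate variability and a simple EEG band ratio."""
    if attention > 0.7:
        return "instrumental-focus"   # avoid lyrics while concentrating
    if arousal > 0.6:
        return "high-energy-dance"    # the dirty beats
    if arousal < 0.3:
        return "ambient-wind-down"
    return "familiar-favourites"      # safe default

print(pick_playlist(arousal=0.8, attention=0.2))  # high-energy-dance
```

Note how little this has to do with composing anything: it's matching coarse signals to pre-made buckets.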

AI can write music now, and it might soon write popular music, but it won't be training on neural data. It'll use social media, downloads, playlists, etc. to figure out what will be popular. Custom, personalized, real-time music is a whole other ballgame and not currently even on the horizon.
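
As a sketch of what "training on engagement instead of neural data" could look like, here's a toy popularity score. The features and weights are invented for illustration; a real system would fit them on historical chart data:

```python
# Toy popularity score built from engagement signals rather than
# neural data. Features and weights are invented for illustration;
# a real system would fit them on historical chart data.

def popularity_score(plays: int, playlist_adds: int,
                     shares: int, early_skip_rate: float) -> float:
    """Higher is more likely to chart; early_skip_rate is the
    (assumed) fraction of plays skipped in the first 30 seconds."""
    plays = max(plays, 1)  # guard against division by zero
    return (0.4 * playlist_adds / plays
            + 0.5 * shares / plays
            - 0.3 * early_skip_rate)

print(popularity_score(plays=100_000, playlist_adds=12_000,
                       shares=3_000, early_skip_rate=0.15))
```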
