Submitted by HaHaBlahs t3_zo42c8 in Futurology

I predict that sometime in the future, a system will be created that automatically analyses your brain activity in real time, then creates and plays the perfect sound tones for that moment.

Not going to give an estimate on when this may happen. It's just something that feels inevitable given a few assumptions.

For something more near term: we already have constant surveillance; we just need a bit of software that can piece all of that together and then recommend music that already exists.
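
Roughly what I'm picturing, as a toy sketch (the signals, weights, and catalogue below are all made-up placeholders, not a real implementation):

```python
# Toy sketch of the near-term version: fuse whatever signals are already being
# collected into a rough mood estimate, then pick an existing track to play.
# Every signal name, weight, and catalogue entry here is an illustrative assumption.

CATALOGUE = [
    {"title": "Calm Piano Loop", "energy": 0.2, "valence": 0.6},
    {"title": "Driving Techno",  "energy": 0.9, "valence": 0.5},
    {"title": "Upbeat Pop",      "energy": 0.7, "valence": 0.9},
]

def estimate_mood(signals: dict) -> dict:
    """Collapse observed signals (each on a 0-1 scale) into a crude energy/valence guess."""
    energy = 0.5 * signals.get("typing_speed", 0.5) + 0.5 * signals.get("heart_rate", 0.5)
    valence = 0.5 * signals.get("message_sentiment", 0.5) + 0.5 * (1 - signals.get("stress", 0.5))
    return {"energy": energy, "valence": valence}

def recommend(signals: dict) -> str:
    """Return the catalogue track closest to the estimated mood."""
    mood = estimate_mood(signals)
    best = min(
        CATALOGUE,
        key=lambda t: (t["energy"] - mood["energy"]) ** 2 + (t["valence"] - mood["valence"]) ** 2,
    )
    return best["title"]

print(recommend({"typing_speed": 0.9, "heart_rate": 0.8, "message_sentiment": 0.7, "stress": 0.3}))
```

A real version would need far richer signals and an actual model, but the shape of the pipeline is the same: fuse signals into a state estimate, then search a catalogue of music that already exists.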

0

Comments

aeusoes1 t1_j0l2l90 wrote

Combine that with those headphones that have cat ear speakers and you've got your own theme song that people hear when you walk in the room.

1

thisimpetus t1_j0m01ku wrote

I think you're vastly, vastly underestimating musical composition and our relationships with it, and I say relationships because what we hear and enjoy, and why, varies enormously from person to person.

I'll just give myself as an example: I studied audio engineering and also play music. I also have ADHD. My absolute favourite music is my favourite in part because of my tastes in melodies and rhythms, in part because I have training in the actual recording process and can hear things that the untrained ear simply cannot (there's no conceit here, it's just the consequence of knowing what you're listening for and a very great deal of practice), and then finally I have neurological mechanisms that affect my tastes. My favourite music sounds like cacophony to some of my friends, who don't at all enjoy listening to sixty-four simultaneous tracks while lying perfectly still in the dark concentrating as hard as you can.

Meanwhile, I also enjoy dancing, and everything I just said goes straight out the window into irrelevance as soon as moving my body is involved, whereupon the criteria for what I enjoy are fundamentally different. A dirty beat that makes my feet work is just repetitive and boring if all I'm doing is listening; a pop-orchestral synthesis of myriad musical styles all engineered to precision sounds great on good monitors and almost like noise as background.

Any AI that sought to do what you're proposing would have a much, much greater task in front of it than simply assigning acoustic data to a particular pattern of brain function. The degree to which such an AI would have to meaningfully grasp subjectivity is way, way, way more complicated than you're imagining. I don't mean that the AI would have to understand what it's doing, exactly, but rather that it would have to train on such a staggeringly vast corpus of data that it is, for the moment, unimaginable.

Consider training an AI on images. It takes thousands of them, and the training usually has to be done on a rented supercomputer or cloud network to have enough computing power to get it done in a reasonable period of time. And that's just pixels. The difference in data size between a corpus of images and the entire operation of brains in all their myriad complexity is so vast I can't really express it: gigabytes vs petabytes at the very least.
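
Just as a back-of-envelope sketch (every number here is an assumption I picked purely to show the scale gap, not a measurement):

```python
# Back-of-envelope comparison: a typical labelled image corpus vs. continuous
# high-resolution neural recordings. All figures below are assumptions chosen
# only to illustrate the order-of-magnitude difference.

GB = 1e9
PB = 1e15

# A large image dataset: roughly 1.3 million images at ~100 KB each.
image_corpus_bytes = 1.3e6 * 100e3            # about 130 GB

# Hypothetical neural recording: 256 channels, 1 kHz sampling, 4 bytes per sample.
bytes_per_second = 256 * 1000 * 4             # about 1 MB/s per person
one_year = 365 * 24 * 3600
per_person_year = bytes_per_second * one_year # about 32 TB per person per year

# Say the training corpus needs 10,000 people recorded for a year.
neural_corpus_bytes = per_person_year * 10_000

print(f"image corpus:  {image_corpus_bytes / GB:,.0f} GB")
print(f"neural corpus: {neural_corpus_bytes / PB:,.0f} PB")
```

Even with fairly modest assumptions the neural side comes out thousands of times larger, which is the point.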

So we can't currently collect that data, never mind begin to process it, never mind train on it. And even if we had all that, we don't have hardware that can even begin to monitor your brain activity in anything like the real-time fidelity such an AI would need to make intelligent choices about what to feed you, and we definitely don't have that kind of hardware at the consumer level.

What you might see is a much, much, much simpler service, in the next decade or two, that can match some very basic musical choices to mood and attention. That's vastly, vastly different from an AI authoring music that's matched in terms of enjoyment by our favourite artists today.

AI can write music now; it might soon write popular music, but it won't be using neural data to train on, it'll use social media and downloads and playlists etc. to figure out what will be popular. Custom, personalized, real-time music is a whole other ballgame and not currently even on the horizon.

3

RPC3 t1_j0mwlcb wrote

You are underestimating neural complexity and the brain's relationship with music. Music is a form of synesthesia in which you feel things based on sound. You can't look at an area and say "this is where the feeling of the music lies." In reality, it's an emergent phenomenon based on a ton of factors.

Also, even if you could do it, what is perfect? How would you classify that? Who decides? This is a situation in which you can't pin down perfect. It doesn't exist.

1

PurpleDancer t1_j0nd3ut wrote

What seems very likely is automated DJs that keep track of the room, look for all the signs of engagement, read the crowd, that sort of thing.
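
Something along these lines, as a toy sketch (the engagement reading and the track pool are stand-ins for whatever sensors and catalogue a real system would use):

```python
import random

# Toy feedback loop for an "automated DJ": read a crowd-engagement score after
# each track, then nudge the energy of the next selection up or down.
# The engagement source and the track pool are placeholder assumptions.

TRACKS = [{"title": f"Track {i}", "energy": round(i / 10, 1)} for i in range(1, 11)]

def read_engagement() -> float:
    """Placeholder for cameras/wearables reading the room; returns a 0-1 score."""
    return random.random()

def pick_track(target_energy: float) -> dict:
    """Choose the pool track whose energy is closest to the current target."""
    return min(TRACKS, key=lambda t: abs(t["energy"] - target_energy))

def dj_loop(sets: int = 5, energy: float = 0.5) -> None:
    for _ in range(sets):
        track = pick_track(energy)
        engagement = read_engagement()
        # If the floor is responding, push the energy up a bit; otherwise ease off.
        energy = min(1.0, energy + 0.1) if engagement > 0.5 else max(0.0, energy - 0.1)
        print(f"{track['title']} (energy {track['energy']}), crowd {engagement:.2f} -> next target {energy:.1f}")

dj_loop()
```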

1