Hazzman t1_jcvh4u7 wrote
Reply to comment by UnlikelyPotato in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
It won't matter if it's full of AI created content or if movies are made using it - people have to actually want to watch them.
And people aren't going to want to watch them. It's going to be fluff. It's going to be a whole lot of gravel filling up these platforms.
Does that mean this will always be the case? No... of course not. But just the ability to create SOMETHING and post it on YouTube doesn't mean it's going to be worth looking at or that it will gain any traction. Not at first.
Eventually this will change, but early on it's just going to be novel fluff.
Hazzman t1_jc5zz1z wrote
Reply to comment by mundodiplomat in A philosophical dive into “Everything Everywhere All at Once” by Azmisov
Actually I don't think the movie is masquerading as anything. I think it is what it is: a fun and chaotic movie with some stuff about relationships. I don't think it tries to pretend to be particularly deep. I mean, a giant black-hole bagel as the source of all evil, or whatever?
I think it's important to separate the movie, its actors and the people who wrote and produced it from the awards ceremony, which is placating the masses in order to win over viewers for a show that no longer has the same grip on the public's imagination as it once did.
Everyone likes a feel good story, so the Oscars produced a couple this year and it was popular and it worked.
It doesn't mean the movies that are a part of that production are any less for it.
Are there going to be articles and fluff pieces trying to paint these productions as more than they are? Probably. Does that mean they aren't actually good, or that they don't have some depth or anything worth appreciating? Absolutely not.
Lots of people will get annoyed at the recipients of these cynical backroom manipulations - but I think that's silly. Just because the feel good story was manufactured doesn't mean it doesn't feel good. Does the movie and its performers deserve an Oscar? Honestly? More than anything I think - who cares? In my opinion no, probably not. But - essentially who cares?
You will see the same people claim that these actors didn't deserve these Oscars while also claiming the Oscars are silly and pointless and people shouldn't watch them. Who cares.
Hazzman t1_jagn18t wrote
Reply to comment by Nebu_chad_nezzarII in The imperfect translation between thoughts and language by LifeOfAPancake
It reminds me of how some languages have words for emotions, concepts or scenarios that other languages don't have, but that people can still experience - just without the words to express them succinctly, or even at all. Like Schadenfreude. We know what that means, and it is precise in its expression. Without that word, an English-speaking person would have to deliberately say "their failure, pain and/or harm is satisfying to me," which is cumbersome.
But what about emotions we can feel that we don't have words for in any language? I'm sure there are many of these across different languages that aren't present in English, but that we would understand if presented with them.
I also think of 1984's 'Newspeak', where the dictatorship of the future controls people's thoughts by eroding their language, until the concept of revolution or rebellion no longer has a word or phrase to describe it and therefore doesn't exist as a possibility for the people.
Hazzman t1_j8c6v7u wrote
Reply to comment by Imaginary_Ad307 in Bing Chat blew ChatGPT out of the water on my bespoke "theory of mind" puzzle by Fit-Meet1359
Here's the thing - all of these capabilities already exist. It's just about plugging the right pieces of technology together. If something like this language model is the user interface of an interaction, something like Wolfram Alpha or a medical database becomes the memory of the system.
Literally plugging in knowledge.
What we SHOULD have access to is the ability for me, at home, to plug in my blood results and ask the AI: "What are some ailments or conditions I am likely to suffer from in the next 15 years? How likely are they, and how can I reduce the likelihood?"
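To make the "plugging in knowledge" idea concrete, here's a minimal sketch of how a language-model front end could sit on top of a structured lookup. Everything in it is hypothetical: the reference ranges are made-up illustrative numbers, and llm_summarize() is just a stub standing in for whichever model API you'd actually call. The point is the shape of the wiring, not a real medical tool.

```python
# Minimal sketch: structured data acts as the "memory", a language model acts as the interface.
# All values and function names here are illustrative placeholders, not medical guidance.

# Toy reference table standing in for a real medical knowledge base.
REFERENCE_RANGES = {
    "ldl_mg_dl": (0, 100),
    "fasting_glucose_mg_dl": (70, 99),
    "hba1c_percent": (4.0, 5.6),
}

def flag_out_of_range(results: dict) -> list[str]:
    """Compare lab results against the reference table and collect out-of-range flags."""
    flags = []
    for marker, value in results.items():
        low, high = REFERENCE_RANGES.get(marker, (None, None))
        if low is not None and not (low <= value <= high):
            flags.append(f"{marker}={value} outside reference range {low}-{high}")
    return flags

def llm_summarize(prompt: str) -> str:
    """Placeholder for a call to a language model; here it just echoes the prompt."""
    return f"[model response to]: {prompt}"

if __name__ == "__main__":
    my_results = {"ldl_mg_dl": 148, "fasting_glucose_mg_dl": 104, "hba1c_percent": 5.4}
    flags = flag_out_of_range(my_results)
    question = (
        "Given these flagged markers, what conditions am I at elevated risk for "
        "over the next 15 years, and how could I reduce that risk? " + "; ".join(flags)
    )
    print(llm_summarize(question))
```

The structured lookup does the factual grounding; the model only has to turn the flagged numbers into a readable answer.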
The reason we won't have access to this is 1) it isn't profitable for large corporations, who WILL have access to this with YOUR information, and 2) insurance. It will raise ethical issues around insurance and preexisting conditions, and on that basis they will deny the public access to these capabilities. Which is of course ass backwards.
Hazzman t1_j4p31er wrote
Reply to comment by turnip_burrito in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
"We made it up"
Hazzman t1_j4n7rao wrote
Reply to comment by thegoldengoober in Researchers develop an artificial neuron closely mimicking the characteristics of a biological neuron by MichaelTen
Memories are tiny globs of slime that cling to our neurons. Those tiny globs slide up and down your neurons like trains on a track. When you remember something they slide into an area of the brain called "Salitzar's Pit" which spreads the glob across a cluster of cells that 'read' the memory.
These artificial neurons can transmit memories - a capability that eluded scientists until now. Meaning we can take all the globs of a person's memories and put them in an artificial brain and I'm making all of this nonsense up.
Hazzman t1_j34mxsq wrote
One of the biggest risks with AI now and in the future is people's propensity to anthropomorphize it. I've had endless discussions with people who want to get into inane arguments about whether or not this stuff is sentient. It isn't. You are looking at advanced pattern recognition systems. That is all. Go ahead and tell me how "wE aRe JusT adVanCeD pAtTeRn ReCoGnItIon sYsTeMs" so I know not to bother responding.
These systems are going to become more advanced as time goes by, and people are going to be more willing and compelled to anthropomorphize them further. It's annoying because it will eventually impact legislation: the same compulsions that drive the general public to make this mistake will drive legislatures to create policy based on these misconceptions.
Hazzman t1_j2vx230 wrote
Reply to Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
ChatGPT is going to kill a lot of stupid people.
Hazzman t1_jcw7exy wrote
Reply to comment by Artanthos in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
You are talking about narrow AI. AGI is different - this is why the smartest people in the world who make this shit have repeatedly explained they are concerned.