ethereal3xp OP t1_j8nokh1 wrote
Reply to comment by QristopherQuixote in Elon Musk, who co-founded firm behind ChatGPT, warns A.I. is ‘one of the biggest risks’ to civilization by ethereal3xp
If it's not task-based AI... what other kind of AI will there be?
Task-based AI caters to human needs, to make daily human life easier.
Even made-up droids from movies like Interstellar are mainly task-based. They aren't going to override a human command, only make suggestions based on their calculations.
QristopherQuixote t1_j8nqhxa wrote
Strong AI implies consciousness and self-awareness. This has been the holy grail of AI since the 1970s. Neural networks are function emulators where input produces the desired output. Neural networks use classified or labeled training data and feedback to self-correct (backpropagation) until their functional output is acceptable. Deep learning and layered networks leverage models that were already trained to produce a more complex network. There are several different types of neural networks, like convolutional, feed-forward, etc.

By using multi-model and filtering approaches, models can be combined so that more and more complex tasks can be accomplished. For example, driving involves several models working in concert, like one that determines the road type, a few more for feature extraction, etc.

Many statistical models such as clustering and regression are called "machine learning" and AI, even though they weren't when I first learned them. Many of the original AI systems were rules-based and were called "expert systems." However, how these techniques produce outputs is dramatically different from how a brain does. Mimicking human behavior and capabilities is very different from possessing them the way any creature with a brain does.
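To make the "function emulator" point concrete, here's a minimal toy sketch of the loop described above: a tiny network is fed labeled training data, and backpropagation feeds the error back to self-correct the weights until the output is acceptable. The network size, learning rate, and XOR task are arbitrary illustrative choices, not anyone's real system.

```python
import numpy as np

# Toy function emulator: learn XOR from labeled examples.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

# One hidden layer of 8 units; sizes are arbitrary.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: the network's current guess at the function.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the error back through
    # the layers and nudge every weight to reduce it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(final.ravel())  # after training, values should sit near the XOR labels
```

Nothing here "thinks": it's just gradient descent shrinking the gap between output and labels, which is the sense in which even very capable networks remain task-based.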
QristopherQuixote t1_j8nw2c1 wrote
You shouldn't rely on science fiction to be your guide on how AI will evolve in the future. Skynet is evil. In I, Robot it was insane. In Bicentennial Man, it became fully human. In Star Wars it was benign and essentially slavish. The robots in Interstellar were essentially assistants who did not act independently. In Transcendence a human mind was "uploaded," creating a strong AI. In Chappie, AI happened by accident, resulting in a strong AI formed in a robot and in a digital copy of a human mind.
Strong AI doesn't exist... yet.
ethereal3xp OP t1_j8ny7ty wrote
>You shouldn't rely on science fiction to be your guide on how AI will evolve in the future.
Why not?
They are ideas...good or bad
Contagion was a brilliant movie... and many parts proved true / did happen during Covid (real life)
- Self-learning, consciousness, complex actions -... which is considered strong AI. First, why are these things needed for humanity?
Eventually... it will mean "overriding" some human decisions
Is this notion better for humanity or not? Or does it pose a danger?
And who will make this decision... a group of smart humans... or the so-called strong AI they create?
QristopherQuixote t1_j8ogjgs wrote
Do you need a history of all the things SciFi got wrong? Asimov, Heinlein, etc?
Contagion is based on actual science. Read books by Robin Cook if you want to see an actual scientist write science fiction. His book "Vector" predicted the use of anthrax as a terrorist weapon. However, folks like Michael Crichton have been spectacularly wrong even though he had an MD. Crichton was a science skeptic in some respects who questioned bans on DDT and wrote a book that made a mockery of environmental activism. He also wrote a book against AI called "Prey," which featured a swarm intelligence of nanobots that was beyond silly.
We don't even know if strong AI is possible. It doesn't appear to be necessary for us to get value from task-based AI. Artificial neural nets are everywhere, including in cruise control in cars, smart thermostats, etc. Some smartphones like the Pixel have them. Components of AI are being used more and more.
We cannot confuse complexity with strong AI. Very complex AI systems can still be weak, task-based AI. Consciousness and independent action are not part of AI now. No existing AI system can be considered to be "thinking." This idea that an AI overlord will emerge to override human action is pure science fiction. The human brain has trillions of interconnections between billions of neurons with an incredible input system. No computer can match it yet.
Jake0024 t1_j8rx7du wrote
> what other kind of AI will there be?
General (not task based) AI
ethereal3xp OP t1_j8t2rub wrote
Ok but what?
Even self-driving cars (automation)... are task-based if you think about it
Their focus is to drive a car. And they won't be able to drive as well as a human... if you factor in traffic or accident situations.
General AI, or strong AI as some have labelled it... is almost like a human brain.
It can deep-learn, make advanced calculations and make conscious decisions without human approval. It's a long way away...
Jake0024 t1_j8tlta8 wrote
Correct, a self driving car is not general AI. And yes, it's a long way off. That's why Musk freaking out about ChatGPT is so hilarious.
ethereal3xp OP t1_j8tn6g5 wrote
Because he doesn't trust anybody other than himself
General AI in theory would mean an easier/more comfortable life for humans
But humans may end up becoming dumber. And in addition, if the AI's inventor were some kind of environmentalist... he could set the AI to act on "saving the planet"
Meaning shut off power, gas, factories (when not needed) etc.... even if it could mean some human suffering.
And a human couldn't override it