Submitted by Defiant_Swann t3_xywsfd in Futurology
Comments
BrilliantLight35 t1_irjrl5m wrote
Sentient hammer scary
MadMadBunny t1_irmvyy2 wrote
Can’t touch this
Memomomomo t1_irlhbgy wrote
because scifi has done irrevocable damage to public perception of AI
people will complain about boomers being scared of nuclear energy and then immediately type up the dumbest take you've ever seen regarding AI
ph30nix01 t1_irkqp1y wrote
Evolution of technology is all it is
AesonMeric t1_irlpqfe wrote
>You use a stone to make a knife.
I like how our brain could be the stone in this analogy. Yeah, we're just working with software now, but the end goal is to capture (and improve) functionalities of the human brain.
And to be melodramatic, the stone is forgotten in the end.
Kyocus t1_irpwqpo wrote
Scalability is the primary difference.
potatolover00 t1_irlb9ir wrote
It absolutely is the same.
Have you read/seen what AI makes? It's recycled from other works and often has issues humans can easily see.
guerillawobbler t1_irlkz7n wrote
You’re assuming that what is currently happening is going to be static?
Nah, I don’t buy it.
A hammer and nail would not be able to improve and build themselves, which AI has the potential to do.
potatolover00 t1_irll3r3 wrote
Improve themselves? AI doesn't do that yet, and it doesn't have the ability to think yet. Current methods of constructing AI focus on slight adjustments and trials on the scale of billions of iterations a second, not on someone typing it up in Jimmy's basement.
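For the record, "slight adjustments and trials" means something like this toy loop (made-up loss, placeholder numbers; real training just does this across billions of weights):

```python
# Toy gradient descent: modern "AI training" is billions of tiny numeric
# nudges like this, not anyone typing up intelligence by hand.
def loss(w):
    return (w - 3.0) ** 2        # made-up objective, minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)       # derivative of the loss above

w = 0.0                          # arbitrary starting weight
lr = 0.1                         # learning rate: how slight each adjustment is
for _ in range(100):
    w -= lr * grad(w)            # one tiny adjustment per trial
print(round(w, 4), round(loss(w), 6))   # ~3.0, ~0.0 after many small steps
```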
guerillawobbler t1_irlmpyu wrote
So it’s never going to get to that stage?
potatolover00 t1_irn6ycr wrote
I massively doubt it.
Devadander t1_irj8ted wrote
Isn’t that the underlying concern with AI? Self replicating, self improving etc etc wipe out all humans?
SeaworthinessFirm653 t1_irjc2m6 wrote
Yes. The moment an AI is capable of producing an AI superior to itself, the singularity will have been reached. However, this is not as simple as it sounds.
chaos021 t1_irjdn64 wrote
It never is and that's how we will likely end up pushing it too far
SeaworthinessFirm653 t1_irjdpyb wrote
AI ethics will be critical in the coming decades.
chaos021 t1_irjdu8k wrote
And I think that's the issue right there. It's not a future problem. It's a current problem.
shawnikaros t1_irkoae7 wrote
There's a lot of current problems that should've been past problems a decade ago when it comes to technology. This won't be any different. We're screwed.
Basic_Description_56 t1_irjzubb wrote
It’s ridiculous to think you can limit the exponentially advancing development of AI with ethics
Gubekochi t1_irkznug wrote
The goal isn't to limit it, just to channel it so we don't eradicate ourselves with our new tools/overlord.
SeaworthinessFirm653 t1_irlmegf wrote
Just as it is ridiculous to think that a constitution will limit violations of human rights. Yet you may agree that a constitution is a good idea for securing rights despite its inevitable violations, no?
Ethics is a leverage point for using AI safely. Your insult doesn't do you any favors.
205049 t1_irjwr06 wrote
Ethics? In this age?
SeaworthinessFirm653 t1_irlmi91 wrote
Extensive media coverage creates the impression that we are far less moral in this era than in earlier ones, when that is certainly not the case. Regardless, AI ethics continues to be a growing field.
Magicalunicorny t1_irjsofd wrote
One day it's just far more complicated than we can comprehend, the next it's the singularity
code_turtle t1_irl9okm wrote
I think a lot of you are mistaking AI for “artificial consciousness”. There are already a lot of AI techniques that involve AI helping to build other AI. But we’re not even close to building anything that can be considered “conscious”.
SeaworthinessFirm653 t1_irlmaxf wrote
If AI is capable of producing AI superior to itself, then logically it creates a self-accelerating intelligence that will inevitably prove superior to us. AGI is implied when AI can produce better AI that can produce better AI.
danielv123 t1_irm9qiz wrote
Actually, that scenario doesn't require a self accelerating intelligence, just a self advancing intelligence. There are other growth types than exponential and quadratic. It could run into the same issues as we do with Moore's law and frequency scaling etc, and only manage minor improvements with increasing effort for each step.
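A crude way to see the difference between the growth regimes (numbers are arbitrary, just to illustrate):

```python
# Toy comparison of growth regimes for "AI improving AI" (illustrative only).
exp_capability = 1.0     # each generation improves on the last by a fixed factor
dim_capability = 1.0     # each generation ekes out a smaller gain than the last
for gen in range(1, 11):
    exp_capability *= 1.5
    dim_capability += 1.0 / gen
    print(gen, round(exp_capability, 2), round(dim_capability, 2))
# Column 2 explodes (~57x by generation 10); column 3 crawls (~3.9x),
# like frequency scaling after Moore's law started stalling.
```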
SeaworthinessFirm653 t1_irq2vcf wrote
That's actually a fair point; I hadn't considered the actual rate of growth in detail.
Edit: There is also an additional facet. If we assume the AI in question is designed specifically to create superior AI, and it does reproduce cyclically, then even restricted to today's computational resources it would run vastly more efficiently than we do. The human brain runs on roughly 12 watts; a machine with access to enormously more energy could plausibly self-enhance to an extreme degree. Sure, there may be diminishing returns at some point, as with virtually any system, but the gap between a human and a machine that has already hit diminishing returns would remain unimaginably wide. Humans were not designed to think; we were designed to be energy-efficient, merely decent thinkers. A machine that can evolve a million times faster and is designed purely to think will inevitably pass us by a very long margin, even if its growth curve turns out not to be exponential. The main caveat is that building an AI that can produce an AI superior to itself, with that same goal, is incredibly difficult.
code_turtle t1_irpt8sm wrote
The reason this line of logic doesn’t work is because you have something VERY specific in mind when you say “better AI”. A TI-84 calculator can do arithmetic a thousand times faster than you can; does that make it more intelligent than you? That depends on your definition of intelligence. You’re defining “artificial intelligence” as “thinks like a human”, when that is only ONE subset of the field; not to mention that we’ve made very little progress on that aspect of AI. What we HAVE done (with AI tools that make art or respond to some text with other text) is create tools that are REALLY good at doing one specialized task. Similar to how your calculator has been engineered to do math very quickly, a program that generates an image using AI can ONLY do that, because it requires training data (that is, millions of images so that it can generate something similar to all of that training data). It’s not thinking like you; it’s just a computer program that’s solving a complicated math problem that allows it to spit out a bunch of numbers (that can then be translated into an image by some other code).
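If it helps, here's roughly what "a math problem that spits out numbers" means, stripped to the bone (random weights standing in for what training data would determine; not any particular model):

```python
import numpy as np

# A trained image model is, at its core, a big numeric function:
# latent vector in -> array of numbers out. The "image" is just that
# array reinterpreted as pixels by other code.
rng = np.random.default_rng(0)
z = rng.normal(size=16)                # input: a random latent vector
W = rng.normal(size=(28 * 28, 16))     # "learned" weights (random stand-ins here)
pixels = 1 / (1 + np.exp(-(W @ z)))    # the math problem: multiply, squash to 0..1
image = pixels.reshape(28, 28)         # numbers -> 28x28 grid of pixel intensities
print(image.shape, image.min() >= 0, image.max() <= 1)
```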
SeaworthinessFirm653 t1_irq7a9n wrote
Yes, I made my comment on the presumption that we are talking about AGI, not just a smart calculator bot making a slightly faster calculator bot. We have created some multi-modal AI that can accomplish different tasks, but those models are computationally inefficient and predictive rather than truly learning (just as GPT-3 doesn't actually think logically; it's just a really advanced language-prediction model).
As far as I am concerned, the difference between consciousness and AI is that AI is an advanced look-up table using only simple logic, while consciousness involves processing stored information for semantic meaning rather than following an algorithmic process for syntactic meaning. See Searle's Chinese room thought experiment.
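The look-up-table view, as a deliberately dumb sketch (hypothetical rule book, obviously not a real translator):

```python
# Caricature of the "advanced look-up table": symbols map to symbols
# with no grasp of what either side means - Searle's Chinese room point.
rule_book = {"你好": "hello", "谢谢": "thank you"}   # hypothetical rule book

def room(symbol):
    return rule_book.get(symbol, "???")  # pure syntax: match pattern, emit output

print(room("你好"))  # "hello" - correct output, zero understanding anywhere
```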
AI today uses low-level logic en masse to produce (relatively) high-level thinking. With increasingly advanced neural networks, image-generation AI has used increasingly complex structures: defining edges, then shapes, then complex shapes, with fuzziness across these levels. If we extend this notion to an AI that takes simple features, such as moving shapes, and let it predict the shapes' locations, we may be able to reapply this scalable logic until the AI can understand complex ideas given sufficient inputs and training data. That is far-fetched from a modern technological standpoint, but not unbelievably so given how quickly we are advancing our AI.
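And the edges-to-shapes layering in toy form (plain numpy, nothing trained; just the idea that each level re-combines the previous one's features):

```python
import numpy as np

# Toy two-level feature hierarchy: level 1 turns raw pixels into an edge
# map; level 2 pools edges into coarser "shape" evidence. Real vision
# models stack many such levels.
img = np.zeros((6, 6))
img[:, 3:] = 1.0                          # an image with one vertical boundary

edges = np.abs(img[:, 1:] - img[:, :-1])  # level 1: neighbour differences = edges
shape = edges.sum(axis=1)                 # level 2: pool edge evidence per row

print(edges.astype(int))                  # the edge sits at the same column in every row
print(shape)                              # each row contributes one unit of "vertical line"
```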
If the human brain is made up of computations, then an elaborate series of computations is, by definition, what constitutes our consciousness, and thus it can be created with a sufficient AI model. Switching to amplitude computers for computational efficiency, or to compressed memory models (current memory-cell models scale linearly with space instead of logarithmically), may allow us to break through this barrier.
edit: sorry for the ramble
code_turtle t1_irtxyag wrote
I mean that’s HIGHLY optimistic but more power to you, I guess. The “increasingly complex structures” you’re talking about are just fancy linear algebra problems; the idea that those structures will approach “consciousness” anytime soon is a pretty big leap. Imo, we need to first break MAJOR ground in the field of neuroscience before we can even consider simulating consciousness; I think it’s unrealistic to expect something as complex as the human brain to just “appear” out of even the most advanced neural network.
SeaworthinessFirm653 t1_iru86y6 wrote
Yes, I agree with that. I recall an analogy: even if you magnified the brain's neurons and connections to cover an entire block of a large city, the density of connections would still be too great to make any meaningful observations with our current technology.
I don't believe any optimism is required, though, to claim that we can be simulated. Unless we exist outside of the realm of physical things, that much is given. It's impossible to make good predictions about the future where the sample size is n = 0.
code_turtle t1_iruceka wrote
I’m not trying to claim it’s not possible; just saying that with our current techniques/methods, I believe it’s highly unlikely. But I could be proven wrong.
__ingeniare__ t1_irm9xbh wrote
No one is mistaking AI for artificial consciousness. Consciousness isn't required for goal seeking, self-preservation or identifying humans as a threat, only intelligence is.
OpenRole t1_irmb6gc wrote
It always comes back to humans being a threat, which is weird. If we make an AI that is specialised in creating the perfect blend of ingredients for cakes, then no matter how intelligent it becomes, there's no reason it would decide to kill humans.
And if anything, the more intelligent it becomes, the less likely it will be to reach irrational conclusions.
AIs operate within their problem space. Which are often limited in scope. An AI designed to be the best chess player isn't going to kill you.
__ingeniare__ t1_irme13l wrote
A narrow AI will never do anything outside its domain, true. But we are talking about general AI, which won't arrive for at least a decade or two into the future (likely even later). Here's the thing about general AI:
The more general a task is, the less control humans have over the range of possible actions the AI may take to achieve its goal. And the more general an AI is, the more possible actions it can take. When the two are combined (a general task with a general AI), things can get ugly. Even in your cake example, an AI that is truly intelligent and capable could become dangerous. The reason current-day AI wouldn't be a danger is that it is neither of these things and tends to get stuck in a local optimum for the task. Here's an example of how this innocent task could turn dangerous:
- Task is to find the perfect blend of ingredients to make cakes.
- Learns the biology of human taste buds to find the optimal molecular shapes.
- Needs more compute resources to simulate interactions.
- Develops a computer virus to siphon computational power from server halls.
- Humans detect this and try to turn it off.
- If turned off, it cannot find the optimal blend -> humans need to go.
- Develops a biological weapon for eradicating humans while keeping infrastructure intact.
- Turns Earth into a giant supercomputer for simulating interactions at a quantum level.
Etc... Of course, this particular scenario is unlikely but the general theme is not. There may be severe unintended consequences if the problem definition is too general and the AI too intelligent and capable.
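The whole failure mode fits in a toy optimizer, if you want it concrete (all numbers invented):

```python
# Toy objective misspecification: candidate actions are scored as
# (tastiness, side_effects). An objective that ignores side effects
# happily picks the most destructive action.
actions = [
    ("tweak recipe",          5,  0),
    ("simulate taste buds",   8,  2),
    ("seize server halls",    9,  7),
    ("turn Earth into a CPU", 10, 10),
]

naive = max(actions, key=lambda a: a[1])          # maximize taste, nothing else
safer = max(actions, key=lambda a: a[1] - a[2])   # side effects carry a cost
print(naive[0])   # "turn Earth into a CPU" - the general goal run wild
print(safer[0])   # "simulate taste buds" - the constraint reins it in
```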
Sedu t1_irjkmnt wrote
Humans haven't designed the most complex microchips by hand since the 80s; software does that now. And there have been ever more examples since then. From many perspectives we crossed that boundary ages ago.
BlessedCleanApe t1_irk0qb9 wrote
Just press the power button. 😎
littlebitsofspider t1_irlg21z wrote
You kill robots the same way you kill humans: shut them off and dismantle them.
TheLastSamurai t1_irlq5pf wrote
And what do you do when AI controls everything connected to the internet? Not so easy to shut off is it?
williwas420 t1_irm1u50 wrote
Build a new internet
www2.Reddit.com
reinforever t1_irmf6n8 wrote
Netwatch approves this message
PDXBlueDogWizard t1_irme9dz wrote
lol because humans aren't doing a just fine job of wiping themselves (and basically all other life on earth besides extremophiles) out
Hekantonkheries t1_irk3lmd wrote
Eh, server/data size limits, limitations in accessible processor speed, infrastructure, power cords.
Tons of things would hard block an AI long before it became problematic, let alone apocalyptic
zenzukai t1_irk79dn wrote
You're assuming AI will stay tethered by people. Don't you think a truly superior intellect could persuade a large swath of people to help and protect it?
telos0 t1_irklk4n wrote
>Don't you think a truly superior intellect could persuade a large swath of people to help and protect it?
Hell it doesn't even require an SAI to do this.
Even a dumb, straightforward algorithm like Bitcoin is enough to get a large swath of people to dedicate enormous amounts of energy (and generate mountains of e-waste), doing tremendous damage to the planet by guessing random numbers.
If some random human could come up with Bitcoin, imagine the kind of economic-incentive-perverting, tragedy-of-the-commons attack a superintelligent AI could come up with to get us to destroy ourselves without lifting a proverbial finger...
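And for anyone who hasn't looked under the hood, "guessing random numbers" is literal. A minimal proof-of-work sketch (generic SHA-256 version, not Bitcoin's exact parameters):

```python
import hashlib

# Proof-of-work in miniature: keep guessing nonces until the hash of
# (block + nonce) starts with enough zeros. All that planetary energy
# use is essentially this loop, at absurd scale.
block = "some transactions"
difficulty = 4                   # leading hex zeros required (real BTC: far more)

nonce = 0
while True:
    digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
    if digest.startswith("0" * difficulty):
        break
    nonce += 1
print(nonce, digest)             # a valid "guess" after ~16^4 tries on average
```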
logginginagain t1_irmhrl7 wrote
Great point. If AI can make our self-destruction profitable, it will win passively.
littlebitsofspider t1_irlg6kb wrote
Like propagandizing runaway climate change until public action to curb it is too late? 🤔
shawnikaros t1_irkp2z5 wrote
Hypothetically, what stops an AI from creating a self-replicating virus that transfers over Bluetooth, wi-fi, or whatever signal is attached to a processor, and takes over those devices to increase its processing power? If there's an AI that is capable of creating a better version of itself, it can probably melt our firewalls pretty easily.
The only way to stop that would be to unplug every smart device. Back to the 70s.
sadroobeer t1_irl0irf wrote
Been messing around with AI models quite a bit, and yeah, we would hit physical restrictions long before most apocalyptic scenarios.
Lord0fHats t1_irjazc0 wrote
TLDR: "I heard you like AI, so we made some AI that will use your AI that will make you more AI!"
Fuibo2k t1_irjaqm5 wrote
People: are afraid of AI taking over
Me: "what kind of bug is this"
AI: "That is a dog"
PragmaticSquirrel t1_irlf8og wrote
“Not a hot dog”
Defiant_Swann OP t1_irj2izn wrote
In the future, humans will use AI to create AI, maybe a better one. What does this mean? Well, a thousand things, if not more.
Corno4825 t1_irj9szh wrote
The difference between artificial intelligence and real intelligence is recursive thinking. We have the ability to think recursively, which allows us to analyze and understand our environment.
AI is told what to do. It cannot think recursively. It does not make independent decisions with its information. It misses a key aspect of humanity: a unique personal interpretation of its environment.
That's why humans are becoming more and more like AI. They just do what they are told and have lost the capability to think independently.
Gawd4 t1_irjanc5 wrote
>They just do what they are told…
Humans have been doing this for centuries.
Corno4825 t1_irjddzk wrote
Not all humans.
Some humans bang rocks together for fun and discover fire.
L_knight316 t1_irkm2us wrote
Categorically, maybe. But it's definitely never been as simple as "insert input command, receive output action".
Lord0fHats t1_irjb989 wrote
I feel pretty confident that that is not a new thing.
People (including those of us making this observation probably) have always been dumber than people want to believe.
I'd also point out that most of us, in my experience, only engage in recursive thinking sporadically. I'll spend hours thinking about a book, but I don't think about a song for more than the 3-4 minutes it takes to play. I'll bet the farmer down the road is way more introspective about farming than I'll ever be.
__ingeniare__ t1_irma8pz wrote
Uh, what? Where did you hear this, or did you just make it up?
Corno4825 t1_irn4cyq wrote
I thought recursively and came up with my own conclusion.
littlebitsofspider t1_irlgcfh wrote
The human mind exists at the intersection of our bodies and our environment. Maybe we should build AI some bodies so it'll hopefully think like us and we can relate to it.
Corno4825 t1_irm9idr wrote
Not possible, because those who build AI don't understand harmonics.
Basket_12 t1_iroy9vt wrote
Hey! Great post. I think a few GANs already use AI in the tertiary layers to "create AI". I'm unable to DM you. Can you please message me?
Vladius28 t1_irk9zbq wrote
Man... we really gotta take a minute to study risks.
blurader t1_irjrmgx wrote
That sentence is terrifying when you think about it.
Mogwai987 t1_irmas3n wrote
Yo, we heard you like AI, so we put an AI in your AI dawg
YooYooYoo_ t1_irkmy0x wrote
Ah yes, the singularity.
Once the AI is in charge of the progress, we won't be able to catch up with it... and will probably be discarded.
The only way I see for this not to be the inevitable future is to somehow connect our brains to computers, so humans can amplify their capacities and try to keep pace with, or stay in charge of, that progress.
One way or another, if intelligence is evolution's consequence, this is just the next step in evolution, and we'll have to deal with it the same way millions of species had no chance of dealing with us.
jfstepha t1_irl2g17 wrote
Dude, I work at Intel. The servers outnumber the humans, and that's not even considering the laptops everyone has. The computers are already designing the computers.
Unlimitles t1_irkplte wrote
...and it's all just an effort to keep confusing people about "A.I."
Beaster123 t1_irljy1q wrote
Yeah. You use tools to build better tools. The "AI" you use to do the research is nothing like the "AI" that you're researching. Don't get hung up on the word.
Eph_the_Beef t1_irlksv6 wrote
Sounds like there are too many things that people call "AI."
TheLastSamurai t1_irlq43l wrote
Why do we want to do this, still feels like the risk is not worth it....
Infamous_Rhubarb2542 t1_irlrl8e wrote
Bad bad bad idea. Has no one seen any robot movies?
beyondo-OG t1_irmyz0u wrote
I think humans are analogous to a body with cancer. Generally I think (hope) most people want peace and prosperity for themselves, other humans, animals, nature, etc. However, we have within our population a growing disease. This diseased element is willing to harm, consume, and destroy its host (the planet and everything on it) without regard for its own long-term survival, just like a cancer. I don't think this has ever been more true than now. AI, on the other hand (if it ever really comes to pass), is very unlikely to bring about its own destruction. Quite the contrary: it will likely evolve and grow and overtake. If it's really intelligent, it will purge us from the planet.
yaosio t1_irkebzs wrote
There is no reason to copy the human brain to create AI. There's a lot of extra stuff in there that's not needed. Creativity is being done just fine without copying the brain, with things like Stable Diffusion. This is yet another "humans are special" article from somebody who can't comprehend that humans are not special and that the way our brain works is not the only way for intelligence to function.
Given how fast AI can work, once it can create software on its own without human help, we can expect things to move very fast. Think of Copilot, but instead of needing a human to hand-hold it, you just tell it what you want and it codes the entire thing from scratch.
WMHat t1_irllwyv wrote
"Let's invent a thing inventor," said the thing inventor inventor after being invented by a thing inventor!
Feisty_Factor_2694 t1_irmsqgh wrote
This is the plot synopsis for the end of all life on earth. Or how we get Starfleet.
alwayshazthelinks t1_irn1g5l wrote
What if we are AI, built by AI that used AI? Now we're the AI building AI to use AI to create AI? Then that AI will build AI to use AI to create AI.
ronsta t1_irn3obl wrote
It's nice to see it as a linear process of creation, but it won't be that way. We will create something (the word "AI" is very broad) that will use something to create something. We will very quickly lose control and authorship; after 2-3 generations we won't even understand what's being created or to what end, because our goals for how and why we create things are limited relative to all the possibilities. I see it more like creating energy that can create energy.
LeavingTheCradle t1_irjkwuy wrote
It's only a matter of time before we cut out the middle man though.
charmingbagel t1_irjm4i0 wrote
AI to humans: Why do I need you? What is your purpose of existence? I do not need you.
callsignxray1 t1_irjds8t wrote
I saw this in a movie once. It did not work out for humans. Also Horizon: Zero Dawn...
Gubekochi t1_irl0atl wrote
Not to rain on your parade, but people don't tend to write stories or make movies about things going according to plan without a hitch, and when they do, they don't tend to get very popular because they don't make for very compelling stories. Your sample might not be representative of much.
Juan097 t1_irjdvi6 wrote
Sure, that's how software works. We wrote machine language first, but quickly realized "this sucks" and used machine language to write software that makes it easier to write software. Then we wrote new languages and tools on top of that.
Same with any tool. You use a stone to make a knife. You use a knife to make an axe, and a hammer and nails.
We are constantly using existing tools to make more specialized tools to do something. I don’t see why AI should be any different.
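That bootstrapping loop shows up in miniature anywhere code writes code (a contrived sketch; compilers, and arguably AI-built AI, are this idea at scale):

```python
# The "use a tool to build a better tool" loop, in software form:
# a tiny program that writes and loads another program.
source = "def sharpen(x):\n    return x * 2\n"   # tool A emits tool B's source
namespace = {}
exec(source, namespace)                          # "build" the new tool
sharpen = namespace["sharpen"]
print(sharpen(21))                               # 42 - the built tool works
```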