Submitted by kdun19ham t3_111jahr in singularity
FusionRocketsPlease t1_j8ezpqa wrote
Why does everyone assume that AGI is an agent and not just passive software like any other?
Darustc4 t1_j8f9oi7 wrote
Optimality is the tiger, and agents are its teeth.
red75prime t1_j8hf7aw wrote
What is conjectured: nanobots eating everything.
What is happening: "Would... you... be... OK... to... get... the... answer... in... the... next... decade...?", as experimental processes overwhelm the available computational capacity and attempts to create a botnet fail because the network is monitored by similarly intelligent systems.
Sure, survival of relatively optimal processes under intelligent selection can give rise to agents, but those agents will be fairly limited by computational capacity in non-monitored environments (mostly private computers) and will be actively hunted and shut down in monitored environments (data centers).
el_chaquiste t1_j8fcjke wrote
The parent comment is not as bad as the downvotes make it seem.
Do we have evidence of emergent agency behaviors?
So far, all LLMs and image generators do is auto-complete from a prompt. Sometimes with funny or crazy responses, but nothing implying agency (the ability to start chains of actions in the world of its own volition).
I get that some of these systems will soon start being self-driven or automated to accomplish goals over time, not just waiting to be prompted, using an internal programmed agenda and better sensors. An existing example is Tesla's FSD and other self-driving systems, and even they are passive machines with respect to their voyages and use: they don't decide where to go, they just take you there.
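To make the passive-vs-agentic distinction concrete, here is a minimal Python sketch. `complete` is a hypothetical stub standing in for any LLM completion call, not a real API; the point is that the same passive model only becomes "agentic" when wrapped in a loop that supplies a goal and executes actions.

```python
# Minimal sketch: the same "passive" model, used two ways.
# `complete` is a hypothetical stub, not a real LLM API.

def complete(prompt: str) -> str:
    """Hypothetical LLM completion call (stubbed for illustration)."""
    return f"(completion for: {prompt!r})"

# Passive use: one prompt in, one completion out.
# Nothing happens unless someone asks.
print(complete("Summarize this article: ..."))

# Agentic use: the loop, not the model, supplies the "volition".
def run_agent(goal: str, max_steps: int = 3) -> None:
    observation = "initial state"
    for step in range(max_steps):
        action = complete(f"Goal: {goal}\nObservation: {observation}\nNext action:")
        # Execute the action in the world (stubbed) and observe the result.
        observation = f"result of {action!r}"
        print(f"step {step}: {action}")

run_agent("plan a trip")
```

On this view, "agency" is a property of the surrounding control loop rather than of the model itself, which is essentially the distinction the comment above is drawing.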
MysteryInc152 t1_j8fzf1i wrote
Even humans don't start a chain of action without some input. Interaction is not the only form of input for us: what you hear, what you see, what you touch and feel, what you smell are all forms of input that inspire action in us. How would a person behave if he were stripped of all input? I suspect not far off from how LLMs currently are. Anyway, streams of input are a fairly non-trivial matter, especially once LLMs are grounded in the physical world.
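A rough sketch of this point, assuming hypothetical `read_sensor` and `complete` stubs: a continuous stream of sensor readings can play the role for a model that sight, sound, and touch play for us, so the model never needs to act "unprompted"; the inputs simply keep arriving.

```python
import time

def read_sensor() -> str:
    """Hypothetical stand-in for a camera/microphone/touch reading."""
    return "motion detected near the door"

def complete(prompt: str) -> str:
    """Hypothetical LLM call (stubbed for illustration)."""
    return f"(response to: {prompt!r})"

# Sense-act loop: each sensor reading is the "prompt", so every action
# is input-driven, just as human action is.
def sense_act_loop(ticks: int = 3) -> None:
    for _ in range(ticks):
        observation = read_sensor()
        action = complete(f"Observed: {observation}. What do you do?")
        print(action)  # acting on the world is stubbed as printing
        time.sleep(0.1)

sense_act_loop()
```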
Ribak145 t1_j8f5u4m wrote
... because we can read the papers written by the scientists currently doing the research?
Most of the stuff is open source and accessible online; it's not a mystery.
DukkyDrake t1_j8fvyr5 wrote
A lot of people do make that assumption, but a non-agent AGI doesn't necessarily mean you avoid all of the dangers. Even the CAIS (Comprehensive AI Services) model of AGI doesn't negate all alignment concerns, though I think it's the safest approach and is mostly in hand.
Here are some more informed comments regarding alignment concerns and CAIS, which is what I think we'll end up with by default by the turn of the decade.
arisalexis t1_j8f09nu wrote
worst comment award
FusionRocketsPlease t1_j8f0e7l wrote
?
MrEloi t1_j8f12wx wrote
You. Don't. Get. It.
Baturinsky t1_j8iw7y7 wrote
We assume that at least one AGI will be an agent. And that may be enough for it to go gray goo.