Submitted by JAREDSAVAGE t3_126qyqo in Futurology
As the debate around AI swirls and we come to accept that non-human intelligence is inevitable, how do we think it will behave? Will a moral core emerge?
There will obviously be a seed that leads and defines it: AI designed to exploit and harm, to maintain class divisions, and worse. There will also be AI designed to advance society, equality, and access to education, healthcare, and more.
What I’m wondering is, if a neutral alignment were possible for the seed, what would a super-intelligent AI trend towards? We see that education tends to pull people towards more left-leaning and socialist values. Would a similar pattern emerge? If you designed an exploitative AI and left it to run for infinite hours, would it eventually stumble into some intrinsic morality?
I realize we’re not yet anywhere near the point of GAI, but emergent behaviours are starting to crop up, and I think a sufficiently complex LLM and a GAI are going to be impossible for humans to tell apart in the very near future.
What kind of “person” will AI be? Will it become an extension of the traces of our monkey society, or something totally different?