Submitted by Dramatic-Economy3399 t3_106oj5l in singularity
AndromedaAnimated t1_j3j36ab wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
Yes. I suggest either that, or that we allow AGI to learn ethics from all the information available to humanity plus reasoning.
turnip_burrito t1_j3j4a0z wrote
I do advocate for the second option:
> we allow AGI to learn ethics from all the information available to humanity plus reasoning.
Which is part of the process I'd want an AI to use to learn the correct morals. But I don't think an AI can learn what I would call "good" morals from nothing. It seems to me it will need to be "seeded" with a set of basic preferences or behaviors (like empathy, a tendency to mimic role models, or other inclinations) before it can develop morals or a more advanced code of ethics. In truth these seeds would be totally arbitrary and up to the developers/owners.
I don't think I would want an AI that lacks empathy or is a control freak, so developing these traits in-house before releasing access to the public seems to me the best option. While it's being developed, it can still learn from the recorded media we have, and in real time in controlled settings.
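As a toy illustration of that "seeding" idea, here is a minimal Python sketch: fixed, developer-chosen preferences are combined with preferences learned later from feedback. The `SeededLearner` class, the trait names, and the weights are all hypothetical, invented for illustration; this is not a real alignment method.

```python
# Toy sketch of "seeding": the agent starts with fixed innate preferences
# (the seed) and layers learned preferences on top of them. All names and
# weights are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class SeededLearner:
    # Innate, developer-chosen preferences -- fixed before any learning.
    seed_values: dict = field(default_factory=lambda: {
        "empathy": 1.0,   # weight on others' wellbeing
        "mimicry": 0.5,   # weight on matching role-model behavior
    })
    # Preferences acquired from data; empty until training.
    learned_values: dict = field(default_factory=dict)

    def score(self, action_features: dict) -> float:
        """Combine innate and learned preferences into one action score."""
        total = 0.0
        for trait, weight in self.seed_values.items():
            total += weight * action_features.get(trait, 0.0)
        for trait, weight in self.learned_values.items():
            total += weight * action_features.get(trait, 0.0)
        return total

    def learn(self, trait: str, feedback: float, lr: float = 0.1):
        """Nudge a learned preference toward observed feedback."""
        current = self.learned_values.get(trait, 0.0)
        self.learned_values[trait] = current + lr * (feedback - current)


agent = SeededLearner()
# Before any training, empathy already shapes choices via the seed:
print(agent.score({"empathy": 0.8}))                  # 0.8
# Feedback in controlled settings gradually adds learned preferences:
agent.learn("honesty", feedback=1.0)
print(agent.score({"empathy": 0.8, "honesty": 1.0}))  # 0.8 + 0.1 = 0.9
```

The design choice mirrors the comment above: the seed terms never change and act as the arbitrary starting inclinations, while everything in `learned_values` comes from experience.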
AndromedaAnimated t1_j3j5qtt wrote
Here I agree with you.