Submitted by Nalmyth t3_100soau in singularity
Ortus14 t1_j2luhse wrote
Reply to comment by Nalmyth in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
From their website: "Our approach to aligning AGI is empirical and iterative. We are improving our AI systems' ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems."
https://openai.com/blog/our-approach-to-alignment-research/
ChatGPT has some alignment in avoiding racist and sexist behavior, as well as many other human morals. They have to use some AI to help with that alignment, because there's no way they could manually teach it every possible combination of words that is racist or sexist.
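Just as a back-of-the-envelope illustration of why a manual blocklist can't work (the vocabulary size and sentence length here are made-up round numbers, not anything from OpenAI):

```python
# Toy arithmetic: how many distinct 20-word sequences exist over a
# 50,000-word vocabulary? (Both numbers are assumptions for illustration.)
vocab_size = 50_000
sentence_len = 20
combinations = vocab_size ** sentence_len

# ~9.5e93 possible sequences -- vastly more than anyone could ever
# manually review, which is why a learned model has to generalize
# instead of matching against a hand-written list.
print(f"{combinations:.2e}")
```

Even if almost all of those sequences are gibberish, the harmful subset is still far too large to enumerate by hand, so the filter has to be another model trained to generalize from examples.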