Submitted by kdun19ham t3_111jahr in singularity
DukkyDrake t1_j8fu9jc wrote
You would see the stark difference if you understood what alignment really refers to.
Altman is a VC; he is in the business of building businesses. He is simply hoping for the best, expecting they'll fix the dangers along the way. That's what you need to do to make money.
Yudkowsky only cares about fixing or avoiding the dangers; he doesn't make allowances for the best interests of the balance sheet. He likely believes the failure modes in advanced AI aren't fixable.
Who here would stop trying to develop AGI, and the trillions of dollars it could bring, just because there is a chance an AGI agent would exterminate the human race? The core value of most cultures is essentially "get rich or die trying".
vivehelpme t1_j8i04sx wrote
What alignment really seems to refer to is a petrifying fear of the unknown dialed up to 111 and projected onto anything that a marketing department can label AI, resulting in concerns of mythological proportions being liberally sprinkled over everything new that appears in the field.
Thankfully, these people shaking in the dark have little say in industry, and some exposure therapy will do them all good.