Frumpagumpus t1_jecuwak wrote
Reply to comment by Queue_Bit in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
It is my understanding that the pictures generated by early DALL-E were often quite jarring to view, mostly because of its confusion about how to model things and its habit of sticking things in the wrong places. As it was trained more and got more parameters, it kind of naturally got better at getting along with human sensibilities, so to speak.
It can be hard to distinguish training from alignment, and you definitely have to train these models to even make them smart in the first place.
I think alignment is kind of dangerous because of unintended consequences, and because if you try to align a model in one direction, that makes it a whole lot easier to flip it and send it the opposite way.
Mostly I would rather trust in the beneficence of the universe of possibilities than in a bunch of possibly ill-conceived rules stamped into a mind by people who don't really know what they are doing.
Though maybe some such stampings are obvious and good. I'm mostly a script kiddie, even though I know some differential equations and linear algebra lol, what do I know XD