BranchLatter4294 t1_je5ajef wrote
The thing is, you have to develop it before you can determine what the impact might be. It makes no sense to halt development until you determine the impact when you can't test its capabilities until it has been trained.
Sanity_LARP t1_je5hqul wrote
That's like saying you can't figure out how to survive jumping off a cliff until you jump. The solution ends up being: don't jump in the first place, because otherwise you slam into the ground.
BranchLatter4294 t1_je5laki wrote
Not a great analogy. Simply training an AI model in the lab is of no danger to anyone. A better analogy would be banning the measurement of the height of cliffs because tall ones may be dangerous.
Sanity_LARP t1_je5tn8j wrote
The dangers of AI, though, aren't about what it can do in isolation. The problems happen at scale, with constant input and unpredictable results. The only guarantee is that there will be unforeseeable problems that can only be identified once it's too late.