2D_VR t1_iu7u6j1 wrote
We know how what we build works, to an extent. For instance, a chatbot only responds once queried, and only replies with "the first thing it thinks of". We need to allow for repeated thought and non-selection, as well as a recursive structure. The depth-of-neurons problem has nearly been solved (see Stable Diffusion), so it should soon be an integration problem. Basically, I think we'll know when we've made one: we'll be able to ask it to explain something to us and have it display, on a screen, the images it's thinking of while it talks. The fact that we will be able to see its thoughts means we don't have to rely on a conversation prompt alone to tell whether it's human-level intelligent. It shouldn't be a big surprise to the people building it.
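The "repeated thought and non-selection" idea could be sketched roughly like this. Everything here is a hypothetical stand-in, not any real chatbot's API: `generate_candidates` fakes a model sampler and `score` fakes a learned confidence critic. The point is just the control flow: sample several candidate replies, and return nothing at all if none clears a confidence bar.

```python
import random

def generate_candidates(prompt, n=5):
    # Hypothetical stand-in for a model sampler: a real system would
    # draw n different completions from a language model.
    return [f"candidate reply {i} to {prompt!r}" for i in range(n)]

def score(candidate):
    # Hypothetical stand-in for a confidence estimate; a real system
    # might use a learned critic or the model's own likelihoods.
    return random.random()

def deliberate(prompt, n=5, threshold=0.5):
    """Sample several candidates ("repeated thought"), then return the
    best one only if it clears the bar; otherwise return None
    ("non-selection": the system chooses not to reply)."""
    scored = [(score(c), c) for c in generate_candidates(prompt, n)]
    best_score, best = max(scored)
    return best if best_score >= threshold else None
```

A recursive structure would be the natural next step: feed `deliberate`'s output back in as a new prompt so the system can think about its own thoughts, rather than stopping after one pass.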