2D_VR t1_iu7u6j1 wrote

We know how what we build works, to an extent. For instance, a chatbot only responds when queried and only replies with "the first thing it thinks of." We need to allow for repeated thought and non-selection, as well as a recursive structure. The depth-of-neurons problem has nearly been solved (see Stable Diffusion), so it should soon be an integration problem. Basically, I think we'll know when we've made one. We'll be able to ask it to explain something to us and have it display the images it's thinking of on a screen while it talks. The fact that we will be able to see its thoughts means we won't have to rely on a conversation prompt alone to tell whether it's human-level intelligent. It shouldn't be a big surprise to the people building it.

1

2D_VR t1_itseri8 wrote

Something to consider about "free will" is that we can imagine multiple futures and select the one we think is best, whereas a chatbot receives a query, responds with the first thing it thinks of, and then stops thinking. It's as if we run the same query continuously until some time limit is reached, making slight changes to internally determined weights, and then use some rating system to select the highest-scored future plan. This is a selection algorithm that still works under determinism.
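The loop described here is basically best-of-N selection. A minimal sketch, where `generate_candidate` and `score` are hypothetical stand-ins for a model's sampler and a rating system (everything here is deterministic given the seeds, which is the point):

```python
import random

def generate_candidate(query, seed):
    # Hypothetical stand-in for one "thought": the same query plus a
    # slightly different internal state yields a different candidate plan.
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]  # a toy "future plan"

def score(plan):
    # Hypothetical rating system: higher means a better-judged future.
    return sum(plan)

def deliberate(query, time_budget=10):
    # Run the same query repeatedly with perturbed internal state until
    # the time budget runs out, then select the highest-scored candidate.
    # Fully deterministic, yet it "imagines multiple futures".
    candidates = [generate_candidate(query, s) for s in range(time_budget)]
    return max(candidates, key=score)

best_plan = deliberate("how do I get home?")
```

Non-selection falls out naturally: add a score threshold and return nothing when no candidate clears it.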

6