green_meklar t1_j5s4i0n wrote

>The evidence of disparate regions serving specific functions is indisputable.

Oh, of course they exist, the human brain definitely has components for handling specific sensory inputs and motor skills. I'm just saying that you don't get intelligence by only plugging those things together.

>I think he points out that the training done for each model could be employed on a common model

How would that work? I was under the impression that converting a trained NN into a different format was something we hadn't really figured out how to do yet.


JavaMochaNeuroCam t1_j64e3yg wrote

Alan was really psyched about GATO (600+ tasks/domains)

I think it's relatively straightforward to bind experts to a general cognitive model.

Basically, in an MoE (Mixture of Experts) setup, you'd dual-train: the domain-specific model is trained simultaneously with the cortex (language) model. That is, a pre-trained image-recognition model can describe an image (e.g., a cat) in text to an LLM, but also bind that description to a vector representing the neural state that captures the image's internal representation.

So, you're just binding the language to the domain-specific representations.
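A minimal sketch of that binding idea, with all names and dimensions made up for illustration: a frozen image "expert" produces a vector, and a small trainable projection maps that vector into the LLM's token-embedding space so it can sit in the sequence like an extra token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, purely for illustration.
IMG_DIM = 512   # the image expert's output vector size
LLM_DIM = 768   # the language model's token-embedding size

# Frozen pre-trained image expert (stand-in: a fixed random linear map
# over a flattened 3x32x32 image, followed by a nonlinearity).
W_expert = rng.standard_normal((3 * 32 * 32, IMG_DIM)) * 0.01

def image_expert(image: np.ndarray) -> np.ndarray:
    """Return the expert's internal representation (its 'neural state')."""
    return np.tanh(image.ravel() @ W_expert)

# The small trainable 'binding' layer: projects the expert's state
# into the LLM's embedding space.
W_bind = rng.standard_normal((IMG_DIM, LLM_DIM)) * 0.01

def bind(image: np.ndarray, text_token_embeddings: np.ndarray) -> np.ndarray:
    """Prepend the projected image vector to the text token embeddings,
    giving the LLM one extra 'visual token' to attend over."""
    visual_token = image_expert(image) @ W_bind          # shape (LLM_DIM,)
    return np.vstack([visual_token, text_token_embeddings])

# Toy usage: a fake 32x32 RGB image plus a 4-token caption embedding.
image = rng.standard_normal((3, 32, 32))
caption = rng.standard_normal((4, LLM_DIM))
sequence = bind(image, caption)
print(sequence.shape)  # (5, 768): 1 visual token + 4 text tokens
```

Only `W_bind` would be trained against the paired image/text data; the expert stays frozen, which is roughly how adapter-style multimodal models graft a vision encoder onto a language model.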

Somehow, the hippocampus, thalamus, and claustrum are involved in that in humans, if I'm not mistaken.
