
SoylentRox t1_j8cblun wrote

Theoretically it should query a large number of models, assign each model's answer a "confidence" score reflecting how likely that answer is to be correct, and then return the highest-confidence answer.
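
A minimal sketch of that idea, assuming each model exposes a hypothetical interface that returns an answer plus a confidence in [0, 1] (in practice that score might come from token log-probs, self-consistency sampling, or a separate verifier model):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelResult:
    model_name: str
    answer: str
    confidence: float

def query_ensemble(
    prompt: str,
    models: dict[str, Callable[[str], tuple[str, float]]],
) -> ModelResult:
    """Query every model and return the single most confident answer."""
    results = [
        ModelResult(name, *model(prompt))  # model returns (answer, confidence)
        for name, model in models.items()
    ]
    return max(results, key=lambda r: r.confidence)

# Toy usage with stand-in "models":
models = {
    "model_a": lambda p: ("42", 0.9),
    "model_b": lambda p: ("41", 0.6),
}
print(query_ensemble("What is 6 * 7?", models))  # picks model_a's answer
```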


ReadSeparate t1_j8fb4cr wrote

One can easily imagine a generalist LLM outputting an action token that represents a prompt for a specialized LLM; the prompt gets routed to that specialist, and its response is then formatted and placed back into the generalist's context.
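
A rough sketch of that routing loop, assuming a hypothetical `<call:name>...</call>` action-token format and treating the generalist and the specialists as plain callables:

```python
import re

# Assumed action-token syntax: the generalist emits e.g.
#   <call:math>integrate x^2 dx</call>
# and we route the inner prompt to the named specialist, then splice
# the reply back into the generalist's context for the next pass.
CALL_PATTERN = re.compile(r"<call:(\w+)>(.*?)</call>", re.DOTALL)

def run_with_routing(generalist, specialists, user_prompt, max_hops=5):
    """Generate, route any specialist call, feed the result back, repeat."""
    context = user_prompt
    for _ in range(max_hops):
        output = generalist(context)
        match = CALL_PATTERN.search(output)
        if match is None:
            return output  # no routing requested; this is the final answer
        name, sub_prompt = match.groups()
        reply = specialists[name](sub_prompt)
        # Replace the action token with the specialist's formatted reply
        # so the generalist sees it in context on the next generation.
        context = (output[:match.start()]
                   + f"[{name}: {reply}]"
                   + output[match.end():])
    return context
```

The `max_hops` cap is just a guard against the generalist requesting specialist calls indefinitely.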
