[deleted] t1_j8c34us wrote
SoylentRox t1_j8cblun wrote
Theoretically it should query a large number of models and have a "confidence" based on how likely each model's answer is to be correct, then return the most confident answer.
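A minimal sketch of that idea, assuming each model can report a self-assessed confidence alongside its answer (the `ModelAnswer` type and the stand-in lambda "models" are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    answer: str
    confidence: float  # model's self-reported probability its answer is correct

def query_ensemble(models, prompt: str) -> str:
    """Query every model, then return the answer with the highest confidence."""
    answers = [model(prompt) for model in models]
    best = max(answers, key=lambda a: a.confidence)
    return best.answer

# Hypothetical stand-in "models" for illustration only.
model_a = lambda prompt: ModelAnswer("Paris", 0.95)
model_b = lambda prompt: ModelAnswer("Lyon", 0.40)

print(query_ensemble([model_a, model_b], "Capital of France?"))  # -> Paris
```

In practice the confidence scores would come from the models themselves (e.g. token log-probabilities or a calibrated verifier), which is the hard part this sketch glosses over.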
duboispourlhiver t1_j8cpldd wrote
Artificial expert panel
RabidHexley t1_j8dzxsv wrote
I Am Legion
ReadSeparate t1_j8fb4cr wrote
One can easily imagine a generalist LLM outputting an action token that represents a prompt for a specialized LLM; the prompt gets routed to the specialist, and its response is formatted and placed back into the generalist's context.
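A toy sketch of that routing loop, assuming the generalist marks its requests with an action token like `<ask:domain>…</ask>` (the token syntax, the `SPECIALISTS` table, and the stand-in models are all hypothetical):

```python
import re

# Hypothetical specialist "LLMs" keyed by domain; stand-ins for illustration.
SPECIALISTS = {
    "math": lambda query: "4",
    "code": lambda query: "print('hello')",
}

ACTION = re.compile(r"<ask:(\w+)>(.*?)</ask>")

def generalist(prompt: str) -> str:
    # Stand-in generalist: emits an action token asking the math specialist.
    return "Let me check. <ask:math>2 + 2</ask>"

def run(prompt: str) -> str:
    """Route action tokens in the generalist's output to the matching
    specialist, then splice the specialist's reply back into the text."""
    out = generalist(prompt)

    def dispatch(match: re.Match) -> str:
        domain, query = match.group(1), match.group(2)
        return SPECIALISTS[domain](query)

    return ACTION.sub(dispatch, out)

print(run("What is 2 + 2?"))  # -> Let me check. 4
```

A real system would loop this (the generalist sees the specialist's reply and may emit further actions), which is essentially the tool-use pattern popularized by work like Toolformer.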