Submitted by jaxolingo t3_125qztx in MachineLearning
kromem t1_je6uv46 wrote
Reply to comment by visarga in [D] The best way to train an LLM on company data by jaxolingo
"Moar layers" doesn't only need to apply to the NN.
CoT prompting works by breaking analysis down into smaller steps that each generate their own additional context.
Doing something similar with DB analysis is absolutely possible, such as preemptively summarizing the schema and using that summary as part of retrieval to contextualize the specific fragments you pull in.
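A rough sketch of what that could look like — the `llm(prompt) -> str` callable, table names, and the keyword-overlap scoring are placeholders, not any particular stack; real retrieval would more likely use embeddings:

```python
def summarize_table(llm, table_name: str, ddl: str) -> str:
    """Ask the model for a short summary of a table's purpose, key columns, and joins."""
    prompt = (
        f"Summarize the purpose of table {table_name}, its key columns, "
        f"and how it joins to other tables:\n\n{ddl}"
    )
    return llm(prompt)

def retrieve_summaries(question: str, summaries: dict, k: int = 3) -> list:
    """Pick the k table summaries that share the most terms with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        summaries.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, summaries: dict) -> str:
    """Prepend the most relevant table summaries as context for the question."""
    context = "\n\n".join(
        f"-- {name}\n{text}" for name, text in retrieve_summaries(question, summaries)
    )
    return f"Relevant schema context:\n{context}\n\nQuestion: {question}"
```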
Additionally, having static analysis examples on hand for related tables that are fed in to go from zero-shot to few-shot would go a long way toward reducing some of the issues you highlight.
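And a sketch of the few-shot part — the example library, table names, and SQL here are entirely made up for illustration:

```python
# A small library of worked question -> SQL pairs, keyed by the tables they touch,
# spliced into the prompt when a question involves related tables.
EXAMPLES = [
    {
        "tables": {"orders", "customers"},
        "question": "What was total revenue per customer over the last 30 days?",
        "sql": (
            "SELECT c.name, SUM(o.amount) AS revenue "
            "FROM orders o JOIN customers c ON c.id = o.customer_id "
            "WHERE o.created_at >= now() - interval '30 days' "
            "GROUP BY c.name;"
        ),
    },
]

def few_shot_block(relevant_tables: set, limit: int = 2) -> str:
    """Return example Q/SQL pairs that touch any of the relevant tables."""
    hits = [e for e in EXAMPLES if e["tables"] & relevant_tables]
    return "\n\n".join(f"Q: {e['question']}\nSQL: {e['sql']}" for e in hits[:limit])
```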
Tostino t1_je847jg wrote
Literally just worked through this manually today as a proof of concept, using the LLM to augment the DB schema with comments describing any relevant info or corner cases. For now I'm just manually feeding that augmented schema in as context to my prompts whenever I need to know something related to that set of tables, but it already seems pretty powerful. Automating this is going to be nuts.
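A rough sketch of how that automation might look, assuming Postgres via psycopg2 and a user-supplied `llm(prompt) -> str` callable (not necessarily the setup used above):

```python
# Walk the schema, ask the model to draft a descriptive comment per table,
# and store it with COMMENT ON so it can be pulled back into prompts later.
import psycopg2  # assumed driver; adapt to your database

def annotate_tables(conn, llm, schema: str = "public") -> None:
    cur = conn.cursor()
    cur.execute(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = %s AND table_type = 'BASE TABLE'",
        (schema,),
    )
    for (table,) in cur.fetchall():
        cur.execute(
            "SELECT column_name, data_type FROM information_schema.columns "
            "WHERE table_schema = %s AND table_name = %s",
            (schema, table),
        )
        columns = ", ".join(f"{name} {dtype}" for name, dtype in cur.fetchall())
        comment = llm(
            f"Write a concise comment for table {table} ({columns}), "
            "covering its business meaning and any corner cases."
        )
        # Table names come from information_schema here, so the f-string is
        # tolerable for a sketch; quote identifiers properly in real code.
        cur.execute(f'COMMENT ON TABLE {schema}."{table}" IS %s', (comment,))
    conn.commit()
```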
kromem t1_je84zam wrote
> Automating this is going to be nuts.
Yes, yes it is.