Submitted by [deleted] t3_11gljui in singularity
wisintel t1_japjask wrote
Reply to comment by Slow-Schedule-7725 in Really interesting article on LLM and humanity as a whole by [deleted]
Actually, the makers of ChatGPT can't fully explain how it decides what to say in answer to a question. My understanding is that there is effectively a black box between the training data and the answers the model gives.
Baturinsky t1_jaqdap0 wrote
Not exactly. There are methods to analyse an LLM to figure out, say, which "neurons" do what, but those methods are still quite undeveloped.
https://alignmentjam.com/post/quickstart-guide-for-mechanistic-interpretability
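To make the "which neurons do what" idea concrete, here is a minimal toy sketch: run two contrasting inputs through a tiny hand-made network and see which hidden unit's activation changes most. Everything here (the weights, the network shape) is invented for illustration; real mechanistic interpretability work does this on trained transformer weights, not a random toy model.

```python
import numpy as np

# Toy 2-layer network with made-up random weights (illustration only).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # 4 input features -> 8 hidden "neurons"
W2 = rng.standard_normal((8, 2))   # 8 hidden -> 2 outputs

def forward(x):
    hidden = np.maximum(0, x @ W1)  # ReLU activations we want to inspect
    return hidden, hidden @ W2

# Compare activations on two contrasting inputs to see which neuron
# is most sensitive to the difference between them.
x_a = np.array([1.0, 0.0, 0.0, 0.0])
x_b = np.array([0.0, 0.0, 0.0, 1.0])
h_a, _ = forward(x_a)
h_b, _ = forward(x_b)
most_selective = int(np.argmax(np.abs(h_a - h_b)))
print("hidden neuron most sensitive to the input change:", most_selective)
```

The hard part in practice isn't recording activations (that's easy, as above); it's interpreting thousands of such units in a model with billions of parameters, which is why the field is still undeveloped.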
gskrypka t1_jar05nt wrote
As far as I understand, we cannot reverse-engineer exactly how a given text is generated because of the huge number of parameters, but I believe we do understand the basic principles of how these models work.
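The "basic principle" being referred to can be sketched in a few lines: the model assigns a score (logit) to every token in its vocabulary, a softmax turns those scores into probabilities, and the next token is sampled. The tiny vocabulary and the logit values below are made up for illustration; a real model produces logits over tens of thousands of tokens.

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat"]
logits = np.array([2.0, 0.5, 1.0, -1.0])  # pretend model output (invented)

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
rng = np.random.default_rng(0)
next_token = vocab[rng.choice(len(vocab), p=probs)]
print("probabilities:", np.round(probs, 3))
print("sampled next token:", next_token)
```

This is the well-understood part; the black box is *why* the trained weights assign those particular logits, which is what interpretability research tries to unpack.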