Submitted by IluvBsissa t3_11e7csf in singularity
TFenrir t1_jacuy0q wrote
I think this is really hard to predict, because there are many different paths forward. What if LLMs get good at writing minified code directly? What if they make their own programming language? What happens with new architectures that have something like RETRO-style memory stores built in? Heck, even current vector stores allow for some really impressive things. There are tons of architectures that could come into play that would make a maximum context window of 32k tokens more than enough, or maybe 100k is needed. There was a paper I read a while back that experimented with context windows that large.
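For what it's worth, the core idea behind those vector stores is simple: embed text as vectors, then retrieve the most similar stored snippets for a query, so relevant context can be pulled in without living inside the model's context window. Here's a minimal sketch with toy hand-written embeddings (in reality they would come from an embedding model; the snippets and vectors here are made up for illustration):

```python
import numpy as np

# Hypothetical toy vector store: snippet text -> embedding.
# Real systems would get these vectors from an embedding model.
store = {
    "the cat sat on the mat": np.array([0.9, 0.1, 0.0]),
    "stock prices fell today": np.array([0.0, 0.2, 0.9]),
    "a kitten napped on the rug": np.array([0.8, 0.2, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, k=2):
    # Rank stored snippets by similarity to the query embedding,
    # return the top-k texts to stuff into the LLM's prompt.
    ranked = sorted(store.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

query = np.array([0.85, 0.15, 0.05])  # pretend this embeds "cat on a mat"
print(retrieve(query))  # the two cat-related snippets come back first
```

That's the whole trick: the store can hold far more than any context window, and only the top-k hits get handed to the model per request.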
Also, you should look into Google Pitchfork, the code name for a project Google is working on that is essentially an LLM tied to a codebase, one that can iteratively improve that codebase through natural language requests.
My gut says that by this summer we will start to see very interesting small apps built with unique architectures, i.e. LLMs iteratively improving a codebase. I don't know where it will go from there.
IluvBsissa OP t1_jacxdtx wrote
Oooh, I totally forgot about the "Top-Secret" Pitchfork project! I really hope it gets somewhere.