Submitted by CommunismDoesntWork t3_zy9soz in singularity
Belostoma t1_j256ays wrote
Reply to comment by CommunismDoesntWork in ChatGPT is cool, but for the next version I hope they make a ResearchAssistantGPT by CommunismDoesntWork
>I imagine a research-oriented GPT could keep going deeper and deeper until it hits the current limit of our understanding of a particular subject.
The problem is that you're running up against the limits of what the tech behind ChatGPT can do. It doesn't understand anything it's saying; it's just good at predicting which word should come next when it has lots of training data to go on. When you start talking about technical details that have maybe only been addressed in a few scientific publications, assembling those details into a coherent summary requires understanding the meaning behind the words; it can't be done from language patterns alone. Even something as basic as knowing which details are extraneous and which belong in a summary requires a large sample size, to see which pieces of language are common to many sources and which are specific to one document. There's just not enough training data for the deep dives you seek.
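To make the "predicting what word should come next" point concrete, here's a deliberately toy bigram model (nothing like ChatGPT's actual transformer architecture, just an illustration of count-based next-word prediction). Note how it works fine for frequent words but has nothing to say about a word it has never seen, which is the data-sparsity problem for niche scientific topics:

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus for illustration only.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count which word follows which: the crudest possible "language model".
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "cat" -- plenty of training examples
print(predict_next("quark"))  # None -- no data, no prediction
```

Real LLMs generalize far better than raw counts, but the underlying issue carries over: when a topic appears in only a handful of documents, there is little statistical signal to learn from.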
>Where I think a research assistant GPT would really shine is by being able to absorb all of these independent data points and instantly making the connections.
I think this is a great possibility for a research assistant AI eventually, but it will probably require advances in a different type of AI than the language models ChatGPT is using.
SgathTriallair t1_j25bvt9 wrote
This is exactly it. A language model can't make good predictions when only a handful of people have ever written about a specific topic, which is exactly the situation with deep scientific topics.
CommunismDoesntWork OP t1_j259z6s wrote
ChatGPT has been shown to have problem-solving and analytical reasoning skills. It can also explain the reason behind its answers. It can be confidently incorrect sometimes, but ChatGPT is for sure more than just "predicting what word should come next". There's a spark of AGI in it, even if it's not perfect. Transformers have been shown to be Turing complete, so there's nothing fundamentally limiting them.
Belostoma t1_j25kjax wrote
>It can also explain the reason behind its answers. It can be confidently incorrect sometimes, but ChatGPT is for sure more than just "predicting what word should come next".
But the explanation of the reasoning is just part of what should come next, based on other people having explained their reasoning similarly in similar contexts. It's still basically pattern-matching autocomplete, just an insanely good one.