Submitted by CommunismDoesntWork t3_zy9soz in singularity
Belostoma t1_j24yu83 wrote
New research hyper-relevant to mine is likely to cite at least one of my papers, so I already get an alert. And ChatGPT wouldn't write a better summary of it than the authors did in the abstract. So I don't see the specific case you describe being especially useful.
There are many times when my research takes me into a new sub-field for just one or two questions ancillary to my own work, and I could see a more advanced, research-oriented form of ChatGPT (especially one that can cite and quote its sources) being potentially useful for the early stages of exploring a new idea and an unfamiliar body of work.
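(As a rough sketch of what "cite and quote its sources" could look like mechanically: retrieve the most relevant passage for a question before generating anything, so the answer can point back to a concrete source. The snippet below only assumes scikit-learn for a TF-IDF retrieval step; the passages, citation labels, and question are made-up placeholders, not real papers.)

```python
# Bare-bones sketch of the "retrieve, then quote the source" idea.
# Assumes scikit-learn; passages, labels, and question are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = {
    "Doe et al. 2021 (Methods)": "Growth rates were estimated with a hierarchical Bayesian model fit to mark-recapture data.",
    "Smith 2019 (Discussion)": "Water temperature explained most of the variation in emergence timing across sites.",
}
question = "What statistical approach was used to estimate growth rates?"

# Put the passages and the question in the same TF-IDF space.
vectorizer = TfidfVectorizer()
passage_vecs = vectorizer.fit_transform(list(passages.values()))
question_vec = vectorizer.transform([question])

# Score each passage against the question and quote the best match with its citation.
scores = cosine_similarity(question_vec, passage_vecs)[0]
best = int(scores.argmax())
source, text = list(passages.items())[best]
print(f'{source}: "{text}" (similarity {scores[best]:.2f})')
```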
CommunismDoesntWork OP t1_j2531og wrote
>being potentially useful for the early stages of exploring a new idea and an unfamiliar body of work.
Exactly, this is what I had in mind when I was quizzing ChatGPT on the immune system. I wanted it to teach me basically everything there is to know about the immune system, which is something I know almost nothing about. If you keep asking ChatGPT "why", it will eventually bottom out and won't go into any more detail, whereas I imagine a research-oriented GPT could keep going deeper and deeper until it hits the current limit of our understanding about a particular subject.
>New research hyper-relevant to mine is likely to cite at least one of my papers, so I already get an alert.

>There are many times when my research takes me into a new sub-field for just one or two questions ancillary to my own work
But how do you know a completely separate area isn't relevant to your work? Not a sub-field, but a completely separate area. Let's say a team is trying to cure Alzheimer's. At the same time, a different team is working to cure AIDS. The AIDS group makes a discovery about biology that at first looks applicable only to AIDS, so only people studying AIDS learn about it. But as the Alzheimer's team uncovers more raw facts about Alzheimer's, they find one that, when combined with the AIDS discovery, could create a cure for Alzheimer's. Then many years go by without anyone making the connection, or in the worst case the Alzheimer's team randomly rediscovers the same thing the AIDS team discovered years ago. Where I think a research-assistant GPT would really shine is in being able to absorb all of these independent data points and instantly make the connections. If it speeds up research by even a week, it would totally be worth it.
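(One hedged sketch of the "making connections" part: embed abstracts from two unrelated literatures and flag the most similar pairs for a human to look at. This assumes the sentence-transformers package and its all-MiniLM-L6-v2 model; the abstracts below are invented placeholders, not real findings.)

```python
# Hypothetical sketch of cross-field connection finding: embed abstracts from two
# unrelated literatures and surface the most similar pairs for a human to review.
# Assumes sentence-transformers; all abstracts are invented placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

alzheimers_abstracts = [
    "Protein aggregates in neurons are reduced when microglial clearance pathways are upregulated.",
    "Plaque burden correlates with impaired synaptic signaling early in disease progression.",
]
aids_abstracts = [
    "A viral protein is cleared from infected cells through an upregulated microglial pathway.",
    "Long-term adherence to antiretroviral therapy improves immune recovery.",
]

emb_alz = model.encode(alzheimers_abstracts, convert_to_tensor=True)
emb_aids = model.encode(aids_abstracts, convert_to_tensor=True)

# Cosine similarity between every Alzheimer's abstract and every AIDS abstract.
scores = util.cos_sim(emb_alz, emb_aids)
for i in range(len(alzheimers_abstracts)):
    j = int(scores[i].argmax())
    print(f"Alzheimer's #{i} <-> AIDS #{j}: similarity {scores[i][j].item():.2f}")
```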
Belostoma t1_j256ays wrote
>I imagine a research-oriented GPT could keep going deeper and deeper until it hits the current limit of our understanding about a particular subject.
The problem is that you're running up against the limits of what the tech behind ChatGPT can do. It doesn't understand anything it's saying; it's just good at predicting what word should come next when it has lots of training data to go on. When you start talking about technical details that have maybe only been addressed in a few scientific publications, it takes some understanding of the meaning behind the words to assemble those details into a coherent summary; it can't be done from language patterns alone. Even something like knowing which details are extraneous and which belong in a summary requires a large sample size, to see which pieces of language are common to many sources and which are specific to one document. There's not enough training data for the deep dives you seek.
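(For what it's worth, here's a minimal sketch of what "predicting what word should come next" literally looks like, using the small open GPT-2 model from the Hugging Face transformers library as a stand-in; ChatGPT's actual model and training are different and far larger.)

```python
# Minimal next-word-prediction demo with GPT-2 as a stand-in for ChatGPT's
# (much larger, differently trained) model. Assumes torch and transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The immune system protects the body by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the single token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob.item():.3f}")
```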
>Where I think a research assistant GPT would really shine is by being able to absorb all of these independent data points and instantly making the connections.
I think this is a great possibility for a research assistant AI eventually, but it will probably require advances in a different type of AI than the language models ChatGPT is using.
SgathTriallair t1_j25bvt9 wrote
This is exactly it. A language model can't make good predictions when only a handful of people have ever written about a specific topic, which is bound to happen with deep scientific topics.
CommunismDoesntWork OP t1_j259z6s wrote
ChatGPT has been shown to have problem-solving and analytical reasoning skills. It can also explain the reasoning behind its answers. It can be confidently incorrect sometimes, but ChatGPT is for sure more than just "predicting what word should come next". There's a spark of AGI in it, even if it's not perfect. Transformers have been shown to be Turing complete, so there's nothing fundamentally limiting it.
Belostoma t1_j25kjax wrote
>It can also explain the reasoning behind its answers. It can be confidently incorrect sometimes, but ChatGPT is for sure more than just "predicting what word should come next".
But the explanation of the reasoning is just a part of what should come next, based on other people having explained reasoning similarly in similar contexts. It's still basically a pattern-matching autocomplete, just an insanely good one.