Submitted by oddlyspecificnumber7 t3_10bt523 in singularity
I've been seeing this mentality a lot. People in my personal social circles went from claiming that AI-created art would be literally impossible, or a century away, to saying that it isn't impressive for any number of reasons. All of this over the course of about a year.
I think the same thing is happening with LLMs like ChatGPT. How long before we hear "sure, it can solve any problem a 2nd-year college student could solve, but it will never do original research"? As if that wouldn't mean we are just a generation or two away from having language models doing original physics/biology/mathematics research.
The whole world is about to be turned upside down by this technology, and I'm pretty sure we are not ready. If you had asked me last year when human-level AGI would arrive, I would have guessed at least 2050+. Now? I'd say it's 50/50 that we will have something very close to an average human within the next 5 years.
Thoughts?
Mylnternet t1_j4cnya0 wrote
There is a great quote by Ray Kurzweil, that goes something like this.
"When a new technology arrives, people dissmiss it because it doesn't work very well. Then after it improves they will say we've always had that"
I think he said it on Lex Fridman's podcast, but I can't find it right now.