Submitted by EntireContext t3_zasjrg in singularity
EntireContext OP t1_iyntah2 wrote
Reply to comment by ReadSeparate in Have you updated your timelines following ChatGPT? by EntireContext
Well, they'll make a better algorithm than transformers then (transformers themselves have already been improved into variants like Performers).
At any rate, I still see AGI in 2025.
EpicMasterOfWar t1_iyo3tr2 wrote
Based on what?
EntireContext OP t1_iyo9fg4 wrote
The difference between what was possible in 2019 and what the models can do now.
Back when GPT-2 came out, it could barely produce coherent sentences.
This ChatGPT model still makes mistakes, but it always speaks coherently.
ReadSeparate t1_iyo883j wrote
I do agree with this comment. It's feasible that long-term memory isn't required for AGI (though I think it probably is), or that hacks like reading from and writing to a database will be able to simulate long-term memory.
I think it may take longer than 2025 to replace transformers though. They’ve been around since 2017 and we haven’t seen any real promising candidates yet.
I can definitely see a scenario where GPT-5 or 6 has prompts built into its training data which are designed to teach it to utilize database reads/writes.
Imagine it says hello to you by name after seeing your name only once, six months ago. It could have a database-read token with sub-input tokens that fetch your name from a database based on some sort of identifier (roughly along the lines of the sketch below).
It could probably get really good at doing this too if it’s actually in the training data.
Eventually, I could see the model using its coding knowledge to design the database/prompting system on its own.
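For what it's worth, a toy version of that read/write-token idea might look something like this. Everything here is invented for illustration: the <MEM_READ>/<MEM_WRITE> token format, the key scheme, and the wrapper code are assumptions, not how any GPT model is actually trained or deployed.

```python
# Toy sketch of a "database read/write token" memory hack (all names hypothetical).
# Idea: the model emits special tokens like <MEM_WRITE key="...">value</MEM_WRITE>
# or <MEM_READ key="..."/>, and a thin wrapper resolves them against a key-value
# store before the conversation continues.

import re
import sqlite3

class MemoryStore:
    """Tiny key-value store standing in for the model's long-term memory."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)"
        )

    def write(self, key, value):
        self.db.execute(
            "INSERT OR REPLACE INTO memory (key, value) VALUES (?, ?)", (key, value)
        )
        self.db.commit()

    def read(self, key):
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else ""

# Hypothetical token formats the model might be trained to emit.
WRITE_TOKEN = re.compile(r'<MEM_WRITE key="([^"]+)">(.*?)</MEM_WRITE>', re.S)
READ_TOKEN = re.compile(r'<MEM_READ key="([^"]+)"/>')

def resolve_memory_tokens(model_output: str, store: MemoryStore) -> str:
    """Apply any writes the model asked for, then substitute reads with stored values."""
    for key, value in WRITE_TOKEN.findall(model_output):
        store.write(key, value.strip())
    cleaned = WRITE_TOKEN.sub("", model_output)
    return READ_TOKEN.sub(lambda m: store.read(m.group(1)), cleaned)

if __name__ == "__main__":
    store = MemoryStore()
    # Six months ago: the model decided the user's name was worth remembering.
    resolve_memory_tokens(
        '<MEM_WRITE key="user:42:name">Alex</MEM_WRITE>Nice to meet you!', store
    )
    # Today: the model emits a read token instead of guessing the name.
    print(resolve_memory_tokens('Hello again, <MEM_READ key="user:42:name"/>!', store))
    # -> Hello again, Alex!
```

The point is just that the "memory" lives outside the weights: if the model learns to emit these tokens at the right moments (because examples of doing so are in its training data), an ordinary database does the remembering for it.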
ChronoPsyche t1_iyp084x wrote
Eventually, but without any knowledge of specific breakthroughs that will happen very shortly, your 2025 estimate is an uninformed guess at best.
EntireContext OP t1_iyskmjg wrote
I don't see a need for specific breakthroughs. I believe the rate of progress we've been seeing since 2012 will get us to AGI by 2025.
ChronoPsyche t1_iytra7q wrote
Well you can believe whatever you want but you're not basing those beliefs on anything substantive.
Honestly, the rate of progress since 2012 has been very slow. It's only in the past few years that things have picked up substantially and that was only because of recent breakthroughs with transformer models.
That's kind of how the history of AI progress has worked. We typically have a breakthrough that leads to a surge in progress, which eventually plateaus and stalls for a while as bottlenecks are reached, until a new breakthrough arrives and there's another surge.
It's not guaranteed there will be another plateau before AGI, but we're gonna need new breakthroughs to get there, because as I said, we are approaching bottlenecks with the current technology that will slow down the rate of progress.
That's not necessarily a bad thing, by the way. Our society isn't currently ready to handle AGI. It's good to have some time pass to actually integrate the new technology rather than developing it faster than we can even use it.