Submitted by levoniust t3_10zsymm in singularity
_sphinxfire t1_j85ota3 wrote
Reply to comment by Hunter62610 in Everybody is always talking about AGI. I'm more curious about using the tools that we have now. by levoniust
I'm not sure what your "linear thoughts" look like if you think a language model that constantly updates itself to maintain its predictive ability is "thinking" in any way. Where does agency come into the picture here? Where's the goal structure?
PleasantlyUnbothered t1_j85u2su wrote
I imagine a huge slender cylindrical (whatever shape) “database” existing at the center, and “linear thoughts” would be data streams propagating out from this central position while remaining connected to it, like small tendrils. Each tendril would be 1 conversation, or a line of thought, similar to the stream of consciousness literary device.
When that instance, conversation, thought, etc. is deemed “complete”, either by the outside environment or internal timers, the tendril folds itself perfectly back into that center database. This lets the AI “re-center” itself, seamlessly (once systems have been perfected) integrating the most recent information into its already existing database, and lets it bring that entire knowledge base into the next virtual space it occupies.
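The flow described above could be sketched as a toy Python illustration. To be clear, this is only a sketch of the commenter's idea, not any real system: `KnowledgeCore`, `Tendril`, and every method name here are invented stand-ins.

```python
# Hypothetical sketch of the "tendril" architecture described above.
# A central knowledge base spawns conversation threads ("tendrils") that
# stay connected to it, then fold their contents back in when complete.
# All names are illustrative assumptions, not from any actual AI system.

class KnowledgeCore:
    """The central 'database' that tendrils branch from and fold back into."""

    def __init__(self):
        self.facts = {}  # accumulated knowledge, keyed by topic

    def spawn_tendril(self):
        return Tendril(self)

    def integrate(self, stream):
        # "Re-center": merge a completed tendril's stream into the core,
        # with the most recent information winning on conflict.
        self.facts.update(stream)


class Tendril:
    """One conversation or line of thought, still connected to the core."""

    def __init__(self, core):
        self.core = core
        self.stream = {}  # stream-of-consciousness working memory

    def observe(self, topic, info):
        self.stream[topic] = info

    def recall(self, topic):
        # Lookups fall through to the core the tendril remains connected to.
        return self.stream.get(topic, self.core.facts.get(topic))

    def complete(self):
        # Deemed "complete": fold everything back into the center database.
        self.core.integrate(self.stream)
        self.stream.clear()


core = KnowledgeCore()
t1 = core.spawn_tendril()
t1.observe("topic_of_chat", "AGI tools")
t1.complete()

# A later tendril occupies a new "virtual space" but carries the whole
# knowledge base with it, including what the first tendril folded in.
t2 = core.spawn_tendril()
print(t2.recall("topic_of_chat"))  # → AGI tools
```

The key design choice this sketch captures is that a tendril reads through to the core while it is alive, but only writes back at completion, matching the "fold itself back into that center database" step in the comment.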
Hope this helps!