
RadioFreeAmerika t1_jdrezc7 wrote

Could be. I asked in another post about LLMs and maths capabilities, and it seems that LLMs would benefit greatly from the ability to run internal simulations. Current LLMs can't do this, and people commented that the Microsoft paper states that (current?) LLMs are conceptually limited to linear, single-pass processing of one sequence. Possible workarounds are plug-ins or neuro-symbolic AI models.
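
To make the plug-in idea concrete, here's a minimal sketch (the `CALC(...)` tool-call convention and the `answer_with_tool` wrapper are hypothetical, not from any real system): instead of having the model generate the arithmetic result token by token, an expression it emits is routed to an exact evaluator.

```python
import ast
import operator

# Hypothetical "plug-in" pattern: arithmetic in the model's output is
# delegated to an exact evaluator instead of being generated token by token.

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def evaluate(expr: str):
    """Safely evaluate a plain arithmetic expression (no names, no calls)."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("Unsupported expression element")
    return _eval(ast.parse(expr, mode="eval"))

def answer_with_tool(llm_output: str) -> str:
    """If the (hypothetical) model emits a CALC(...) tool request, run it exactly."""
    if llm_output.startswith("CALC(") and llm_output.endswith(")"):
        return str(evaluate(llm_output[5:-1]))
    return llm_output  # ordinary text passes through unchanged

# Example: the model answers "what is 1234 * 5678?" with a tool call
print(answer_with_tool("CALC(1234 * 5678)"))  # 7006652
```

The point is just that the exactness comes from the external tool, not from the model's sequential token prediction.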

Nevertheless, maybe our reality is just the internal simulation of an ASI's prompt response. Who knows? Wouldn't that be ironic?

Your second question is an eons-long discussion and greatly depends on how you define god.
