Beor_The_Old t1_izc1lpo wrote
Reply to comment by hophophop1233 in [D] We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything! by MetaAI_Official
Bridging RL and planning, and connecting both to NLP, has been an area of interest for a while. You could adjust large language models to have more of a personality and an understanding of the concept of a state. This is at least a good demonstration that those three concepts can be connected well in an HCI setting.
JustOneAvailableName t1_izc2l2j wrote
> understanding of the concept of a state.
OSeady t1_izcsujq wrote
Is this for real?
blose1 t1_izczrn8 wrote
I asked ChatGPT about your comment:
>someone asked on reddit "is this for real?" about an article with the title "Building A Virtual Machine inside ChatGPT", how would you respond to him?
A: It's difficult to say without more context. It's possible that the article in question is discussing the use of a large language model, such as GPT-3, to create a virtual machine within a chat environment. However, without being able to view the article and verify its contents, it's impossible to say for sure whether the claim is true or not. In general, it's always a good idea to approach claims with a healthy dose of skepticism and to verify information before accepting it as fact.
>the article is indeed about creating a virtual machine inside a large language model and simulating running different commands while the model returns accurate responses, specifically using you - ChatGPT. This person is indeed skeptical. Knowing that additional context now, how would you respond to that person?
A: It's certainly possible to use a large language model, such as ChatGPT, to simulate running commands within a virtual machine. Language models are able to generate text based on a given input, so they can be used to generate responses to a wide range of prompts. In the case of a virtual machine, the prompts might be commands that a user could enter, and the responses would be the output of those commands. However, it's important to note that these responses are generated by the model and are not the result of the model actually executing the commands. In other words, while the model may be able to generate responses that are similar to the output of a real virtual machine, it is not actually running the commands and therefore is not a true virtual machine.
OSeady t1_izd05fm wrote
Incredible. Thank you.
Sinity t1_izcwah8 wrote
Yes. Though it's not actually simulating the machine - it's just superficially good at pretending.
cat "asdf" > file.txt
Works well. file.txt
is visible if you ls
. Then you maybe do cat "qwerasdf" > somedata.dat
... and on ls
it, again, shows up. But maybe ChatGPT forgot about file.txt
and it doesn't show up anymore.
TBF, humans wouldn't necessarily even outperform it on "pretending to run a computer" (not just an algorithm, but an actual OS and such).

I think scale would make it much better at keeping track of state.
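If you want to poke at this yourself, here is a minimal sketch of the "act as a Linux terminal" prompting trick the article describes. It assumes the (later) OpenAI chat completions Python API rather than the ChatGPT web UI the article actually used, and the model name and prompt wording are illustrative guesses, not the article's exact setup.

```python
import openai

# Hypothetical system prompt in the spirit of the article; not its exact wording.
SYSTEM_PROMPT = (
    "I want you to act as a Linux terminal. I will type commands and you will "
    "reply with what the terminal should show, inside a single code block, "
    "and nothing else."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def run(command: str) -> str:
    """Send one pretend shell command and return the model's pretended output."""
    history.append({"role": "user", "content": command})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model; any chat model would do
        messages=history,
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# Reproduce the file.txt example from the comment above (echo rather than cat,
# since that is what actually writes the string in a real shell). The whole
# "filesystem" exists only as conversation history, so earlier files can
# silently vanish once the model stops attending to that part of the context.
print(run('echo "asdf" > file.txt'))
print(run('echo "qwerasdf" > somedata.dat'))
print(run("ls"))  # may or may not still list file.txt
```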
[deleted] t1_izcyuz0 wrote
[deleted]