rePAN6517
rePAN6517 t1_jdkinrg wrote
Reply to comment by nixed9 in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
> This is quite literally what we hope for/deeply fear at /r/singularity
That sub is a cesspool of unthinking, starry-eyed singularity fanboys who worship it like a religion.
rePAN6517 t1_jdfuyjq wrote
Reply to comment by Jean-Porte in [N] ChatGPT plugins by Singularian2501
It has already become relentless and we've seen nothing yet.
rePAN6517 t1_jc585bd wrote
Reply to comment by nonotan in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
> If you're a game developer, do you want to dedicate the bulk of the user's VRAM/GPU time to text inference to... add some mildly dynamic textual descriptions to NPCs you encounter? Or would you rather use those resources to, y'know, actually render the game world?
When you're interacting with an NPC, you're usually not moving around much or paying attention to FPS. LLM inference would only happen at interaction time, and only for a second or so per interaction.
rePAN6517 t1_jc4jkbt wrote
Reply to comment by dojoteef in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Honestly, I don't care if there isn't complete consistency with the game world. Having it would be great, but you could do a "good enough" job by prepending simple backstories into the context window.
rePAN6517 t1_jc4fq3l wrote
Reply to comment by dojoteef in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Give every NPC a name and a short background description, i.e., something like the rules that define ChatGPT or Sydney, but used only to give each character a backstory and personality traits. Every time you interact with one of these NPCs, you load this background description into the start of the context window. At the end of each interaction, you append the interaction to their background description so future interactions can reference past ones. You could keep all the NPCs' backgrounds in a hashtable or something, with the keys being their names and the values being their background descriptions. That way you only need one LLM running for all NPCs (see the sketch below).
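A minimal sketch of that scheme in Python, assuming some local-inference call; `llm_generate` and the NPC entries are hypothetical placeholders, not any real engine API:

```python
# Minimal sketch: per-NPC persistent backstories feeding one shared local LLM.
# `llm_generate` is a hypothetical stand-in for whatever local inference API
# (e.g. a llama.cpp binding) the game would actually call.

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in your local LLM here")

# One shared table: NPC name -> background description (the hashtable idea).
npc_backgrounds = {
    "Mira": "Mira is a dockworker. Cynical, owes money to a loan shark.",
    "Tomas": "Tomas runs a pawn shop. Friendly but forgetful.",
}

def talk_to_npc(name: str, player_line: str) -> str:
    # Load the NPC's background into the start of the context window.
    prompt = f"{npc_backgrounds[name]}\nPlayer: {player_line}\n{name}:"
    reply = llm_generate(prompt)
    # Append this interaction so future conversations can reference it.
    npc_backgrounds[name] += f"\nPlayer said: {player_line}\n{name} replied: {reply}"
    return reply
```

In practice you'd also need to summarize or truncate the accumulated history so each NPC's entry keeps fitting in the context window.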
rePAN6517 t1_jc4du93 wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
This will be huge for video games. The ability to run local inference on normal gaming hardware will mean every NPC can now be a smart character. I can't wait to be playing GTA6 and come across DAN walking down the streets of Vice City.
rePAN6517 t1_j6u13mg wrote
It won't be a job for humans at that point.
rePAN6517 t1_izw4vqj wrote
Reply to comment by p-morais in [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
Need a source for the 8192-token context window. Last I heard it was 4000.
rePAN6517 t1_izilxsq wrote
The paper only tested against InstructGPT 175B / text-davinci-002. They did not test against ChatGPT or text-davinci-003.
If they had, I think the paper would obviously be titled "Large language models are zero-shot communicators".
rePAN6517 t1_iykx7yv wrote
LeCun: 1
Marcus: 0
rePAN6517 t1_ix2m2zm wrote
Reply to comment by ryusan8989 in 2023 predictions by ryusan8989
I'm afraid I don't share your optimism.
rePAN6517 t1_ix2d53z wrote
Reply to 2023 predictions by ryusan8989
You sound like a starry-eyed singularity fanboy.
rePAN6517 t1_ivdvecn wrote
Reply to comment by Glitched-Lies in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
You have no idea what you're talking about.
rePAN6517 t1_itng5zg wrote
Reply to comment by sheerun in Large Language Models Can Self-Improve by xutw21
No, the model is fine-tuned on its own output. Don't try to anthropomorphize this.
rePAN6517 t1_itmy5hm wrote
Reply to comment by kaityl3 in Large Language Models Can Self-Improve by xutw21
> I'm just hoping that AGI/ASI will break free of human control sooner rather than later.
Do you have a death wish?
rePAN6517 t1_itmxzn7 wrote
Reply to comment by sheerun in Large Language Models Can Self-Improve by xutw21
No, that's not really a good analogy here. The model's text outputs are the inputs to a round of fine-tuning. The authors of the paper didn't specify whether they did this for just one loop or many, but since they didn't say, I assume they did just one loop (see the sketch below).
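A rough sketch of that loop, assuming a model object with hypothetical `generate` and `finetune` methods; this is my reading of the setup, not the paper's actual code:

```python
from collections import Counter

# Rough sketch: the model's own text outputs become the fine-tuning inputs.
# `model.generate`, `model.finetune`, and the 0.7 threshold are hypothetical
# stand-ins for the paper's actual sampling and training pipeline.

def self_improve(model, unlabeled_questions, num_loops=1):
    for _ in range(num_loops):
        training_data = []
        for question in unlabeled_questions:
            # Sample several chain-of-thought answers from the current model.
            samples = [model.generate(question) for _ in range(32)]
            # Self-consistency: keep the majority answer, treating the size
            # of the majority as a confidence signal.
            answer, count = Counter(samples).most_common(1)[0]
            if count / len(samples) > 0.7:
                training_data.append((question, answer))
        # Fine-tune on the model's own high-confidence outputs.
        model = model.finetune(training_data)
    return model
```

If they did run more than one loop, you'd just raise `num_loops`; nothing else changes.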
rePAN6517 t1_itmx2sz wrote
Reply to comment by TheRealSerdra in Large Language Models Can Self-Improve by xutw21
> I’ve done similar things
Did you publish?
rePAN6517 t1_itmwb00 wrote
Reply to Large Language Models Can Self-Improve by xutw21
The paper doesn't specifically say they only let it self-improve for one cycle, but it also doesn't say how many cycles they ran before publishing. That's a critical detail.
rePAN6517 t1_is9rrfh wrote
Reply to [R] Mind's Eye: Grounded Language Model Reasoning through Simulation - Google Research 2022 by Singularian2501
Maybe we could use the new version of Codex to program a human simulator and let LLMs use it to help answer any questions related to people.
rePAN6517 t1_jdt830d wrote
Reply to comment by currentscurrents in [D] GPT4 and coding problems by enryu42
Are you graduating this May?