omgpop t1_javey7y wrote
Reply to comment by rumovoice in [P] LazyShell - GPT based autocomplete for zsh by rumovoice
How does it get the current dir in your example?
omgpop t1_jav1omw wrote
Does it/could it send your directory/file tree as part of the prompt?
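A minimal sketch of how a plugin like this could fold the current directory and a shallow file listing into the model prompt — this is an assumption about how such a tool might work, not LazyShell's actual code; the helper my_llm_query, the prompt layout, and the ^G keybinding are all hypothetical:

    # Hypothetical zle widget: gather local context, ask the model,
    # and replace the command line with its suggestion.
    _gpt_complete() {
      local context prompt
      # $PWD plus a truncated listing gives the model directory context.
      context="cwd: $PWD"$'\n'"files: $(ls -1 | head -n 20 | tr '\n' ' ')"
      prompt="$context"$'\n'"partial command: $BUFFER"
      BUFFER="$(my_llm_query "$prompt")"   # my_llm_query is a stand-in for the API call
      CURSOR=${#BUFFER}
    }
    zle -N _gpt_complete
    bindkey '^G' _gpt_complete            # keybinding chosen for illustration

Whether to send the file tree at all is a privacy/token-budget tradeoff: more context usually means better completions, but everything in the prompt leaves the machine.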
omgpop t1_j6cmydr wrote
Reply to comment by JohnConquest in [R] InstructPix2Pix: Learning to Follow Image Editing Instructions by Illustrious_Row_9971
There’s Buzz.
omgpop t1_j28w0m9 wrote
Does not work for me at all on iPhone XS. All photos are indexed, yet the search finds nothing. Want my money back lol. Since there are no settings, there's nothing to troubleshoot. It simply does not work; every search produces 0 results.
omgpop t1_jdgz4xl wrote
Reply to comment by Maleficent_Refuse_11 in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
If I understand correctly, the model is optimised purely to predict the next word effectively. That says nothing about its internal representations or lack thereof: it could well be forming internal representations as an efficient strategy for predicting the next word. As Sam Altman pointed out, we’re optimised to reproduce and nothing else, yet look at the complexity of living organisms.
EDIT: Just to add, it’s not quite the same thing, but another way of thinking about “most probable next word” is “the word that a person would be most likely to write next” (assuming the training data is based on human writing). One way to get really good at approximating what a human would likely write, given certain information, would be to actually approximate human cognitive structures internally.
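For concreteness, “optimised to predict the next word” usually means minimising a cross-entropy loss over text. A generic form (not GPT-4’s exact training setup, which is undisclosed) is:

    % Next-token objective over a sequence x_1, ..., x_T; minimising this
    % pushes the model distribution p_theta toward the distribution of the
    % (human-written) training text.
    \mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_{<t}\right)

    % "Most probable next word" is then the greedy decode at step t:
    \hat{x}_t = \arg\max_{w}\, p_\theta\left(w \mid x_{<t}\right)

Nothing in this objective constrains what the model’s internal representations look like; any structure that lowers the loss — including human-like abstractions — is fair game.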