Submitted by [deleted] t3_yw3ear in singularity
phriot t1_iwhq4lq wrote
Reply to comment by Kaarssteun in models superior to GPT-3? by [deleted]
I tried this out just now. What it wrote for my prompt was okay, but the output was super incomplete.
Kaarssteun t1_iwhqov0 wrote
That's what the "generate more" button is for. One click and it extends the output by 400 tokens. Do that until it's done.
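If you'd rather script it than keep clicking, something like this should work with Hugging Face transformers. A rough sketch only: I'm assuming the facebook/galactica-125m checkpoint and plain greedy decoding, not whatever the demo site actually runs behind the button.

```python
# Rough sketch: emulate the demo's "generate more" button by
# extending the text in 400-token chunks until the model stops
# on its own. Assumes the facebook/galactica-125m checkpoint
# (the demo's real backend and decoding settings aren't public
# in this thread) and plain greedy decoding.
from transformers import AutoTokenizer, OPTForCausalLM

checkpoint = "facebook/galactica-125m"  # smallest variant, for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = OPTForCausalLM.from_pretrained(checkpoint)

prompt = "# Introduction to CRISPR-Cas9\n\n"  # hypothetical example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(5):  # cap it at five "generate more" clicks
    prev_len = input_ids.shape[-1]
    input_ids = model.generate(input_ids, max_new_tokens=400, do_sample=False)
    if input_ids.shape[-1] < prev_len + 400:
        break  # stopped short of a full chunk, so it's presumably done

print(tokenizer.decode(input_ids[0]))
```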
phriot t1_iwhrv3y wrote
Okay, I guess I probably should have just tried clicking it. Often a button with text like that does something like "try a new prompt" instead.
Kaarssteun t1_iwhs4yl wrote
let me know how it ends up doing!
phriot t1_iwhu5kg wrote
Overall, pretty decent. Maybe Wikipedia quality in terms of the information? That's actually better than I expected. It doesn't seem to be wholesale plagiarized from a website. (I don't have access to Turnitin anymore, and I haven't taken the time to find a free alternative to run it through.) That said, some sections have weird levels of detail, and the organization within each paragraph is simple and/or lacking. If you told me a computer wrote it, I would believe you, but I'd also believe you if you told me a random undergrad Biology student wrote it.
randomrealname t1_iwht9lg wrote
Does it also provide sources for the information it gives?
Kaarssteun t1_iwhtr3s wrote
from here: "Galactica models are trained on a large corpus comprising more than 360 million in-context citations and over 50 million unique references normalized across a diverse set of sources. This enables Galactica to suggest citations and help discover related papers."
Always remember, however, that the outputs of a language model are very prone to hallucination. I would not trust its outputs blindly.
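For what it's worth, the Galactica paper wraps references in special [START_REF] ... [END_REF] tokens, so ending a prompt with [START_REF] nudges the model into completing a citation. A minimal sketch, with the same assumed checkpoint as above; treat any suggested paper as unverified until you look it up, since the reference itself can be hallucinated.

```python
# Minimal sketch of citation prompting: ending the prompt with the
# [START_REF] token asks the model to complete a citation.
# Assumes the facebook/galactica-125m checkpoint; the suggested
# reference may be hallucinated, so verify it actually exists.
from transformers import AutoTokenizer, OPTForCausalLM

checkpoint = "facebook/galactica-125m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = OPTForCausalLM.from_pretrained(checkpoint)

prompt = "The attention mechanism in Transformers [START_REF]"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(output_ids[0]))
```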