Agreeable_Bid7037 t1_j1ej4hr wrote

I believe Galactica was taken down, though you can still read the papers that Meta published on it.

3

idrajitsc t1_j1fr8i4 wrote

It was, because a purported aid to scientific writing that confidently writes complete bullshit surprisingly has some downsides.

16

pyepyepie t1_j1hi1tv wrote

ChatGPT will do it too; it happily invented papers (with nice ideas! although most of the time it just merged two existing ideas) when I asked it to write a literature review. Then again, we face the trade-off between correct grounding and flexibility. My hypothesis is that the model was also trained on feedback from non-domain experts, so unless we solve grounding fundamentally I would even say this is the expected behavior: it was probably rewarded for producing statements that sound good even when incorrect, over statements that sound bad, which makes its hallucinations trickier to catch even if they happen less often. There is no reason to think fine-tuning will solve it.
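To make that hypothesis concrete, here is a minimal sketch (purely illustrative, not ChatGPT's actual training code) of the pairwise preference loss commonly used for RLHF-style reward models. The function name and toy numbers are mine; the point is that the objective only encodes which answer a rater preferred, so nothing in it checks factual correctness:

```python
# Illustrative sketch of a Bradley-Terry-style preference loss, as used in
# RLHF reward modelling. Assumed/hypothetical example, not any model's code.
import torch
import torch.nn.functional as F

def preference_loss(reward_preferred: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Push the rater-preferred answer's reward above the rejected one's.
    # Nothing here measures whether either answer is actually true.
    return -F.logsigmoid(reward_preferred - reward_rejected).mean()

# Toy usage: if a non-expert rater preferred a fluent-but-wrong answer over
# an awkward-but-correct one, the loss still rewards the fluent one.
r_fluent_wrong = torch.tensor([1.2])   # preferred by the rater (assumed)
r_awkward_true = torch.tensor([0.3])   # rejected despite being correct (assumed)
loss = preference_loss(r_fluent_wrong, r_awkward_true)
```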

2

idrajitsc t1_j1i9pwf wrote

Yeah, it's purely a language model; if its training has anything to do with information content and correctness, it's going to be very ad hoc and narrowly focused. All any of these models are really designed to do is sound good.
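For reference, this is roughly what "purely a language model" means in terms of the training objective: plain next-token cross-entropy, sketched below (illustrative only, shapes and numbers are made up). The loss only rewards producing text that looks like the training distribution; there is no term for whether a statement is true.

```python
# Illustrative sketch of the standard language-modelling objective.
import torch
import torch.nn.functional as F

def lm_loss(logits: torch.Tensor, next_tokens: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size); next_tokens: (batch, seq_len).
    # Next-token cross-entropy: the model is scored on predicting plausible
    # continuations, not on whether those continuations are factual.
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           next_tokens.reshape(-1))

# Toy usage with random tensors, just to show the shapes involved.
logits = torch.randn(2, 8, 1000)
next_tokens = torch.randint(0, 1000, (2, 8))
loss = lm_loss(logits, next_tokens)
```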

3