
EndTimer t1_j9kl706 wrote

We would have to read the study methodology to evaluate how they were testing GPT 3.5's image context.

But in this case, multimodal refers to being trained not just on text (like GPT 3.5), but also on images associated with that text.
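
Roughly, a single training example under that setup might look like the sketch below. This is just an illustration; the field names are hypothetical, not from the paper:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration: a text-only model trains on the question/answer
# text alone; a multimodal model additionally consumes the paired image.
@dataclass
class TrainingExample:
    question: str              # e.g. "Which animal is camouflaged here?"
    answer: str                # e.g. "arctic fox"
    image_path: Optional[str]  # None for text-only questions

ex = TrainingExample(
    question="Which animal's skin is adapted for cold places?",
    answer="polar bear",
    image_path="images/q1.png",  # a text-only corpus would drop this field
)
```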

That seems to have improved their model, which requires substantially fewer parameters while scoring higher, even in text-only domains.

4

FirstOrderCat t1_j9krjxb wrote

>which requires substantially fewer parameters while scoring higher, even in text-only domains.

Which tests in the paper cover text-only domains?

1

EndTimer t1_j9l1xxj wrote

Presumably the TXT (text context) ones. LAN (language sciences) is unlikely to have many images in its multiple-choice questions, and the other science domains and G1-12 probably have a majority of text questions.
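
Something like the per-category breakdown below is presumably how those columns get produced. The record fields and category labels here are hypothetical stand-ins, not taken from the paper:

```python
from collections import defaultdict

# Hypothetical records: each benchmark question tagged with a context
# type ("TXT", "IMG", "NO") and a subject ("LAN", "NAT", "SOC", ...).
results = [
    {"context": "TXT", "subject": "NAT", "correct": True},
    {"context": "IMG", "subject": "NAT", "correct": False},
    {"context": "NO",  "subject": "LAN", "correct": True},
]

def accuracy_by(key, rows):
    """Group rows by `key` and compute per-group accuracy."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row["correct"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(accuracy_by("context", results))  # per-context scores (TXT/IMG/NO)
print(accuracy_by("subject", results))  # per-subject scores (LAN, NAT, ...)
```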

1

FirstOrderCat t1_j9l4bho wrote

What is IMG for GPT there, then?

And how come GPT performed better with no context than with text context?

1

EndTimer t1_j9l4tc6 wrote

I don't know. It's going to be in the methodology of the paper, which neither of us has read.

1

FirstOrderCat t1_j9l8xn9 wrote

Yes, and then reproduce the results from both papers, check the code to make sure nothing creative is happening in the datasets or during training... and there are far more claims in academia than anyone has time to verify.

1

iamascii t1_j9o2v76 wrote

They used the captions instead of the images. The captions are pretty descriptive imho.
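
If so, the prompting step for image questions might look roughly like this. The prompt format and function are my guess at the general idea, not the paper's actual code; the key point is that a caption stands in for the image:

```python
def build_prompt(question: str, options: list[str], caption: str | None) -> str:
    """Assemble a text-only prompt, substituting a caption for the image.

    Hypothetical sketch: a text-only model like GPT 3.5 can't see the
    image, so its content is conveyed via a descriptive caption.
    """
    parts = []
    if caption:
        parts.append(f"Image description: {caption}")
    parts.append(f"Question: {question}")
    parts.append("Options: " + ", ".join(options))
    parts.append("Answer:")
    return "\n".join(parts)

print(build_prompt(
    "Which property do these objects have in common?",
    ["hard", "soft"],
    caption="A photo of a rock, a brick, and a marble.",
))
```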

1