Submitted by enryu42 t3_122ppu0 in MachineLearning
currentscurrents t1_jdrpl3u wrote
I'm not really surprised. Anybody who's extensively used one of these tools has probably already run into their reasoning limitations.
Today's entire crop of self-supervised models can learn complex ideas, but they have a hard time manipulating those ideas in complex ways. They can do a few operations on them (style transfer, translation, etc.), but high-level reasoning involves many more operations that nobody understands yet.
But hey, at least there will still be problems left to solve by the time I graduate!
enryu42 OP t1_jds18j4 wrote
I absolutely agree. However, these models have repeatedly exceeded expectations (e.g. 5 years ago I thought "explaining jokes" would be a hard problem for them, by similar reasoning...)
I tried this because I'd heard people inside the competitive programming community claiming that GPT-4 can solve these problems. But from what I gather, it is still not there.
rePAN6517 t1_jdt830d wrote
Are you graduating this May?
Disastrous_Elk_6375 t1_jdu2tzp wrote
badum-tsss