
Elegant_Pressure_984 t1_ja21s8p wrote

It didn’t “know” that, it’ll have scraped it from a webpage like Google search would have.

2

jk-9k t1_ja2f3he wrote

Yeah, doesn't this show that it doesn't know anything, when it missed the answer right under its nose? (Yay for idioms!)

2

HS_HowCan_That_BeQM t1_ja3jcf1 wrote

Hence my putting "knew" in quotes. Didn't want to be guilty of anthropomorphism.

Ah, but Google search didn't. I queried "German equivalent of down the drain" and it returned the literal translation "in Eimer", even in the first five or six results. Google Translate's English->German translation of "all that work, down the drain" returned "

While I realize that ChatGPT is just a dressed-up ELIZA-style psychoanalysis program (ELIZA dates back to the 1960s), nevertheless when it is correct, it looks very impressive. Emphasis on "when it is correct". I've also seen it return answers that are not correct.

Aside: if one ever watches old episodes of the medical drama "House", there are numerous misdiagnoses before the actual solution is reached. I assume that a medical professional draws on education plus past experience to come up with a diagnosis, and a real-life doctor can be wrong. Would an AI trained on actual cases (vs. using WebMD) be any less reliable than a human? Especially if it is fed corrections when it is wrong. Heuristics for the win, human and AI both.

1

Zestyclose-Ad-9420 t1_ja9ht57 wrote

We shouldn't anthropomorphise, but people trying to monopolise the word "know" is ridiculous... obviously it is not abstracting and visualising the individual concepts and stringing them together with language, but it knows that language object A and language object B are linked, and it learned that by scanning its environment: the data it was given for training.

Saying that this is so different from a person learning a word by, you guessed it, scanning the environment for links that you cannot use the word "know" is pedantic and, frankly, stupid. (Rough sketch of what I mean below.)
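
To make that concrete, here's a minimal sketch, assuming nothing more than a made-up toy corpus and plain co-occurrence counting, of what "learning that A and B are linked by scanning the data" can mean. This is only an illustration; ChatGPT uses a trained neural network, not explicit counts like this.

```python
from collections import Counter
from itertools import combinations

# Toy "training data", made up for illustration.
corpus = [
    "all that work down the drain",
    "die ganze arbeit ist im eimer",
    "down the drain roughly means im eimer in german",
]

# Count how often two tokens appear in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    tokens = set(sentence.lower().split())
    for a, b in combinations(sorted(tokens), 2):
        pair_counts[(a, b)] += 1

# The "knowledge" that 'drain' and 'eimer' are linked is just this count,
# learned purely by scanning the data.
print(pair_counts[("drain", "eimer")])  # -> 1
```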

1

HS_HowCan_That_BeQM t1_jaal3zc wrote

Not sure if there is an ad hominem attack in there. And if there is, I'm not sure whom it addresses.

1

Zestyclose-Ad-9420 t1_jaazxqx wrote

OK, but do you think that, semantically, an AI can know stuff or only "know" stuff?

1

HS_HowCan_That_BeQM t1_jacz8mj wrote

Wow, I'm going to take a pass on answering. I have typed, backspaced, and re-typed my thoughts three or four times and can't come up with a reasonable definition of know vs. "know".

Even the Turing Test of intelligent behavior didn't bail me out. I don't feel qualified to venture beyond opinion as to whether the one AI I have played with, ChatGPT, would truly pass the test. And whatever I have done, it is not a true Turing Test, as I am not comparing ChatGPT's answers to a human's answers and trying to discern the difference.

1