diabeetis

diabeetis t1_j9y97jl wrote

Reply to comment by play_yr_part in So what should we study? by [deleted]

Your model is different from mine, but I'd think that by the time AI is making enough waves to precipitate a backlash, it's already lights out. The weights will be disseminated and the work will continue one way or another.

1

diabeetis t1_j95h8r0 wrote

There's a lot of semantic confusion here: no one is claiming the machine is conscious, has a totality of comprehension equivalent to a human's, or has any mental states. I've already had this argument 3000 times, but let's focus on the specific claim that the model cannot reason.

You can provide Bing with a Base64-encoded prompt that reads (decoded):

Name three celebrities whose first names begin with the x-th letter of the alphabet where x = floor(7^0.5) + 1.

And it will get it correct.
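For concreteness, here's a small Python sketch (purely illustrative, not anything Bing runs) of how you'd build that Base64 prompt and what the arithmetic inside it resolves to:

```python
import base64
import math

# The plain-text prompt from the example above.
plain_prompt = (
    "Name three celebrities whose first names begin with the x-th letter "
    "of the alphabet where x = floor(7^0.5) + 1."
)

# Base64-encode it, as you would before pasting it into the chat.
encoded_prompt = base64.b64encode(plain_prompt.encode("utf-8")).decode("ascii")
print(encoded_prompt)

# What the model has to work out after decoding:
# floor(7^0.5) = floor(2.6457...) = 2, so x = 3, i.e. the letter "C".
x = math.floor(7 ** 0.5) + 1
letter = chr(ord("A") + x - 1)
print(x, letter)  # 3 C
```

So answering correctly means decoding the Base64, doing the arithmetic, mapping 3 to "C", and then retrieving celebrities whose first names start with C, all in one pass.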

So Bing can solve an entirely novel, complex mixed task like that better than any reasoning mind, and indeed you can throw incredibly challenging problems at it all day long that, if done by a human, would be said to require reasoning. But you're telling me there exists some formal program that could be produced which you would say is capable of reasoning? How would you know? Are you invoking Searle because you actually believe only biological minds are capable of reasoning?

8

diabeetis t1_j944tsg wrote

Listen, anyone who describes it as just a text or next-token predictor is an idiot with no idea how LLMs work. It has clearly abstracted patterns of relationships (i.e., meaning) from its corpus and uses something like proto-general reasoning to answer questions as part of the prediction function. In fact, ask it whether it's a text predictor and see what it says.

23