GreenWeasel11 t1_iw1hgpn wrote
Reply to comment by RobleyTheron in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
What do you make of people like Ben Goertzel, who are obviously highly intelligent and explicitly working toward AGI, but who apparently haven't realized how hard it is, given that they still think it's at most a few decades away?
GreenWeasel11 t1_iw3zyht wrote
Reply to comment by SurroundSwimming3494 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Here's Goertzel in 2006; in particular, he said "But I think ten years—or something in this order of magnitude—could really be achievable. Ten years to a positive Singularity." I don't think he's become substantially more pessimistic since then, but I may have missed something he's said.
One also sees things like "Why I think strong general AI is coming soon" popping up from time to time (specifically, "I think there is little time left before someone builds AGI (median ~2030). Once upon a time, I didn't think this."), and while I don't know anything about that author's credentials, the fact that someone can assess the situation and come to that conclusion shows that, at the very least, if AI really is as hard as it seems to the pessimists to be, that fact hasn't been substantiated and publicized as well as it should have been by now. More likely, though, the people who do understand how hard AI is simply haven't articulated it convincingly when they publish on the subject; Dreyfus may have had the right idea, but he explained it in terms nontechnical enough that a computer scientist with a religious belief in AI's feasibility could read his book and come away unconvinced.