Azuladagio t1_jdxpj9e wrote
Reply to comment by acutelychronicpanic in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
But... Wouldn't a human scientist be doing the exact same thing?
acutelychronicpanic t1_jdxpxl9 wrote
Yes. Otherwise we'd each need to independently reinvent calculus.
MultiverseOfSanity t1_jdyz0ch wrote
Even further. We'd each need to start from the ground and reinvent the entire concept of numbers.
So yeah, if you can't take what's basically a caveman and have them independently solve general relativity with no help, then sorry, they're not conscious. They're just taking what was previously written.
Alex_2259 t1_jdz9vro wrote
And if you want to use a computer for your research, you guessed it bud, time to build a fabrication facility and re-invent the microprocessor.
Oh, you need the internet? You guessed it, ARPA 2.0 done by yourself.
SnipingNinja t1_jdzkv7n wrote
You want to cite someone else's research? Time to build humans from the ground up.
Alex_2259 t1_jdzz6j0 wrote
Oh wait, I think he wanted to also exist on planet Earth in our universe.
Gotta form the Big Bang, create something out of nothing and form your own universe.
Wow this is getting challenging!
featherless_fiend t1_jdygszy wrote
It's the art generators debate all over again.
The_Woman_of_Gont t1_jdyy87t wrote
Exactly, and that’s kind of the problem. The goalposts that some people set this stuff at are so high that you’re basically asking it to pull knowledge out of a vacuum, equivalent to performing the Forbidden Experiment in the hopes of the subject spontaneously developing their own language for no apparent reason (then declaring the child not sentient when it fails).
It’s pretty clear that at this moment we’re a decent ways away from proper AGI that can act on its own “volition” without very direct prompting, or discover scientific processes on its own. But I also don’t think anyone has adequately defined where the line actually is, in terms of when the input is sufficiently negligible that the novel or unexpected output counts as a sign of emergent intelligence rather than just a fluke of the programming.
Honestly, I don’t know that we can even agree on the answer to that question, especially if we’re bringing relevant papers like Bargh & Chartrand 1999 into the discussion. I suspect that as things develop, the moment people decide there’s a ghost in the machine will ultimately boil down to a gut-level “I know it when I see it” reaction rather than any particular hard figure. And some people will simply never reach that point, while there are probably a handful right now who already have.
Kaining t1_jdzg6if wrote
Looking at all those French Nobel prize winners/nominees we have who sank into pseudoscience and voodoo 40 years later, we could argue that human scientists don't understand science either >_>
Crackleflame35 t1_je0reg1 wrote
"If I have seen further it was because I stood on the shoulders of giants", or something like that, written by Newton
overlydelicioustea t1_jdzi8zh wrote
If you go deep enough down the rabbit hole of how these things work and arrive at a relevant output, the supposedly clear distinction between real and fake understanding starts to blur.