
acutelychronicpanic t1_jdxk8wn wrote

I'm calling it now. When we see an AI make a significant scientific discovery for the first time, somebody is going to comment that "AI doesn't understand science. It's just applying reasoning it read from human-written papers."

57

Azuladagio t1_jdxpj9e wrote

But... Wouldn't a human scientist be doing the exact same thing?

35

acutelychronicpanic t1_jdxpxl9 wrote

Yes. Otherwise we'd each need to independently reinvent calculus.

47

MultiverseOfSanity t1_jdyz0ch wrote

Even further. We'd each need to start from the ground up and reinvent the entire concept of numbers.

So yeah, if you can't take what's basically a caveman and have them independently solve general relativity with no help, then sorry, they're not conscious. They're just taking what was previously written.

16

Alex_2259 t1_jdz9vro wrote

And if you want to use a computer for your research, you guessed it bud, time to build a fabrication facility and re-invent the microprocessor.

Oh, you need the internet? You guessed it, ARPA 2.0 done by yourself.

3

SnipingNinja t1_jdzkv7n wrote

You want to cite someone else's research? Time to build humans from the ground up.

3

Alex_2259 t1_jdzz6j0 wrote

Oh wait, I think he wanted to also exist on planet Earth in our universe.

Gotta set off the Big Bang, create something out of nothing, and form your own universe.

Wow this is getting challenging!

3

The_Woman_of_Gont t1_jdyy87t wrote

Exactly, and that's kind of the problem. The goalposts some people set are so high that you're basically asking it to pull knowledge out of a vacuum. It's the equivalent of performing the Forbidden Experiment in the hopes of the subject spontaneously developing their own language for no apparent reason (then declaring the child not sentient when it fails).

It's pretty clear that at this moment we're a decent ways away from proper AGI that can act on its own "volition" without very direct prompting, or discover scientific processes on its own. But I also don't think anyone has adequately defined where the line actually is: at what point is the input sufficiently negligible that novel or unexpected output counts as a sign of emergent intelligence rather than just a fluke of the programming?

Honestly, I don't know that we can even agree on the answer to that question, especially if we're bringing relevant papers like Bargh & Chartrand 1999 into the discussion. I suspect that as things develop, the moment people decide there's a ghost in the machine will ultimately boil down to a gut-level "I know it when I see it" reaction rather than any particular hard figure. And some people will simply never reach that point, while there are probably a handful right now who already have.

6

Kaining t1_jdzg6if wrote

Looking at all those French Nobel laureates and nominees who have sunk into pseudoscience and voodoo 40 years later, we could argue that human scientists don't understand science either >_>

1

Crackleflame35 t1_je0reg1 wrote

"If I have seen further it was because I stood on the shoulders of giants", or something like that, written by Newton

1

overlydelicioustea t1_jdzi8zh wrote

If you go deep enough down the rabbit hole of how these things work and arrive at a relevant output, the supposedly clear distinction between real and fake understanding blurs away.

0

the_new_standard t1_jdyigxl wrote

"You don't understand, that completely original invention was just part of it's training dataset."

12

AnOnlineHandle t1_jdyx2fa wrote

It's easy to show, with a single neuron, that an AI can do more than it was trained on. Just build an AI that converts Metric to Imperial, a single conversion, calibrating that one multiplier neuron from a few example measurements. It will then give correct outputs far beyond its training data, because it has learned the underlying logic. Something like the sketch below.
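A minimal sketch of that idea in Python, assuming kilometres-to-miles as the conversion; the training pairs, learning rate, and iteration count are illustrative choices, not anything from the comment above:

```python
# One "neuron": a single multiplier w, no bias, so y = w * x.
# Calibrate it from three example measurements (km -> miles).
train_km    = [1.0, 5.0, 10.0]        # training inputs
train_miles = [0.621, 3.107, 6.214]   # corresponding targets

w = 0.0     # the single weight
lr = 0.01   # learning rate

for _ in range(1000):
    # Mean-squared-error gradient with respect to w
    grad = sum(2 * (w * x - y) * x
               for x, y in zip(train_km, train_miles)) / len(train_km)
    w -= lr * grad

print(f"learned w = {w:.4f}")              # ~0.6214, the km-to-miles factor
print(f"1000 km -> {w * 1000:.1f} miles")  # far outside the training range
```

The model was only ever shown values between 1 and 10 km, yet it converts 1000 km correctly, because what it fitted was the underlying rule rather than the examples themselves.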

1

the_new_standard t1_jdyz1s3 wrote

So here's the thing. I don't really care about what it's technically classified as.

For me, I categorize AI by what end result it can produce. And at the moment it can produce writing, analysis, images, and code. If any of that were coming from a human, we wouldn't need to have an argument about training data. It doesn't matter how it does what it does. What matters is the end result.

0

FroHawk98 t1_jdz8j0f wrote

I mean, it sort of has; all the protein-folding stuff was practically discovered overnight.

4

acutelychronicpanic t1_jdzev0i wrote

That's a good point. Maybe after we're all just sitting around idling our days away, we can spend our time discussing whether or not AI really understands the civilization it's running for us.

2

imnos t1_jdzae7m wrote

"It's just predicting the next word."

4

Saerain t1_jdzpgji wrote

Written across the stars in luminescent computronium, "Actually we don't even know what intelligence is."

4

Tememachine t1_jdyqgeq wrote

Radiology AIs discovered some weird shit, IIRC. They suppressed the news because it was a bit "racist".

2

acutelychronicpanic t1_jdyr3mv wrote

Anything about what it discovered? Or is it just that it can predict race?

4

Tememachine t1_jdyrjm5 wrote

The way it predicts race is unclear. But once we figure out how, we'll know what difference it actually discovered.

1

audioen t1_jdz3nxt wrote

This is basically a fluff piece inserted into the conversation that worries about machine bias: the model's ability to figure out race by proxy, and possibly use that knowledge to learn biases assumed to be present in its training data.

To be honest, the network can always be run in reverse. If it lights up a "black" label, or whatever, you can ask it to project back to the regions of the image that contributed most to that label. That is the part it is looking at, in some very real sense. I guess they did that and it lit up a big part of the input, so it is something like a diffuse property that is nevertheless systematic enough for the AI to pick up. There's a rough sketch of the idea below.

Or maybe they didn't know they could do this and just stabbed around randomly in the dark. Who knows. As I said, this is a fluff piece that doesn't tell you anything about what these researchers were actually doing, except some image-oversaturation tricks, and when those didn't make a dent in the machine's ability to identify race, they were apparently flummoxed.
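A minimal sketch of that "run it in reverse" idea as plain gradient saliency in Python with PyTorch. The tiny model, random input, and class index here are illustrative assumptions, not the actual radiology setup:

```python
import torch
import torch.nn as nn

# Stand-in classifier; a real study would use a trained CNN.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 10),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # fake "X-ray"
target_class = 3                                      # hypothetical label

logits = model(image)
logits[0, target_class].backward()  # backpropagate from the label's score

# Pixels with the largest gradient magnitude contributed most to the
# label; plotting this map shows where the model is "looking".
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

If the saliency map lights up a broad, diffuse region rather than one localized feature, that matches the "systematic but diffuse property" interpretation above.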

3

Exel0n t1_jdzwq6n wrote

There must be differences in bone structure.

If different races have clear differences in skin structure, fat deposition, etc., it must be in the bones too.

The different populations have been separated for something like 10,000 years, some even 50,000, which is enough to produce differences in bone structure/density at an overall level.

2

Bierculles t1_jdzm74e wrote

How are genetic patterns from different ethnicities racist?

2

Tememachine t1_jdzyq4u wrote

How can chest X-rays tell the AI someone's genetics?

1

Bierculles t1_je04wb4 wrote

You can see what ethnicity someone is by looking at their skull, so why not organs, bones, or whatever else you can see in an X-ray?

3

Cunninghams_right t1_je2muz5 wrote

"it's just standing on the shoulders of giants in the scientific field, not original!"

2

Durabys t1_jdzt6eg wrote

Already happened with DABUS AI... and they proceeded to move the goalposts.

1