Submitted by ThisIsMyStonerAcount t3_z8di4c in MachineLearning
RandomTensor t1_iybbvzu wrote
Do you agree that there’s a 20% chance we will have conscious AI by 2032?
ThisIsMyStonerAcount OP t1_iyblfm8 wrote
So, obvious joke first: no, I don't agree, because that's a continuous random variable and you're asking for a point estimate. Badum tss.
No, but seriously: no one can remotely predict scientific advances 10 years into the future... I don't have a good notion of what consciousness for an AI would look like. The definition Chalmers gave today ("experiencing subjective awareness") is a bit too wishy-washy: how do you measure that? But broadly speaking, I don't think we'll have self-aware programs in 10 years.
canbooo t1_iyc5e42 wrote
Technically speaking, a 20% chance is not a point estimate, unless you assume that the distribution of the random variable itself is uncertain.
In that case, you accept being Bayesian, so give us your f'in prior! /s
ThisIsMyStonerAcount OP t1_iyd4jfg wrote
What I meant is that you're asking me whether P(X = x) = 0.2, where X is continuous, hence P(X = x) = 0.
canbooo t1_iydgqzt wrote
Oh, fair enough, my bad, I misunderstood what you meant. You are absolutely right for that case. For me the question is rather P(X ≥ x) = 0.2, since having more intelligence than the threshold implies (implicitly at least) having reached it, but this is already too many arguments for a joke (see the sketch below). Enjoy the conference!
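For anyone following along, a minimal illustrative sketch of the distinction, assuming (purely for illustration) that the score X is a standard normal and the threshold is an arbitrary percentile:

```python
# Minimal sketch (not from the thread): contrast a point "probability" with
# a tail probability for a continuous random variable. X is modeled as a
# standard normal purely for illustration.
from scipy.stats import norm

X = norm(loc=0.0, scale=1.0)
x = X.ppf(0.8)  # hypothetical threshold: the 80th percentile (~0.8416)

# For continuous X, P(X = x) = F(x) - F(x) = 0, so asking for a 20% chance
# of an exact value is ill-posed; only densities exist at single points.
print(X.cdf(x) - X.cdf(x))   # 0.0
print(X.pdf(x))              # a density value, not a probability

# A tail probability, on the other hand, can perfectly well be 0.2:
print(1.0 - X.cdf(x))        # ~0.2, i.e. P(X >= x)
```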
simplicialous t1_iye7mlu wrote
I think they're referring to a Bernoulli distribution being discrete, while the estimator that answers the dude's question would have to be with respect to a continuous distribution.
Ironically, I work with Continuous-Bernoulli latent-density VAEs, so I don't get it. Woosh.
canbooo t1_iyeabow wrote
Unsure about your assumption about the other assumptions, but LOLed at the end nonetheless. Just to completely confuse some redditors:
r/woosh
simplicialous t1_iyebm79 wrote
Just shootin' from the hip... I'm not sure why the answer to the guy's question would have to be continuous, though...
I do know that the Bernoulli distribution (which is used to generate probability estimates) is discrete, though (quick sketch below)...
🤷‍♀️
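FWIW, a minimal sketch of that discrete-vs-continuous contrast, assuming PyTorch is available; the parameter 0.2 is arbitrary and purely illustrative:

```python
# Bernoulli vs. Continuous Bernoulli (the latter as used in some VAE decoders).
import torch
from torch.distributions import Bernoulli, ContinuousBernoulli

p = torch.tensor(0.2)

# Bernoulli: discrete, supported on {0, 1}; it assigns actual probability mass.
b = Bernoulli(probs=p)
print(b.sample((5,)))                        # e.g. tensor([0., 0., 1., 0., 0.])
print(b.log_prob(torch.tensor(1.0)).exp())   # 0.2 -> P(X = 1)

# Continuous Bernoulli: supported on [0, 1], so it has a density, and any
# single point again has probability zero.
cb = ContinuousBernoulli(probs=p)
print(cb.sample((5,)))                       # values anywhere in [0, 1]
print(cb.log_prob(torch.tensor(0.5)).exp())  # a density value, not a probability
```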
waebal t1_iydz7lb wrote
Chalmers’ talk was at a very high level and geared towards an audience that is completely clueless about philosophy of mind, but he did talk quite a bit about what would constitute evidence for consciousness. He just doesn’t see strong evidence in existing systems.
Phoneaccount25732 t1_iybm23q wrote
To operationalize the question a bit and hopefully make it more interesting, let's consider whether by 2032 we will have AI models that are as conscious as fish, in whatever sense fish might be said to have consciousness.
ThisIsMyStonerAcount OP t1_iybqrrj wrote
How is that operationalizing it?
Phoneaccount25732 t1_iybrlqw wrote
It's easier to break down the subjective experience of a fish into mechanical subcomponents than it is to do so for higher intelligences.
waebal t1_iye0yb0 wrote
I agree. Chalmers points out that consciousness doesn't require human-level intelligence and may be a much lower bar, especially if consciousness exists as a spectrum or along multiple dimensions. If you're willing to admit the possibility that there's something that it's like to be a bat, or a dog, or a fish, then it seems plausible that there could be something that it is like to be a large language model with the ability to genuinely understand language beyond a surface level. Chalmers seems to think we are getting close to that point, even if, e.g., LaMDA isn't quite there yet.