waebal t1_iydzvzt wrote
Reply to comment by chechgm in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
The “deals” are maybe $1 cheaper than the price listed on amazon.com.
waebal t1_iydz7lb wrote
Reply to comment by ThisIsMyStonerAcount in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
Chalmers’ talk was at a very high level and geared towards an audience that is completely clueless about philosophy of mind, but he did talk quite a bit about what would constitute evidence for consciousness. He just doesn’t see strong evidence in existing systems.
waebal t1_iye0yb0 wrote
Reply to comment by RandomTensor in [D] I'm at NeurIPS, AMA by ThisIsMyStonerAcount
I agree. Chalmers points out that consciousness doesn’t require human-level intelligence and may be a much lower bar, especially if consciousness exists on a spectrum or along multiple dimensions. If you’re willing to admit the possibility that there is something it is like to be a bat, or a dog, or a fish, then it seems plausible that there could be something it is like to be a large language model that genuinely understands language beyond a surface level. Chalmers seems to think we are getting close to that point, even if e.g. LaMDA isn’t quite there yet.