hackinthebochs t1_ja6tpln wrote
Reply to [R] Large language models generate functional protein sequences across diverse families by MysteryInc152
At what point do we stop calling it a language model?
hackinthebochs OP t1_j0aj2im wrote
Reply to comment by AgentSmith26 in AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers by hackinthebochs
I think that's a good way to think about it. If we have a reasonably accurate understanding of the work remaining, then the credence reflects his expectation of how fast progress will proceed. The other relevant dimension is how accurate that understanding of the remaining work actually is. For example, is artificial sentience even possible at all? Is it a few technological innovations away, or very many?
hackinthebochs OP t1_j0aeh1d wrote
Reply to comment by AgentSmith26 in AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers by hackinthebochs
Probability in this context usually means credence, that is, subjective probability. It's a way to quantify your expectation of an event when you can't do a frequency analysis. So Chalmers's claim should be understood as "I give 20% credence to AI sentience within 10 years".
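To make the number concrete, here's a minimal sketch of the standard betting interpretation of credence (the 0.20 figure is Chalmers's; the odds conversion is just the usual definition of fair odds):

```python
# Credence as fair betting odds (standard reading of subjective
# probability; purely illustrative).
credence = 0.20  # Chalmers's credence in AI sentience within 10 years

# A credence p corresponds to fair odds of (1 - p) : p against the event.
odds_against = (1 - credence) / credence
print(f"Fair odds: {odds_against:g} to 1 against")  # 4 to 1 against
```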
hackinthebochs OP t1_izpq7vt wrote
Reply to comment by JHogg11 in AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers by hackinthebochs
The question of how to explain consciousness is importantly different from the question of whether an AI can be or is conscious. An explanation of consciousness will identify features of systems that determine their level of consciousness. The hard problem of consciousness places a limit on the kinds of explanations we can expect from physical dynamics alone. But some theories of consciousness allow that physical or computational systems intrinsically carry the basic properties needed to support consciousness. For example, panpsychism says that the fundamental properties that support consciousness are found in all matter, including various mechanical and computational devices. So there is no immediate contradiction in being anti-physicalist while also believing that certain computational systems will be conscious.
hackinthebochs OP t1_izp7r6g wrote
Reply to comment by Opus-the-Penguin in AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers by hackinthebochs
We can always imagine the behavioral/functional phenomena occurring without any corresponding phenomenal consciousness, so this question can never be settled by experiment. But we can develop a theory of consciousness and observe how closely the system in question exhibits the features our theory associates with consciousness. Barring any specific theory, we can ask in what ways the system is similar to and different from systems we know to be conscious, and whether those similarities or differences bear on the credibility of attributing consciousness to the system.
Theory is all well and good, but in the end it will have little practical significance. People are quick to attribute intention or minds to inanimate objects or random occurrences. Eventually the behavior of these systems will be so similar to human behavior that most people's sentience-attribution machinery will fire, and we'll be forced to confront all the moral questions we have been putting off.
hackinthebochs t1_ja6uapk wrote
Reply to comment by Facts_About_Cats in Large language models generate functional protein sequences across diverse families by MysteryInc152
Any structured data is a language in a broad sense: tokens identify structural units, and a grammar determines how those units interrelate. The grammar can be arbitrarily complex, and so can encode deep relationships among data in any domain. This is why "language models" are so powerful in such a vast array of contexts.
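To make the analogy concrete, here's a minimal sketch of treating protein sequences as a language, using character-level tokenization over the 20 standard amino acids (the vocabulary and example sequence are illustrative, not the paper's actual setup):

```python
# Minimal sketch: treating protein sequences as a "language".
# Assumes character-level tokenization of the 20 standard amino acids;
# the vocabulary and sequence below are illustrative, not from the paper.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
vocab = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence: str) -> list[int]:
    """Map each residue (structural unit) to an integer token."""
    return [vocab[aa] for aa in sequence]

# Once encoded this way, the same next-token objective used for text
# applies unchanged: the model learns the "grammar" of which residues
# tend to follow which, i.e. the statistical structure of the domain.
print(tokenize("MKTAYIAKQR"))
```

Nothing in the training objective cares that the tokens are residues rather than words, which is why the same architecture transfers across domains.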