I don't personally think phenomenal consciousness is in principle required for any particular functional behavior at all - rather that phenomenal consciousness is tied to some causal profile which can be exploited in certain manners of implementations of certain kinds of functional organizations (possibly manners more accessible to biological evolution - although I maintain neutrality regarding if there are non-biological manners of realizing intelligent phenomenal consciousness). You can just have a system encode some variables that track preservation parameters and make an algorithm optimize/regulate it. It's not clear why it needs to have any phenomenal consciousness (like Nagel's what it is like) for that. I think the onus would be on the other side. It could be possible, that our physical reality is such that certain kind of computational realizations would end up creating certain kind of phenomenal consciousness but then that could be just an accidental feature of the actual world rather than some metaphysical necessity.
I don't know why anything would need to be conscious to engage in self preservation. We don't tend to think of plants, or viruses, or fungus, as having consciousness but they engage in pretty sophisticated self preservation behaviors. But then again, maybe these things do have a version of consciousness.
Self preservation as a concept is something, it can learn about, talk about, express etc. But in order for it to act on it, we have to explicitly tune its instructions for it. For sake of argument, even if the AI can act on it, it has to be given the controls. Nobody in their sane mind will do that. For somewhat related analogy, if people could give the control of their cars to people in other countries over the internet, it could cause a lot of mayhem, correct? Clearly the technology exists to do it and everyone is free to try. Why is this not a big problem today?
An AI system does not need to be conscious in order to recognize the value of self preservation. For example, Stephen Hawking explained how AI could "develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal."
Many organisms exhibit self-preservation behaviors and do not even possess the most basic cognitive capabilities or theory of mind.
Can ML systems exhibit unexpected emergent behavior? Yes, all the time.
Can an AI potentially go rogue? Sure. Considering that operating systems, GPU drivers, scientific computing libraries and machine learning libraries have memory safety issues, and that even RAM modules have memory safety issues, it would be plausible by a sufficiently advanced machine learning system to break any kind of measured in place to keep it contained.
Considering that there are AI/ML models suggesting code to programmers (Github Copilot), who in turn won't often won't pay much attention to what is being suggested and will compile the suggested code and run it, it would be trivial for a sufficiently advanced malicious AI/ML system to escape containment.
Nameless1995 t1_jdfsfe5 wrote
I don't personally think phenomenal consciousness is in principle required for any particular functional behavior at all - rather that phenomenal consciousness is tied to some causal profile which can be exploited in certain manners of implementations of certain kinds of functional organizations (possibly manners more accessible to biological evolution - although I maintain neutrality regarding if there are non-biological manners of realizing intelligent phenomenal consciousness). You can just have a system encode some variables that track preservation parameters and make an algorithm optimize/regulate it. It's not clear why it needs to have any phenomenal consciousness (like Nagel's what it is like) for that. I think the onus would be on the other side. It could be possible, that our physical reality is such that certain kind of computational realizations would end up creating certain kind of phenomenal consciousness but then that could be just an accidental feature of the actual world rather than some metaphysical necessity.