Nervous_Recursion t1_j6nhonk wrote
Reply to comment by Olive2887 in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
I don't agree with the article (in either its form or its content), so this comment is not meant to defend it; I'll also say that it doesn't make much sense and is disorganized.
But your comment is also incorrect. "No relationship whatsoever" is a strong claim, and nothing has been shown one way or the other. There are valid lines of inquiry that try to understand consciousness in the light of control theory / cybernetics, which is all about complexity.
While IIT is far too naive and has already been shown incorrect, I think there is a nugget of sense to take from it: partitioning the network and measuring the information entropy in each part. What it lacks, in my opinion, is that it isn't enough for both partitions to have some degree of Shannon entropy; they should also exhibit tangled hierarchies[1]. I think consciousness is one part of the network building symbolic representations of the states of the other part, while at the same time being transformed in its own structure (which seems to be how memory works). Having an interpreter running, itself modified by its input while also issuing orders, is a tangled hierarchy.
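To make that a bit more concrete, here is a minimal Python sketch of my own (all names and dynamics are invented for illustration; nothing here is taken from IIT or from Hofstadter): two coupled sub-systems where B builds a coarse symbolic summary of A's state and feeds an "order" back into A, while A's activity in turn rewrites B's own mapping. The empirical Shannon entropy of each partition's visited states is then estimated.

```python
# Toy illustration (assumed/invented, not a real model of consciousness):
# partition A is a simple discrete state, partition B is a symbolic map of A.
# B observes A, issues orders back to A, and is itself rewritten by A.
import random
from collections import Counter
from math import log2

def shannon_entropy(samples):
    """Empirical Shannon entropy (in bits) of a sequence of discrete states."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

random.seed(0)

a_state = 0                               # "lower" partition: a discrete state
b_rules = {s: s % 3 for s in range(8)}    # "upper" partition: symbolic map of A
a_history, b_history = [], []

for _ in range(10_000):
    # B observes A and produces a symbolic label (its representation of A).
    symbol = b_rules[a_state]
    # B issues an "order": the label biases A's next transition.
    a_state = (a_state + symbol + random.randint(0, 1)) % 8
    # A's activity in turn rewrites part of B's structure (memory-like change).
    b_rules[a_state] = (b_rules[a_state] + 1) % 3
    a_history.append(a_state)
    b_history.append(tuple(sorted(b_rules.items())))

print("entropy of partition A:", round(shannon_entropy(a_history), 3), "bits")
print("entropy of partition B:", round(shannon_entropy(b_history), 3), "bits")
```

Both partitions end up with non-trivial entropy, and neither sits cleanly "above" the other in the control loop, which is the flavour of tangled hierarchy I mean, nothing more.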
Nothing here is proven, of course; it is all personal opinion. But I consider it a much better direction than some other current theories, and a more realistic description of how such a process could be organized. And in that sense, while the direction of causality is definitely not settled, it is entirely possible either that this level of complexity is necessary for complex behaviour, or that complex behaviour mechanically creates this organization.
Of course, designing simple machines for a complex purpose is not the point. But designing simple computations to generate complex behaviour may well be tightly coupled with how consciousness evolved in humans (and other thinking animals).
[1]: While this paper argues against the idea, it does not contradict it. Nenu's point is that Hofstadter didn't prove anything, which is correct; but that doesn't mean the idea has been shown incorrect, or even made less likely. The paper is still useful for contextualizing the idea and trying to formalize it.