Consciousness as Compression: Part 1
What if consciousness is indistinguishable from compression?
Note: most of the content below comes from this paper discussing the compression conjecture, which asserts that no meaningful distinction can be drawn between compression and consciousness.
One fundamental characteristic of the conscious brain is its ability to compress information and discern patterns. It's essential for survival, specifically for the ability to make predictions:
Algorithmic theory provides a clear answer as to why organisms should seek to compress observational data. Specifically, Solomonoff’s (1964) theory of inductive inference reveals that compression is a necessary component of prediction. The theory provides a universal measure of the probability of an object by taking into account all of the ways in which it might have been produced. This universal a priori probability can then be incorporated into Bayes’ rule for inductive inference in order to make optimal predictions based on a set of prior observations.
In general, the better the compression, the stronger the predictive qualities.
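For the curious, the underlying math can be sketched as follows (the notation is mine, not taken from the post or the paper): Solomonoff's universal prior weights every program that could have produced the observations by the length of that program, so shorter descriptions, i.e. better compressions, dominate the prediction.

```latex
% Universal a priori probability of a string x: weight every program p whose
% output on a universal prefix machine U begins with x by 2^{-length(p)}.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Plugging M into Bayes' rule gives the predicted probability that the next
% symbol is a, given the observations x seen so far.
M(a \mid x) \;=\; \frac{M(xa)}{M(x)}
```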
Consider the sequence:
4, 6, 8, 12, 14, 18, 20, 24
Two patterns we can infer are:
1. Start at 4 and add 2, except if the digits of the previous number sum to 2, 5 or 8, in which case add 4. The next number will be 26.
2. Go through all odd prime numbers and add 1. The next number will be 30.
It stands to reason that the second explanation is better and likely more predictive due to its simplicity (better compression):
Solomonoff’s theory of inductive inference reveals that the more a set of observations can be compressed, the more accurately subsequent events can be predicted... According to Solomonoff’s theory, the latter must be the better prediction, because it involves a fewer number of assumptions: the shorter the length of the description, the more likely it is to be correct.
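To make the two candidate rules concrete, here is a minimal Python sketch (my own illustration, not from the paper) that generates both continuations:

```python
def rule_one(n_terms):
    """Start at 4; add 2, except add 4 when the previous number's digits sum to 2, 5, or 8."""
    seq = [4]
    while len(seq) < n_terms:
        prev = seq[-1]
        digit_sum = sum(int(d) for d in str(prev))
        seq.append(prev + (4 if digit_sum in (2, 5, 8) else 2))
    return seq

def rule_two(n_terms):
    """Take successive odd primes and add 1 to each."""
    seq, candidate = [], 3
    while len(seq) < n_terms:
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            seq.append(candidate + 1)
        candidate += 2
    return seq

print(rule_one(9))  # [4, 6, 8, 12, 14, 18, 20, 24, 26]
print(rule_two(9))  # [4, 6, 8, 12, 14, 18, 20, 24, 30]
```

Both rules reproduce the eight observed terms, but they diverge at the ninth; on Solomonoff's account, the shorter description is the one to bet on.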
Since compression helps us make predictions about the future and thereby aids our survival, it's very useful in life.
The compression conjecture takes the leap of suggesting that compression is not only essential to consciousness but indistinguishable from it.
Evolution of Consciousness
The more complex an organism, the more coordination is required to ensure its survival and opportunity to reproduce. It would not be productive to have consciousness imbued in each limb; rather, it is beneficial to have a central mechanism that perceives all stimuli and coordinates actions.
Coordination between past and future selves is equally important. We want to be able to learn from past states and anticipate future ones:
The utility of memory can again be explained in terms of enhancing algorithmic induction. Memory allows us to make greater sense of the world by enhancing our ability to carry out compression. Incoming sensory data are compressed in parallel with stored historical data, allowing redundancy to be identified more efficiently and, consequently, enhancing predictive accuracy. Thus, the form of understanding that the brain produces unites not only distributed sensory organs but also past and current states of an organism. The compression conjecture proposes that the experience of this unitary form of understanding is what we mean when we use the term ‘consciousness’.
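An ordinary compressor makes the point about memory tangible. In the sketch below, zlib merely stands in for whatever the brain does, and the strings are invented for illustration: compressing incoming data together with stored history typically costs fewer additional bytes than compressing it in isolation, because redundancy between past and present gets exploited.

```python
import zlib

# Stored "historical" observations and a newly arriving, similar observation.
history = b"the quick brown fox jumps over the lazy dog. " * 50
incoming = b"the quick brown fox jumps over the lazy dog again."

# Cost of compressing the new data by itself.
alone = len(zlib.compress(incoming))

# Marginal cost of the new data when compressed alongside the stored history.
with_history = len(zlib.compress(history + incoming)) - len(zlib.compress(history))

print(f"incoming compressed alone:          {alone} bytes")
print(f"marginal cost given stored history: {with_history} bytes")
```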
Self-Awareness
Nothing discussed so far rules out ascribing consciousness to something like a standard computer compression algorithm. What distinguishes consciousness is its ability to reflect on itself.
The compression conjecture posits that a sense of self evolved as a mechanism to conceptualize our past and future selves, as well as those around us. This helps us make predictions, plan and ensure our survival:
The human brain is a self-representational structure which seeks to understand its own behavior. For example, people model their own selves in order to more accurately predict how they are going to feel and react in different situations. They build up internal models about who they think they are and use these models to inform their decisions. In addition, the human brain compresses the observed behavior of other organisms. When we watch other individuals, we realize that there is a great deal of redundancy in their activity: rather than simply cataloguing and memorizing every action they perform, we can instead posit the more succinct hypothesis of a concise ‘self’ which motivates these actions. By representing this self we can then make accurate predictions as to how the people around us will behave. The idea that the actions of an organism are controlled by a singular self is merely a theoretical model which eliminates redundancy in the observed behavior of that organism. People apply this same process to themselves: what you consider to be the essence of you is simply a model which compresses your observations of your own past behavior.
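Here is a toy illustration of a "self" as a compressing model (my own example, not from the paper): rather than memorizing every action someone takes, we store a one-line hypothesis about them plus the few cases where it fails.

```python
# A week of observed actions: (day, hour, what they actually drank).
observations = [("mon", 9, "coffee"), ("mon", 15, "tea"),
                ("tue", 8, "coffee"), ("tue", 16, "tea"),
                ("wed", 10, "coffee"), ("wed", 14, "tea"),
                ("thu", 9, "coffee"), ("thu", 15, "coffee"),  # one exception
                ("fri", 8, "coffee"), ("fri", 17, "tea")]

# The "self" we posit: a concise model of this person's behavior.
def model(day, hour):
    return "coffee" if hour < 12 else "tea"

# Encoding under the model: we only need to remember the exceptions.
exceptions = [(d, h, a) for d, h, a in observations if model(d, h) != a]

print(len(observations), "raw observations vs", len(exceptions), "exception(s) to store")
print("prediction for sat 9am:", model("sat", 9))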
Experience similarly helps us compress information. Instead of seeing a smooth green spherical object, we perceive and encode an apple. This is our subjective experience:
When people talk about their subjective experience they are referring to the particular form of compression that their brain provides. The reason that these qualitative descriptions differ from objective scientific descriptions is because the subjective experience of a stimulus is dependent on how it is processed.
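A rough sketch of the apple example (the codebook and feature names are invented for illustration): once a concept has been learned and is shared, a stimulus can be encoded as a short label instead of a list of low-level features.

```python
# Low-level description of the stimulus.
features = ["smooth", "green", "spherical", "fist-sized", "waxy", "stemmed"]

# A previously learned codebook of concepts.
codebook = {("smooth", "green", "spherical", "fist-sized", "waxy", "stemmed"): "apple"}

raw_encoding = ", ".join(features)            # spell out every feature
concept_encoding = codebook[tuple(features)]  # or just name the concept

print(len(raw_encoding), "characters vs", len(concept_encoding))
```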
But it’s not enough to just receive the data. We could read a vivid description of an experience, and although it may move us, it’s not comparable to actually experiencing it ourselves. That’s because the compression itself is the experience (consciousness):
If a system is incapable of compressing the data, then it cannot ‘understand’ the experience which is contained within. Experience is dependent on the system which is doing the experiencing, as opposed to being intrinsic to a stimulus. Because reading a description of compression will not necessarily cause the same compression to occur in your own brain, reading about the experience of red will not make you experience red.
Artificial Consciousness
The paper goes on to discuss the implications of the compression conjecture on artificial intelligence and measuring consciousness. I’ll discuss those topics in my next post as well as give some thoughts about the implications of the theory. But if you can’t wait, read the paper. It’s fairly short and easily digestible.