In a previous post, I discussed the compression conjecture: the idea that consciousness is indistinguishable from compression. This post discusses the implications of this idea for artificial intelligence. Most of the ideas in the article are from this paper.
Why does consciousness feel like anything?
As discussed in the last post, under the compression conjecture, consciousness evolved to help living things understand the world, their past, present and future selves and help them plan accordingly.
We understand behavior through the hypothesis of a centralized self, which may or may not exist. This is an assumption we hold about ourselves, and it is manifested through the compression we carry out. Feeling, therefore, is another layer of abstraction:
Understanding the hypothesis that one is feeling something and the actual experience of feeling are the same thing. Amy’s feeling therefore exists relative to the assumption of her own existence, an assumption which the system itself is capable of making.
Artificial consciousness
Nothing discussed so far precludes artificial consciousness from arising in inanimate objects:
Any system that is arranged and updated in a way which allows for the compression of information will support consciousness, be it implemented in windmills, beer cans or toilet rolls.
Consciousness is not present in a single unit, but in a system. For instance, consciousness is not a property of any particular brain cell, but of the system of brain cells. The system has to be aware of a self, and use that theory of selfhood to guide its processing.
The consciousness of the system therefore resides in its capacity to understand (i.e. compress) what it senses, thereby identifying itself as an entity separate to its environment.
Creating consciousness
Since the compression conjecture defines consciousness as compression, two systems that perform the same compression are identical. We can therefore represent a consciousness as a program that receives some input and performs some compression on it.
To test whether two systems are equal, we could devise a Turing machine that checks whether the two systems have the same input-output relationship. However, Rice's Theorem states that any non-trivial semantic property of the language recognized by a Turing machine is undecidable. Which is to say, we cannot decide whether two consciousness systems are identical:
Since no system can recognize an equivalent system from within itself, developing a complete theory of consciousness is not possible: the more precisely a theory attempts to define the conscious structure of the brain, the less feasible it will be to validate it.
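The undecidability argument can be made concrete with a toy sketch. Treat each "system" as a black-box program (here, hypothetical stand-ins built from `zlib` at different compression levels) and compare their outputs on sampled inputs: a mismatch falsifies equivalence, but agreement on any finite sample is only evidence, never proof, since Rice's theorem rules out a general decision procedure.

```python
import zlib

# Two hypothetical "systems", each modelled as a black-box program
# that compresses its input. These are illustrative stand-ins only.
def system_a(data: bytes) -> bytes:
    return zlib.compress(data, level=9)

def system_b(data: bytes) -> bytes:
    return zlib.compress(data, level=1)

def falsify_equivalence(f, g, samples):
    """Return a counterexample input if one is found, else None.

    Agreement on every sample never proves the two programs are
    equivalent on all inputs: by Rice's theorem, no procedure can
    decide equivalence of arbitrary programs.
    """
    for x in samples:
        if f(x) != g(x):
            return x
    return None

samples = [b"", b"a", b"abab" * 50, bytes(range(256))]

# Comparing a system with itself finds no counterexample...
print(falsify_equivalence(system_a, system_a, samples))
# ...while the two distinct systems are told apart by some sample.
print(falsify_equivalence(system_a, system_b, samples) is not None)
```

The best we can ever do from the outside is this kind of sampling: disproof is possible, proof is not.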
Turing test
The Turing test requires a system to trick a person into thinking it is human. But the test puts no restriction on the system itself. For instance, a sufficiently large lookup table may be able to convince many.
The breakthroughs of NLP models like GPT-3 are primarily due to the growth in size of the network. The depth of compression may not have improved much, so much as the ability to store information.
A more intelligent Turing test should constrain the system in ways that ensure genuine depth of compression and understanding:
If a computer system is as intelligent as a human, then it should be capable of compressing language to the same extent as a human.
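One crude way to operationalize this criterion is to measure how far a system can compress a text corpus. As a minimal sketch (using `zlib` as a weak, general-purpose baseline; the function name and sample text are my own illustration, not from the paper), a compression ratio captures surface redundancy only; under the conjecture, a system with human-level understanding of language should achieve a far lower ratio on the same text than such a generic compressor.

```python
import zlib

def compression_ratio(text: str, level: int = 9) -> float:
    """Compressed size / original size; lower means deeper compression."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level)) / len(raw)

# Highly repetitive text compresses well even for a generic compressor;
# a system that truly "understands" language would be expected to
# exploit semantic regularities that zlib cannot see.
sample = "the cat sat on the mat. " * 40
print(round(compression_ratio(sample), 3))
```

The point of such a test is that it cannot be gamed by sheer storage: a lookup table does not shrink the description of the data, whereas understanding does.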
Final thoughts
To me, the most convincing aspect of the compression hypothesis is the idea that consciousness is something that evolved to aid our decision making and survival. Comprehending the world and acting accordingly without a sense of self would be unfathomably hard, at least within the constraints of animal cognition.
The idea that there is no self is also echoed in Buddhism and some research. The book Why Buddhism is True goes into more details about the idea of self as understood by Buddhist doctrine and the science behind it.
We can look at human extremes to test whether the self can be sufficiently separated from our experience.
Consider The Burning Monk (warning: images are very disturbing)
In June of 1963, Vietnamese Mahayana Buddhist monk Thích Quang Duc burned himself to death at a busy intersection in Saigon…
As the marchers formed a circle around him, Duc calmly sat down in the traditional Buddhist meditative lotus position on the cushion. A colleague emptied the contents of the petrol container over Duc's head. Duc rotated a string of wooden prayer beads and recited the words Nam mô A di đà Phật (“homage to Amitābha Buddha“) before striking a match and dropping it on himself. Flames consumed his robes and flesh, and black oily smoke emanated from his burning body.
The remarkable thing about the event was that the monk sat in meditation the entire time. Through years of practice and meditation, he was able to internalize the idea of selflessness such that he was seemingly unaffected by the flames. This would be difficult to imagine if there were some objective sense of self fundamental to our existence.
Overall, I found the compression hypothesis fascinating. I would recommend reading the entire paper as it goes into more detail about what was discussed in this post.