In 30 years we will have an alternative to death: being a ghost in a machine.
In a recent article, Christof Koch and Giulio Tononi [1] argue that within 30 years we will be able to upload our minds to a computer. In fact, we can already start building our “mindfiles” using services like Lifenaut.
Despite the optimism of this claim, Koch and Tononi remind us that we still don’t know what consciousness is. They believe consciousness will eventually be created artificially; however, it might not be the same sort of consciousness as ours.
The first assumption underlying the argument is that consciousness is produced in the brain as part of the natural world, and is therefore governed by the laws of physics, chemistry, and biology. Activity in the corticothalamic system seems to be a key factor in the production of consciousness. Nevertheless, other functions and brain areas (even those characteristic of human beings) are not necessary for consciousness to be present. Even interaction with the environment may not be necessary for consciousness to exist (provided that such interaction has occurred before). In other words, we can have an entirely inner conscious experience.
Conscious machines of the future would not need emotions, attention, working or episodic memory, self-reflection, or language in order to have subjective experience (the authors refer to the phenomenal dimension of consciousness; I also see some resemblance to what Antonio Damasio calls core consciousness). For Koch and Tononi, the key to inner experience lies in the amount of integrated information that a machine (or a biological organism) can generate.
While the human brain constitutes a single integrated system with a very large number of possible states, current machines satisfy neither property. According to Koch and Tononi, the level of consciousness of an entity depends on how much integrated information it can generate. A specific measure of this quantity can be calculated by applying their integrated information theory of consciousness (IIT) and its associated measure Φ.
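The intuition behind Φ can be illustrated with a toy calculation. The sketch below is not IIT’s actual Φ (which involves minimum-information partitions over all subsets and is far more involved); it is a simplified stand-in I constructed for illustration: for a tiny two-node binary network, it measures how much information the whole system carries about its next state beyond what its parts carry in isolation. The networks (`xor_net`, `copy_net`) and function names are my own assumptions, not anything from the article.

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_info(pairs):
    # Mutual information (bits) between inputs and outputs,
    # treating every (input, output) pair in the list as equally likely.
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(p[0] for p in pairs)
    py = Counter(p[1] for p in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def part_ei(update, n, keep):
    # Information the nodes in `keep` carry about their own next state,
    # with the remaining nodes marginalized over a uniform prior.
    proj = lambda s: tuple(s[i] for i in keep)
    pairs = [(proj(s), proj(update(s))) for s in product((0, 1), repeat=n)]
    return mutual_info(pairs)

def integration(update, n=2):
    # Whole-system information minus the sum across the only bipartition
    # of a 2-node system -- a simplified stand-in for IIT's Phi.
    whole = part_ei(update, n, tuple(range(n)))
    return whole - (part_ei(update, n, (0,)) + part_ei(update, n, (1,)))

# "Integrated" network: each node's next state depends on the other node.
xor_net = lambda s: (s[0] ^ s[1], s[0])
# "Modular" network: two independent nodes that just copy themselves.
copy_net = lambda s: (s[0], s[1])

print(integration(xor_net))   # 2.0 bits: only the whole predicts the next state
print(integration(copy_net))  # 0.0 bits: the parts already carry everything
```

The contrast is the point of the article’s argument: both networks have the same number of states, but only the first generates information *as a single system*, which is the property Koch and Tononi claim current machines lack.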
The authors propose using IIT as the basis for a machine consciousness test. One such test would ask the machine to describe a scene in a discriminating way, extracting the scene’s key features (in effect, a better Turing Test). If the machine does as well as humans at describing the image, it should be considered conscious.
[1] Christof Koch and Giulio Tononi. “Can Machines Be Conscious?” IEEE Spectrum Special Report: The Singularity, June 2008.