Daniel Dennett, AI Experiments on Consciousness, and God
When we break the quantum state we obtain information which falls into non-quantum causality. Time and distance matter. Particles are only in one place at a time. Hume’s unexplainable but useful causality appears. Our consciousness pulls together a practical world from the information around it. When consciousness breaks the quantum state, why do we obtain the information which we do?
Perhaps this is entirely a function of the specific hardware which is our brain? Reading Descartes, it is easy to be enthralled with the mystery of consciousness, to theorize about alternate mental states, and to imagine different forms of consciousness which could obtain different information from the quantum world. Reading Dennett, we can move beyond enthrallment to a discussion which seems more practical and more in line with our contemporary world. Instead of imagining different forms of consciousness, we can think about the hardware, input/output, and communication necessary to make up a consciousness which would show us different facets of the quantum world or help us better understand how we can interact with the world. A discussion in this arena may still seem far-fetched and overly conceptual. But a Dennett-based discussion seems to provide a language and building blocks to advance the conversation and potentially enable the design of empirical experiments. At the least, it puts thought experiments in a more modern context.
We can think about a potentially useful set of experiments in the area of artificial intelligence. The current AIs we're building are automatons, facsimiles of ourselves and the functions we perform. While impressive, I believe we can do more, and doing so would provide a revolutionary leap in our tool set for understanding the world. What follows is an extended probing around designing AI experiments for consciousness, with a slight side trip into a discussion of the conditions necessary for a god-like brain to exist.
What if we expanded the aims for AI to include the creation of new forms of consciousness? For example, we could create a consciousness which has no need to extract information from the world. We can call this a Contented Organism. Why contented? Well, to follow Dennett, consciousness evolves for fitness of the species. To follow our analysis of Descartes and Hume, our conception of intelligence is useful, and we break the quantum state in ways that are useful. Therefore, we can suppose that an organism with no need to extract information from the world, no need for what we'd call a useful intelligence, and no need to usefully break the quantum state must indeed be contented. We can suppose it is ambivalent with respect to its state of being, whether alive or dead, happy or sad, threatened or safe.
Or perhaps it must be, by definition, completely unaware of its state of being, receiving input only. If it were aware it was an observer would that reduce the amount and type of input it received? Or does that reduction come when it is aware of what it is observing, or when it tries to communicate what it observes using a language, like ours, which supposes a subject and object, collapsing and categorizing the world around to facilitate communication, sharing with another?
I’d like to imagine the possibility of a Contented Organism taking in the full spectrum of input from the quantum world around it without limit yet, somehow, remaining usefully aware. Is this possible? Or would the need to communicate with this consciousness necessitate some form of information extraction? Could we create a language with which to communicate with such a consciousness, allowing it to remain with its perceptions and us to understand those perceptions?
What if it had no need to be understood or to share? Would its speech, actions, or artifacts, its heterophenomenological output, come across as nonsense? Or perhaps it would come across as the ramblings of an input-drunk mystic?
Speaking of mystics, it seems natural, and devilishly fun given Dennett’s atheism, to explore the conditions necessary for the existence of a god-like brain. (Like Dennett addressing philosophical zombies on page 95, this is being written with a smile on my face.)
Dennett begins Consciousness Explained with a prelude that contains a convincing argument that relatively simple processes can create the narrative experience of consciousness. Using a thought experiment about dreams, he describes how simple rules can produce elaborate narratives of external experiences that never actually happened. We have used this type of approach, along with his subsequent description of consciousness, in formulating the questions which bound the necessary conditions for a Contented Organism to exist. At first blush it seems trivial to use the same approach to describe the necessary conditions for the existence of a being which has awareness aspects generally ascribed to the Western conception of god. By awareness aspects I mean traits such as being all-knowing, self-aware, and even being the essence of all existence it knows, yet not having easily understandable direct communication with humans. We can use Dennett’s approach, and his subsequent description of consciousness, to figure out how such a being and the relevant set of conditions could exist. For Dennett, the being and set of conditions in his prelude concern how a brain living in a vat can be fooled into thinking it lives in the world without an incredible amount of computer hardware generating all possible inputs. He solves this by showing how a relatively small amount of hardware with a relatively simple set of rules can generate all the input necessary to fool a brain in a vat.
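Dennett’s point about simple rules can be made concrete. The sketch below is a hypothetical reconstruction of the party game he describes in the prelude, in which a “dream” is narrated to a dupe by a group answering yes/no questions via a trivial surface rule; the questioner then experiences an authored narrative where none exists. The specific rule used here, answering yes when the question’s final letter is a vowel, is an illustrative assumption, as tellings of the game vary.

```python
# Sketch of the "party game" from Dennett's prelude: there is no dream and
# no dreamer, only a surface rule applied to the question itself, yet the
# questioner reconstructs an elaborate narrative from the answers.
# The specific rule (final letter is a vowel -> "yes") is an illustrative
# assumption; versions of the game differ.

def dream_oracle(question: str) -> str:
    """Answer a yes/no question using only the question's final letter."""
    last = question.rstrip(" ?!.").lower()[-1]
    return "yes" if last in "aeiou" else "no"

if __name__ == "__main__":
    # The questioner supplies all the narrative content; the "dreamer"
    # contributes nothing but rule-driven answers.
    for q in ["Was I flying over the sea?", "Did I crash?", "Was anyone with me?"]:
        print(f"{q} -> {dream_oracle(q)}")
```

The design point mirrors Dennett’s: the apparent richness of the narrative lives entirely in the questioner’s interpretation, not in the mechanism producing the answers.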
Building off the Contented Organism, can we imagine hardware, a type of brain, that is aware of everything, at all times? This requires a little further imagination to conceive of evolutionary circumstances such that somewhere in the universe, at some time, there’s a brain that doesn’t need to worry about limiting input, making survival decisions, constructing an internal narrative, or creating heterophenomenological output understandable to human beings (this last condition seems the least difficult, unless there is something universally constraining about the way we understand language). Can we conceive of hardware with these properties that has no conflict between observing, being an observer, being aware of being an observer, and being aware of the observations?