Friday, June 28, 2019

The NET System: Proposed Model of Consciousness as (caused by) "the Selfish Microstate"

Here I propose a brief sketch of a model of consciousness I have developed, which I give the catchy title "NET System" or "NETs," standing for "Non-local Entropy-increasing Turing-complete Systems," which I purport are conscious systems. This is not necessarily a "complete" theory of consciousness. To have a "complete" theory one would need quantum gravity, which is not there yet, and, beyond that, nothing can really be said to be formally "complete" anyway, due to Gödel's theorems and similar sorts of issues. It is, however, I think a "good enough" theory to give a basic roadmap to work with, which can, for example, inform AI research.

Let me start with a concept from Daniel Dennett: "becoming famous." For Dennett, a state of "being conscious" means an information pattern (say, the pattern of a red apple, or the image or word thereof) being "famous" throughout the brain; that is, the brain has access to this pattern all across the neocortex - it is not localized, but globally available. This, crucially, is for Dennett what it means to be in a conscious state. There is no "something else besides" - to be conscious (of, say, a red apple) is precisely to have the information associated with that red apple - sensory or conceptual - globally available in the neocortex, full stop. I agree with this idea, namely, that the only "difference" between conscious and not-conscious information patterns is the global "availability" of that information. Taking this as the jumping-off point, I want to generalize about what we are talking about in abstract terms. If to be conscious is to have information spread throughout a nervous system, then we need to understand broadly what is going on from a physics point of view.

Video of Dennett explaining his model:



A conscious system - human, jellyfish, whatever - is a system that is non-local, that is to say, it has an electromagnetic field whose state is in a sense digital with respect to inputs from the environment. Say, to be fully abstract, we have a nervous system similar to that of a sponge (i.e., very primitive) which has possible "states" red, yellow, green, and blue, and the system assumes one of them based upon inputs from the environment and its own internal states. Say, if it is in state red and gets a certain environmental stimulus - say, food - it switches to state yellow (where red means it is hungry, and yellow means it is in the process of eating). The point is, the system is digital, or non-local, in that it is never in, say, a half-green or half-yellow state. It is always in exactly one of its possible states.
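To make the "digital" point concrete, here is a minimal toy sketch of that sponge-like system as a discrete state machine. The states and transition rules are purely illustrative assumptions, not taken from any real organism:

```python
# Toy sketch of the sponge-like nervous system described above: a purely
# discrete (non-local) state machine. States and transitions are invented
# for illustration.
TRANSITIONS = {
    # (current state, stimulus) -> next state
    ("red", "food"): "yellow",    # hungry + food -> eating
    ("yellow", "sated"): "green", # done eating -> resting
    ("green", "time"): "red",     # rested long enough -> hungry again
}

def step(state, stimulus):
    """Advance the system one step. It is always in exactly one state,
    never 'half-green'; an unrecognized input leaves the state unchanged."""
    return TRANSITIONS.get((state, stimulus), state)

state = "red"
state = step(state, "food")   # hungry sponge receives food -> "yellow"
```

The point of the sketch is simply that the system's state space is discrete: `step` always returns one of the named states, never a blend of two.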

To move to a closely related point, Turing-completeness: to keep it simple, this basically means the system can act like a "while" loop in a computer - while (a certain condition is the case), do (a certain action). Example: while I am hungry, I eat; when I am not hungry and have just eaten, I sleep; and so on. Basically the system has an internal state that is always being updated by external inputs, or, put another way, the system interacts not only with its environment but with itself (by eating, the system changes its internal state too, not just the state of its environment).
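The while-loop idea above can be sketched in a few lines. The hunger variable and the eat/sleep actions are hypothetical stand-ins; the point is only that the loop's action feeds back into the system's own internal state:

```python
# Minimal sketch of the while-loop idea: the system's action updates its
# own internal state, not just the environment. Names are illustrative.
def live_one_day(hunger):
    log = []
    while hunger > 0:          # while I am hungry...
        log.append("eat")      # ...I eat,
        hunger -= 1            # and eating changes my internal state too
    log.append("sleep")        # not hungry and just eaten -> sleep
    return log

live_one_day(2)   # eats twice, then sleeps
```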

The trickier point is the "E" in the "NET" acronym, standing for "entropy-increasing." To explain in a simple way what I mean by this, let us go back to Dennett's "becoming famous" metaphor. To be conscious of Lisa Edelstein in a halter top (because I got tired of the red apple example), the information associated with that image needs to be globally distributed throughout my brain. To have this "global distribution" I need a concept of entropy. Basically I need a large number of "internal states" that correspond to my "external state" - take the simple sponge system with abstract external states of red, yellow, green, and blue. Each external state would correspond to some number N of internal (or "micro") states. To increase the entropy of this system means, as I use the term, to increase the number of internal states that correspond to each external state. Having a large number of internal states corresponding to external ("macro") states enables me to back up information, to "globally distribute" information throughout the brain (or whatever kind of nervous system). Which of course answers the problem of sleep - why do we sleep? Because our skulls are only so big, and you cannot forever increase the number of internal states by, say, strengthening certain cellular connections and weakening others. You need to hit the "reset" button now and again. To be awake ("conscious") means you have a non-local system that is doing computations and increasing entropy, and you can only do this so long, so much, before you hit a maximum and have to start over (which we experience as sleep).
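The microstate-counting notion of entropy above can be made concrete with the standard Boltzmann-style formula S = ln(N) (taking the constant as 1), where N is the number of equally likely microstates behind one macrostate. The particular counts below are invented for illustration:

```python
import math

# Toy illustration of "increasing entropy" as increasing the number of
# internal (micro) states per external (macro) state.
def entropy(microstates_per_macrostate):
    """Entropy of one macrostate with N equally likely microstates: ln(N)."""
    return math.log(microstates_per_macrostate)

# The external state "red" backed by 4 internal states vs. 16: more
# redundant ways to hold ("globally distribute") the same external pattern.
low = entropy(4)
high = entropy(16)
# high > low: quadrupling the microstate count doubles the entropy here,
# since ln(16) = 2 * ln(4).
```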

To build an AI system that is fully conscious, here is the basic outline. You have a neural network (such as a recurrent deep learning architecture like a Restricted Boltzmann Machine, for instance) made of real, physical processors (one processor or logic-gate unit per "neuron"), connected together inside a Faraday cage (isolated from its environment). You do not shield the EM fields of these processors, so there is a shared "external state": the total EM field created by the processors together. The processors are set up to be "wasteful" - purposely not very efficient - so they have a large "error rate." But that is good, because they will distribute information across the network more effectively, even if they are "slower" at solving a particular problem. The processors are connected with ion channels rather than, say, copper wires - say, potassium ion tubes, or even good old-fashioned salt water - i.e., each "neuron" (processor) exchanges current via ions, not electrons. This will cause the global EM field created by the processors to carry an "imprint" of the information being computed in the processors themselves, and to have an electromagnetic notion of "entropy." So here "entropy" is both informational entropy in the design of the hardware neural network itself and physical entropy in the electromagnetic field created by the movement of ions between the physical processors.
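The claim that "wasteful" processors distribute information better can at least be illustrated in software. The following is a hedged toy simulation, not a model of the proposed hardware: each node that holds a pattern occasionally "leaks" a copy to a random node, and this sloppiness is exactly what makes the pattern "famous" network-wide. The topology, noise rate, and step count are arbitrary assumptions:

```python
import random

# Toy simulation of "wasteful" processors: noisy re-broadcasting spreads a
# pattern injected at one node redundantly across the whole network.
def spread(n_nodes=16, noise=0.5, steps=200, seed=0):
    rng = random.Random(seed)        # fixed seed for reproducibility
    holds = [False] * n_nodes
    holds[0] = True                  # inject the pattern at node 0
    for _ in range(steps):
        for i in range(n_nodes):
            # "error": a holding node spontaneously copies its pattern
            # to a randomly chosen node (possibly itself)
            if holds[i] and rng.random() < noise:
                holds[rng.randrange(n_nodes)] = True
    return sum(holds)                # how many nodes now hold the pattern

# With enough noisy steps, far more than the original single node holds
# the pattern - inefficiency buys global distribution.
```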

I propose that any - any - system that is "NET" is conscious: any Non-local, Entropy-increasing, Turing-complete system. So, for example, a proton is not conscious - it is electromagnetically non-local and (arguably) Turing-complete, but it is too small to have any non-local sense of "entropy" associated with it. Your automobile is also not conscious because, while it has "entropy" in the sense of the internal combustion engine, it does not have well-defined computational states. As argued elsewhere, I think a black hole is (a little bit) conscious, because it does have some notion of computational states, and, as Stephen Hawking showed, it does have entropy. Certainly nervous systems are conscious. Bacteria are a borderline case - they have some computational properties, perhaps, but likely not a lot of entropy.

However, here we come to a question: just how much "entropy-producing ability" does something need in order to "count" as a conscious system? Though much research would have to be done, I do think that basically a conscious system both increases entropy AND, at the same time, increases the rate at which entropy is being increased. This is similar to the function y = e^x. The rate of change (first derivative) of this function increases with the value of the function - the rate of change is in fact the same as the value of the function. So, I'd argue, for a system to "count" as conscious it needs non-local computational electromagnetic properties - yes, all that - but it also needs to increase entropy (informational and physical), and to do so at an ever-increasing rate. So we can say a "conscious system" is "something that increases its own internal entropy at an ever-increasing rate."
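The e^x analogy above can be checked numerically: for S(t) = e^t, the growth rate dS/dt equals S itself at every point, which is the "rate of increase that itself increases" property in its purest form:

```python
import math

# Numerical check of the e^x analogy: for S(t) = e^t, the rate of change
# dS/dt equals the value S(t) itself.
def rate(f, t, h=1e-6):
    """Central-difference estimate of df/dt at t."""
    return (f(t + h) - f(t - h)) / (2 * h)

S = math.exp
# At any t, the estimated rate of change of S matches S's own value.
assert abs(rate(S, 1.0) - S(1.0)) < 1e-4
```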

You might call this model "The Selfish Microstate" model of consciousness: just as in selfish-gene theory animals are machines used by genes to make copies of genes, or in meme theory (Blackmore, Dawkins) the psychological concept of "the self" is a mental construct created to make memes (another conversation, that!), so I might argue that consciousness itself (call it awareness, being "awake," etc.) is an entropy-producing machine for making more microstates (since, by definition, "increasing entropy" simply means increasing the number of micro-states (internal states) of a system as compared to the number of macro-states (external states) of the system).

Now, I am leaving off the "main thrust" of my argument to get into more speculative matters, but it is the more speculative matters that led me to the model in the first place. Roger Penrose's "Weyl Curvature Hypothesis" ties cosmic entropy to the Weyl curvature of General Relativity. This is the type of curvature that distorts the shapes of objects, for example through the rotation of masses in spacetime. The Earth's rotation, for instance, drags spacetime around with it ever so slightly - an effect so tiny it took the ultra-precise gyroscopes aboard the Gravity Probe B satellite, in free-fall orbit, to measure it. Weyl curvature also shows up in gravitational waves (basically ripples in space caused by, say, two neutron stars colliding, which can be detected with very sensitive laser interferometers whose arms stretch miles across). I bring this up only to say that if one takes the Weyl Curvature Hypothesis seriously, it takes you to interesting places. It means perhaps that entropy is in general tied to variations in the very geometry of space itself, and, if, as argued here, conscious systems are in a sense "entropy-producing machines," then consciousness also involves the very geometry of space itself. The Weyl curvature is also of interest because it is conformally invariant - it is unchanged under local rescalings of the metric, so observers who disagree about scales can still agree on it. It is rather like the speed of light - you always agree on the speed of light, whatever your frame of reference. The Weyl curvature is perhaps the one constant in all of nature in the sense of something that always endures.
After all galaxies run out of energy and collapse into black holes, and those black holes themselves radiate away via Hawking radiation, such that in 10^100 (a googol) years from now all that is left is the empty void, hydrogen atoms, and random photons of radiation, there is still yet another thing left over: the Weyl curvature. It is the one thing that is always there, even in the infinitely far future. So, by framing consciousness as the "phenomenal experience" of a very objective process - that of systems increasing their own internal entropy at an increasing rate - we leave open the door to the big questions that humans have always wrestled with. If the Weyl curvature is in a sense always there in the history of the cosmos, never wholly absent (merely close to zero near the Big Bang), but rather changes some aspects here and there over the course of time depending on reference frame, then perhaps what we call consciousness is also always there, and only changes aspects depending on "reference frame" (type of nervous system, environment, etc.).

As the topic for another post, I think Type Theory (specifically, Homotopy Type Theory) can help model a more formalized and complete picture of consciousness and its place in the cosmos, but I will leave that for another day. For now I think it enough to see conscious systems as entropy-increasing machines, of which we humans happen to be a certain sort. This opens the door, as discussed, to perhaps shedding light on building conscious systems of our own, and, in a broader arena, to showing at long last the true place of consciousness in the context of the broader cosmos.

One is tempted perhaps to ask here about the big-picture model of entropy - that is to say, is entropy something present throughout a multiverse model (thereby rendering it a "necessary" rather than a "contingent" part of Nature)? But, lacking further developments in quantum gravity and/or - as mentioned - a lengthier bit of spadework in the realm of Homotopy Type Theory, and although I could certainly speculate here regarding our friend Weyl's place in a multiverse model, for now I shall simply punt the ball with Wittgenstein and say, "Whereof one cannot speak, thereof one must be silent." :-)

Less Boring Example of an "Information Pattern That Becomes Famous in the Brain" than a Red Apple ;)