Monday, February 15, 2010

A Materialist Model for Consciousness (II)




A starting point for the materialist model is patterned after the “Zimbo” first described by Prof. Daniel Dennett in his Consciousness Explained. To make his concept practicable, we begin by asking what basic type of logic gate would make it feasible. At the very least, millions of such gates (or gate combinations) would be needed, and they’d have to allow for self-awareness. By contrast, the lowest conscious entity, or zombie, lacks this capacity. It is for all practical purposes an automaton (like some “pastors” who write the same screeds over and over again).

The putative platform we require is that the brain is a "von Neumann machine". This is a computational device that uses a parallel architecture and memory stores that enable interior processing. It can also ‘modify parts of its instructions’ in accordance with the results of processing.[1] Best of all, if the brain is a von Neumann machine, it can be imitated by other virtual machines or computing software.[2] This is indeed much of the current basis for research into ‘strong AI’, or strong artificial intelligence.
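As a rough illustration of the ‘modify parts of its instructions’ property, here is a minimal sketch of a machine whose program and data share one memory, so that an instruction can rewrite another instruction. The toy instruction set (SET, ADD, HALT) is entirely my own invention for illustration; it is not drawn from the neuroscience or the cited sources.

```python
# Minimal von Neumann machine sketch: program and data share one memory,
# so an instruction can overwrite another instruction (self-modification).
# The opcodes and the toy program below are illustrative assumptions.

def run(memory, max_steps=100):
    pc = 0                            # program counter
    while pc < len(memory) and max_steps > 0:
        op = memory[pc]
        max_steps -= 1
        if op[0] == "HALT":
            break
        elif op[0] == "SET":          # SET addr value: memory[addr] = value
            _, addr, val = op
            memory[addr] = val
        elif op[0] == "ADD":          # ADD a b dest: memory[dest] = memory[a] + memory[b]
            _, a, b, dest = op
            memory[dest] = memory[a] + memory[b]
        pc += 1
    return memory

# Cells 0-2 are instructions, cell 3 is padding, cells 4-6 are data.
program = [
    ("ADD", 4, 5, 6),                 # 0: data[6] = data[4] + data[5]
    ("SET", 0, ("ADD", 6, 6, 6)),     # 1: rewrite instruction 0 in place
    ("HALT",),                        # 2: stop
    0,                                # 3: padding
    2, 3, 0,                          # 4, 5, 6: data
]
final = run(program)
print(final[6])   # 5: the ADD at cell 0 ran before being rewritten
print(final[0])   # ('ADD', 6, 6, 6): instruction 0 was modified by the program itself
```

Because instructions live in the same store as data, the program can alter its own future behavior, which is the property the text leans on.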

The ideal sort of component to add to those of the basic Turing machine and/or mechanical (or human) ‘zombie’ would be something incorporating both internal and external feedback. That is, it would combine sensory (external) information with various data and internal information, then process it all to provide appropriate responses.

A possible component to achieve this capacity is called an ‘XOR’, or exclusive-OR, gate. And we’d need a lot of these in tandem, just as we would the sort of switching components shown earlier. (At all times, I’m only showing the building blocks here, not the complete designs!) Anyway, the device that might work is shown in Figure 1.

The gates depicted are not specifically identified, but for most purposes we would want them to be NAND gates, that is, AND gates (see the earlier example) with an inverter at the output. The key point is that each combined output, be it X1Y1 or X2Y2, is fed back as a novel input. (Note that Y2 is designated as the complementary value of whatever Y1 is.) We assume here that one output, say X1, arises from internal data already stored, while X2 arises from some mix of outside data and other internal data kept in hidden registers somewhere (which could be likened to a human’s ‘subconscious’). In each case, outputs are produced that are fed back into the system, providing it with the capability of self-recursiveness and a degree of self-regulation.
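Both ideas, XOR built from NAND gates, and an output fed back in as a novel input, can be sketched in a few lines. The construction below is the standard four-NAND XOR; it may differ from the exact wiring of Figure 1, and the feedback loop is a deliberately simple illustration of mine, not the full design.

```python
# XOR assembled purely from NAND gates, the universal component named above.

def nand(a, b):
    return 1 - (a & b)

def xor(a, b):
    # Standard four-NAND construction of exclusive-OR.
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))     # output is 1 only when the inputs differ

# Feedback sketch: each output becomes the next internal input, so the
# device accumulates a running parity of everything it has "sensed".
state = 0
for external in (1, 0, 1, 1):
    state = xor(state, external)
print(state)                       # 1: an odd number of 1s has been seen
```

The feedback line is the essential point: the gate’s output re-enters as input, giving the rudimentary self-recursiveness the text describes.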

While not at the fully conscious human level, there’s an amazing amount the ‘zimbo’ is capable of, so it shouldn’t be dismissed too hastily. For example, a complex zimbo features a number of programs or sub-modules running concurrently. Consider a chess-playing module [C]. We identify the following essential sub-modules as integral to it: F(C), foresight, or the ability to see some number of moves ahead; and M(C), mathematical-spatial recognition. Thus, for chess-playing: [C] = F(C) + M(C) at minimum. By extension, if [H] is the total ‘program’ for a human zimbo, a number of sub-modules P1, P2, P3, etc. can be assigned such that:

[H] = P1 + P2 + P3 + P4 +.......

and which exhaust all possible forms of human behavior and cognitive manifestation. Thus, as a conglomerate of virtual programs the aggregate human is itself a ‘virtual machine’! By definition it is governed by one or more parallel processes (e.g. P1, P2) such that:

P1-> [1, 0, 0, 1.............N]
P2-> [0, 1, 1, 1.............M]

such that EFFECT [E] <=> P1(N), P2(M)

Here the processes P1 and P2 are hidden from scrutiny and represent transfers of data bits in the parallel architecture (between its ‘accumulators’ and ‘registers’, and manipulated by switching elements such as already described). These go up to some number of (N,M) parallel steps where N = M generally. The effect is recursive (feedback loop programmed in) so that the program has the capability to adjust to input parameters and alter the effect (output) accordingly.
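The parallel processes P1 and P2 can be given a toy form. Everything concrete below, the combining rule (a running parity with the previous output fed back at each step), is an illustrative assumption of mine; the text only requires finite, parallel, feedback-adjusted bit streams.

```python
# Toy version of the parallel processes P1, P2: two finite bit streams
# (N = M steps) whose joint effect E is adjusted recursively, the running
# output being fed back into each subsequent step.

def effect(p1, p2):
    assert len(p1) == len(p2)      # N = M, as required above
    e = 0
    for b1, b2 in zip(p1, p2):
        e = (e + b1 + b2) % 2      # previous output e feeds back in
    return e

P1 = [1, 0, 0, 1]
P2 = [0, 1, 1, 1]
print(effect(P1, P2))              # 1: the joint effect E of both streams
```

The point is only that E depends on both hidden streams and on its own history, while the number of steps stays strictly finite.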

This view, of course, is already that of the 'Strong AI' (Artificial Intelligence) school of computing. The difference between my modified version (below) and the standard strong AI version is that my further incorporation of quantum mechanics enables the much more effective basis of a quantum computer.[3] That means that rather than limiting storage to bits, one can work with qubits (short for quantum bits), where the superposition of a combined data element (1 + 0) applies:

U = U(1) + U(0)


the storage capacity dramatically expanding as a result.

In general, for any given n-bit combination – with n a whole number – a qubit register can accommodate 2 to the nth power (2^n) total combinations at one time. Thus, 16 combinations could be held in memory for 4 bits, 32 for 5 bits, and so on. This marks an exponential increase over any classical counterpart. Since human brains typically can hold the equivalent of whole libraries in memory, it seems that qubit processing is at least worth considering as a route beyond zimbo-hood.
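The 2^n counting can be checked directly. In the conventional simulation picture (my sketch, not a claim about any hardware), an n-qubit register is a vector of 2^n complex amplitudes, all present at once:

```python
# An n-qubit register simulated as a state vector of 2**n complex
# amplitudes: for n = 4 that is 16 basis combinations held simultaneously,
# versus one value at a time for a classical 4-bit register.
import numpy as np

n = 4
state = np.zeros(2**n, dtype=complex)
state[:] = 1 / np.sqrt(2**n)      # uniform superposition |0000> ... |1111>

print(len(state))                                  # 16 combinations at once
print(round(float(np.sum(np.abs(state)**2)), 6))   # 1.0: properly normalized
```

Doubling n squares the number of amplitudes, which is the exponential advantage the text appeals to.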

Does this imply that the end result will define how all humans operate and include everything about them? Not at all. Like Dennett, I suspect the vast bulk of humanity, perhaps 99%, operates at the ‘Zimbo’ level 99% of the time. I’d also argue most people would be hard pressed to find any test to discriminate a ‘Zimbo’ from amongst their real life acquaintances, friends and even spouse, or significant other. ‘Zimbos’, incidentally, are fully capable of imitating amorous or altruistic behaviors, having seen them on television or being otherwise programmed! Whatever the Zimbo says, it can’t be assumed to issue from an authentic being. He, she or it may merely be parroting behaviors learned on assorted media.

Now, before jumping ahead, we also need a solid physical basis by which to postulate or justify quantum mechanics, and by extension quantum computing to enable full consciousness.
Given what we already saw concerning synapses, we shouldn’t be overly surprised to learn that many observers believe the synaptic cleft dimension (about 20-40 nm) is exactly the scale at which the Heisenberg Uncertainty Principle[4] would be expected to operate. That necessarily implies quantum mechanics. Further, the synapse coupling is nonlinear, which allows for the possibility of quantum chaos (see previous chapter). Physicist Henry Stapp, for his part, has pointedly noted that uncertainty principle limitations applied to calcium ion capture near synapses show that the calcium ions must be represented by a probability function.[5]
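Stapp’s point survives a back-of-envelope check. The numbers below (ion mass, body temperature, initial confinement scale, cleft width) are standard textbook values I have supplied for illustration; they are not figures quoted from Stapp.

```python
# Back-of-envelope version of the uncertainty argument: a Ca+2 ion
# initially confined to roughly its own diameter acquires a Heisenberg
# velocity uncertainty, and its wavepacket spreads to several times that
# size in the time it takes to cross the synaptic cleft.
import math

hbar = 1.055e-34        # J s, reduced Planck constant
k_B  = 1.381e-23        # J/K, Boltzmann constant
m    = 40 * 1.66e-27    # kg, mass of a calcium ion (~40 u)
T    = 310.0            # K, body temperature
d0   = 1e-10            # m, initial confinement ~ ionic diameter (~0.1 nm)
L    = 3e-8             # m, synaptic cleft width (~30 nm)

v_th   = math.sqrt(3 * k_B * T / m)   # thermal speed, a few hundred m/s
t      = L / v_th                     # transit time across the cleft
dv     = hbar / (2 * m * d0)          # uncertainty-limited velocity spread
spread = dv * t                       # wavepacket growth during transit

print(spread / d0 > 1)                # True: packet exceeds the ion's own scale
```

With these inputs the packet grows to several times the initial ion-scale confinement before capture, which is why a classical point trajectory is the wrong description.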

More specifically, the dimension of the associated calcium ion wavepacket scales to many times the size of the calcium ion itself. This nullifies the use of classical trajectories or classical mechanics to trace the paths of the ions. What about component gates, such as the OR and NOT gates we saw earlier? Is there any role for these in the quantum picture? Actually, there is. For example, the NOT gate can be represented by what is called a unitary matrix, the Pauli spin matrix-operator σ_x.[6]

It will basically act to ‘flip’ the Boolean state of a single bit. The Pauli spin matrices also have real eigenvalues[7] within a confined range. This meets a primary application requirement for feed-forward networks in describing synapse function.
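All three claims about σ_x (see note [6] for the matrix) can be verified numerically: it flips the Boolean basis states, it is unitary, and its eigenvalues are the real numbers +1 and -1.

```python
# The quantum NOT gate as the Pauli spin matrix sigma_x of note [6].
import numpy as np

sigma_x = np.array([[0, 1],
                    [1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)   # Boolean 0
ket1 = np.array([0, 1], dtype=complex)   # Boolean 1

print(np.allclose(sigma_x @ ket0, ket1))            # True: NOT flips 0 to 1
print(np.allclose(sigma_x @ sigma_x, np.eye(2)))    # True: unitary and self-inverse
print(np.allclose(sorted(np.linalg.eigvalsh(sigma_x)), [-1, 1]))  # True: real eigenvalues +/-1
```

Self-inverseness is exactly the classical NOT property (applying NOT twice restores the input), carried over into the quantum formalism.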

In the depiction being formulated here, I assume in line with modern neuroscience, that brain dynamics and function is contingent upon the neuron and its connections to synapses. Thus we want networks with the prescriptions given – i.e. quantum units that invoke Pauli spin operators as effective gates, junctions, along with connecting these to each other through a multitude of neural sub-networks. The diagram (Fig. 2) illustrates the two types of concept to be interwoven: biological networks (left side) and an associated quantum vector superposition in terms of wavefunctions (right).

A neuron in sub-complex 'A' either fires or not. The 'firing' and 'not firing' can be designated as two different quantum states identified by the numbers 1 and 2. When we combine them together in a vector sum diagram, we obtain the superposition.

U(n ∈ A) = U(n1 ∈ A1) + U(n2 ∈ A2)

where the wave function (left side) applies to the collective of neurons in 'A', and takes into account all the calcium wavepackets that factored into the process. What if one has 1,000 neurons, each to be described by the states 1 and 2? In principle, one can obtain the vector sum as shown in the above equation for all of the neuronal sub-complex A, and combine it with all the other vector sums for the sub-complexes B, C, D and E in an optimized path. The resulting aggregate vector sum represents the superposition of all subsidiary wave states and possibilities in a single probability density function. This allows the (theoretical) computation of the density function, as well as distinct probability amplitudes for the various sub-complexes.
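A minimal numerical sketch of this aggregate vector sum: each of the five sub-complexes A through E contributes a (firing, not-firing) amplitude pair, and the combined state yields a single normalized probability density over all joint configurations. The equal amplitudes below are my assumption purely for illustration.

```python
# Aggregate of five neuronal sub-complexes A..E, each in a superposition
# of firing (state 1) and not firing (state 2). The joint state is the
# tensor product, giving 2**5 = 32 amplitudes and a probability density.
import numpy as np

# one (firing, not-firing) amplitude pair per sub-complex (assumed equal)
subs = {name: np.array([1.0, 1.0]) / np.sqrt(2) for name in "ABCDE"}

agg = np.array([1.0])
for psi in subs.values():
    agg = np.kron(agg, psi)           # combine sub-complexes into one state

probs = np.abs(agg)**2                # probability density function
print(len(probs))                     # 32 joint firing configurations
print(round(float(probs.sum()), 6))   # 1.0: probabilities sum to one
```

From `probs` one can read off a distinct probability amplitude for any particular firing pattern across the sub-complexes, which is the “theoretical computation” the paragraph refers to.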

As we’ve seen, application of the Heisenberg Uncertainty Principle to Ca+2 ions (at body temperature) discloses that the associated wave packet dimension increases to many times the size of the ion itself. Any classical Newtonian mechanics is therefore inapplicable and irrelevant. Worse, using it, say for the ions’ trajectories, ensures an erroneous result. Thus we represent the ion uptake superposition as a separate contributor to the aggregate assembly:

U(n ∈ A, B, … E) = SIGMA_ijklm {U(Ca+2) [n_i(A), n_j(B), … n_m(E)]}

wherein all possible states are taken into account. The total of these taken in concert enables a quantum computer modality to be adopted for a von Neumann-style consciousness. In quantum neural networks it is accepted that the real world brain generation of consciousness is more along the lines of a quantum computer-transducer than a simple collective of switches. As S. Auyang has observed, consciousness is more than a ‘binary relation between a Cartesian subject and object’.

In practical terms, the very act of converting to the quantum or qubit-register platform expands available memory and flexibility. At the same time, it remains very firmly in the strong AI regime, since no ‘infinite number’ of computational steps are expected. As shown earlier, the n,m steps for the parallel processes P1 and P2 are finite.

The end result is that one has a ‘Turing machine’, but with unimaginable creative, computational and analytical potential. Moreover, there is a very large probabilistic component that offers the potential for surmounting the ‘zimbo’ as the ultimate form. This means that humans are not simply lumbering beasts at the mercy of various deterministic components, like ‘Deep Blue’, the famous IBM chess-playing computer. The probabilistic outcomes surrounding the synapses and adjacent cells disclose that novel elements of choice, cognition and recognition can and do enter the picture.

Next: Extending the model to Human thought and reason.

[1] Pierce, John R.:1980, An Introduction to Information Theory: Systems, Signals and Noise, Dover Publications, pp. 221-22.
[2] Dennett, Daniel, op. cit., p. 217.
[3] See, e.g.: DiVincenzo, D.: ‘Quantum Computation’, Science, Vol. 270, 13 October 1995, p. 255.
[4] This states that one cannot know both the position (x) and momentum (p) of a particle, an electron for example, to arbitrary precision. If you know the position exactly, you know nothing of the momentum. In one dimension: Δp Δx ≥ ħ/2.
[5] Stapp, Henry, P.: 1993, Mind, Matter and Quantum Mechanics, Springer-Verlag, p. 42.
[6] Pauli spin matrices are: σ_x = (0,1¦1,0); σ_y = (0,-i¦i, 0); σ_z = (1, 0¦0, -1). (Note each left pair is a matrix 'top' and each right pair a matrix 'bottom' - since they are usually written in a rectangular array form. )

[7] In general, if M is a square (n x n) matrix and x is a column vector, then Mx = λx denotes a matrix eigenvalue equation. One solution, x = 0, is the trivial null vector. For any M one can obtain n homogeneous linear equations in the components of x. These have non-trivial solutions only if the characteristic determinant vanishes, viz. Det(M – λI) = 0, where I is the identity matrix. This determinant can be expanded, e.g. by minors and cofactors, to give a polynomial of nth degree in λ; its n roots are the eigenvalues.
