Well, one WSJ letter writer named Paul Malocha appears to want to, writing in a letter four days ago:
"Neurons provide the material for human consciousness but cannot create it. A rational soul is necessary. Without the latter you have a bunch of data and processes only...Of course, the materialist philosophy behind this quest (for thinking machines) rejects that there is such a thing as a soul, rather holding that consciousness is simply an emergent property of electric meat."
But it is vastly more than "electric meat"! And that's where Mr. Malocha goes astray, and so allows supernatural babble like "souls" to enter the picture. In fact, the departure from the crude materialism that Malocha invokes is made possible by the introduction of quantum mechanics into neural processing and brain dynamics. We have to thank physicist Henry Stapp (Mind, Matter and Quantum Mechanics, Springer-Verlag, p. 42), who first posited that uncertainty-principle limitations on calcium ion capture near synapses show the calcium ions must be represented by a probability function. The latter is described by quantum mechanics, so it entails a wave form (de Broglie waves) - not just matter - and we move beyond "electric meat".
More specifically, the dimension of the associated calcium ion wavepacket is many times larger than the calcium ion itself. This nullifies the use of classical trajectories or classical mechanics to trace the path of the ions, and thereby opens the door fully to quantum mechanics. Ultimately (using additional work by physicist David Bohm) one can arrive at a "quantum potential", defined:
V_Q = {-ħ²/2m} [∇²R] / R

for a wave function U = R exp(iS/ħ),

where R and S are real.
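For readers who want to see what this quantity looks like numerically, here is a minimal sketch (my own illustration, not Stapp's or Bohm's calculation) that evaluates the quantum potential for an assumed Gaussian calcium-ion amplitude R(x); the wavepacket width and the grid are illustrative assumptions only:

```python
# Minimal numerical sketch (illustration only, not from Stapp or Bohm):
# evaluate the quantum potential V_Q = -(hbar^2 / 2m) * (d^2R/dx^2) / R
# for an assumed Gaussian amplitude R(x) and a calcium-ion mass.
import numpy as np

hbar = 1.054571817e-34            # Planck's constant / 2*pi  (J s)
m_ca = 40 * 1.66053906660e-27     # approximate mass of a Ca ion (kg)
sigma = 1e-9                      # assumed wavepacket width (m) -- illustrative

x = np.linspace(-5 * sigma, 5 * sigma, 2001)
R = np.exp(-x**2 / (2 * sigma**2))          # Gaussian amplitude R(x)

d2R = np.gradient(np.gradient(R, x), x)     # second derivative by finite differences
VQ = -(hbar**2 / (2 * m_ca)) * d2R / R      # quantum potential along x

print(VQ[len(x) // 2])                      # V_Q at the packet centre, in joules
```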
But let's back up a bit to units we call neuronal assemblies and super-assemblies. These are amenable to a particular type of neural network processing that relies on "clustering algorithms", and for which quantum dots can be located at critical junctures in the network to minimize input variations and assure specific outputs. To fix ideas, I direct readers to parts A, B and C of Fig. 1.
Part (A) shows a neuronal super-assembly, part (B) a biological and quantum analog to one center of neuronal action, and part (C) the workable neural network known as the radial basis function.
A neuronal assembly can also be described by connections to some N synapses (where N > 100,000). This is shown simplified in the left side of the same diagram (B), where the neuron A now has 4 rays: AB, AC, AD, AE, representing four connections of the same neuron to four different synapses. Bear in mind the neuron A has any number of quantum wave state possibilities associated with it: firing or not firing, action potential or no action potential, and perhaps a million others. In a higher-order neuronal assembly one might have 1,000 neurons each connected to 100,000 synapses, for which an extremely complicated figure would result!
Consider now the quantum aspect. A neuron 'A' either fires or not. The 'firing' and 'not firing' can be designated as two different quantum states, identified by the numbers 1 and 2. When we combine them in a vector sum diagram, we obtain the superposition

U(A) = U(1) + U(2)

where the wave function U(A) on the left side applies to the neuron 'A'. What if one has 1,000 neurons, each to be described by the states 1 and 2? In principle, one can obtain the vector sum as shown in the above equation for neuron A and combine it with all the other vector sums (using the quantum superposition principle) for each of the 999 other neurons. The resulting vector sum represents the superposition of all subsidiary wave states and possibilities - at least those for the 'firing-not firing' states! The end product is a neuronal 'assembly' or aggregate of neurons (and associated synapses, axons, dendrites, etc.) which performs some function or acts like a circuit (for example a simple reflex arc, as when one burns a finger and automatically removes it from the source).
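To make the 'vector sum' idea concrete, here is a small illustrative sketch (my own, under the simplifying assumption that each neuron contributes only the two firing/not-firing states). It builds the single-neuron superposition and then combines a handful of neurons, which also shows why the full 1,000-neuron state is astronomically large:

```python
# Illustrative sketch (my own): treat 'firing' and 'not firing' as the two
# basis states of a neuron, form the superposition for one neuron A, then
# combine a few neurons with the tensor (Kronecker) product. The joint state
# of N neurons has 2**N amplitudes, which is why only N = 4 is shown here.
import numpy as np

state_1 = np.array([1.0, 0.0])    # "firing"      (state 1)
state_2 = np.array([0.0, 1.0])    # "not firing"  (state 2)

# Superposition U(A) = U(1) + U(2), normalised, for a single neuron A
psi_A = (state_1 + state_2) / np.sqrt(2)

# Joint state of a small "assembly" of 4 such neurons
psi_assembly = psi_A
for _ in range(3):
    psi_assembly = np.kron(psi_assembly, psi_A)

print(psi_assembly.size)          # 16 = 2**4 amplitudes for just 4 neurons
```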
At a higher level of the hierarchy, one refers to the neuronal super-assembly (A) or 'super-circuit', within which considerations such as networks, optimization of paths, and 'adjacency and order' take precedence. Each solid black circle denotes a neuronal assembly such as described above. These are intimately connected, each to the other, to perform some specific function. (For example, in the brain region known as the limbic system there is a structure called the hippocampus, within which an agglomerate of neuronal assemblies is dedicated to maximizing retention of events as memory.)
We then ask: what form or type of operational network can best exploit these properties to attain the desired function, that is, one able to shut down or minimize negative inputs containing information that amplifies unwarranted fears? The diagram (C) gives the clue: it will be a network for which a clustering algorithm is optimized, since this same clustering emulates the neuronal assemblies.
In the clustering algorithm view (C), the centers for action (the m's) are going to be neurons, as occur in neuronal assemblies. Each cluster center m(i) is moved closer to the input x(n) because the update rule minimizes the error vector [x(n) - m(i)]. The relevant equation here is:
m(i)' = m(i) + a{x - m(i)}
where a is the "learning rate" or, in this case, the information processing (flow and refresh) rate in the network. The iterative aspect of the equation emphasizes that the algorithm is applicable for clustering the input vectors in real time, as they are acquired sequentially. Thus, the quantum gates will have an ongoing active role and will continually update inputs as well as outputs.
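A minimal sketch of how that update rule behaves in practice is given below (the learning rate, the synthetic inputs, and the number of prototypes are my own assumptions, not anything specified above):

```python
# A minimal sketch of the update rule m(i)' = m(i) + a*[x - m(i)], applied
# online to the nearest prototype as each input vector arrives. The learning
# rate, data, and number of centres are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
m = rng.normal(size=(3, 2))            # three cluster centres (prototypes)
a = 0.1                                # learning rate ("refresh rate")

def update(x, m, a):
    """Move the closest centre m(i) a fraction 'a' of the way toward x."""
    i = np.argmin(np.linalg.norm(m - x, axis=1))   # winning prototype
    m[i] = m[i] + a * (x - m[i])                   # the quoted equation
    return m

for x in rng.normal(size=(200, 2)):    # inputs acquired sequentially, in "real time"
    m = update(x, m, a)

print(m)                               # centres after streaming all inputs
```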
What about the RBF network itself? This is shown in Fig. 2.
Cursory inspection shows some similarities to the previously considered (numerically vetted) MLP network, but also key differences. For example, the hidden layer j = 0, 1, 2, ..., p has no bias inputs such as X(0) for the MLP. Both hidden layer operators transit to Y(k) and Y(m), which in turn feed into one bias point per cluster at Phi(0).
The output of a neuron in the output layer is then defined according to: y(k=j) = f(jth output of the j = 0, 1, 2, ..., p neurons of the hidden layer). This can be controlled by placing a quantum gate (likely a NOT-type gate based on the Pauli spin matrix σ(z) = (1, 0 ¦ 0, -1), where as before the left pair is the matrix top row and the right pair the matrix bottom row).
In effect, with correct placement of quantum dot NOT gates in the hidden layer, it will be possible to alter an input (0, 1) to an output (0, -1), say at the bias point Phi(0). That switching effect will then transfer to the output layer, inverting the outputs for the k = 1, 2, ..., m neurons, principally at the points y(1), y(k=j) and y(m). The key advantage of the RBF over an MLP network is that the RBF leads to better decision boundaries and improved classification of the information from the hidden layer. (This might be very critical, since not all information going to the amygdala is negative. Some is there, evolutionarily, to arouse an alert to a real threat!)
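The switching action described here is easy to verify in a few lines. The sketch below (an illustration only, saying nothing about a physical quantum-dot realisation) simply applies the σ(z) matrix to the input (0, 1) and recovers the inverted output (0, -1):

```python
# Sketch of the gate action described above: the Pauli sigma_z matrix applied
# to the state (0, 1) gives (0, -1), the sign-inverted value to be fed to Phi(0).
import numpy as np

sigma_z = np.array([[1,  0],
                    [0, -1]])

state_in  = np.array([0, 1])       # input  (0, 1)
state_out = sigma_z @ state_in     # output (0, -1)

print(state_out)                   # [ 0 -1 ]
```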
A remaining question is whether or not one quantum NOT gate is required for each neuron in the hidden layer. I am going to argue 'no', given that quantum superposition of data elements can apply such that U = U(1) + U(0), where 1 and 0 denote bits entrained to move along as information to the outputs y(1), y(k=j) and y(m). In principle, then, it is feasible that each quantum gate can, according to its qubit (not bit) capacity, match up with many more neuron bits in transit within the hidden layer.
For example, an ordinary (computer) register only holds one of eight possible 3-bit combinations at a time, say: 001, or 010, or 011. By contrast, a qubit register (or quantum gate acting as one) could accommodate all eight possible 3-bit combinations: 000, 001, 010, 100, 110, 101, 011, and 111. In general, for any given n-bit combination, with n a whole number, a qubit register could accommodate 2 to the nth power total combinations at one time. Thus, 16 combinations could be held in memory for 4-bits, 32 for 5-bits, and so on. This change marks an exponential (two to the 'n') increase over any classical counterpart.
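As a small illustration of that counting argument (my own sketch, with the register size chosen arbitrarily), the following contrasts a classical 3-bit register, which holds one combination at a time, with an equal superposition over all 2^3 = 8 combinations:

```python
# Small illustration of the counting argument: a classical 3-bit register holds
# exactly one combination at a time, while a 3-qubit register in equal
# superposition assigns an amplitude to all 2**3 = 8 combinations at once.
import numpy as np
from itertools import product

n = 3
classical_register = (0, 1, 1)                      # one combination only

amplitudes = np.ones(2**n) / np.sqrt(2**n)          # uniform superposition
for bits, amp in zip(product((0, 1), repeat=n), amplitudes):
    print(''.join(map(str, bits)), round(amp, 4))   # all 8 strings, equal weight
```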
The task now will be to check the workability of the RBF network by using a simulation code that can encode qubit capacity and be integrated with the key network equations for the RBF network. What might be one way we know we're on the right track? Well, apart from getting the outputs we want at y(1), y(k=j) and y(m), we would also like to ensure that the error function remains Gaussian and doesn't suddenly change. For example, it can be shown that the relevant error function for a given neuron designated for the output bias is:
Phi(n) = exp{ -[x(n) - m(i)]² / 2s(n)² }

where [x(n) - m(i)] is the Euclidean distance between x(n) and m(i), which should not exceed the distance for which the Heisenberg Uncertainty Principle is applicable (e.g. ~ 300 nm), and s(n) is the standard deviation applicable to all neurons in the hidden layer subject to quantum gate effects. (It is also sometimes taken as the width of the Gaussian associated with the cluster algorithm prototype.)
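One simple way to monitor that condition in a simulation would be something like the sketch below (my own illustration; the input values, the centre m(i), the width s(n), and treating the ~300 nm scale as a hard cutoff are all assumptions):

```python
# Sketch of the Gaussian hidden-unit function quoted above,
# Phi(n) = exp(-||x(n) - m(i)||^2 / (2*s(n)^2)), with a check that the
# Euclidean distance stays under the ~300 nm scale mentioned in the text.
# The input, centre, width, and the hard cutoff are illustrative assumptions.
import numpy as np

def phi(x, m, s, max_dist=300e-9):
    """Gaussian RBF activation, flagging distances beyond the assumed limit."""
    d = np.linalg.norm(x - m)                 # Euclidean distance ||x(n) - m(i)||
    if d > max_dist:
        raise ValueError("distance exceeds the ~300 nm scale assumed above")
    return np.exp(-d**2 / (2 * s**2))

x = np.array([120e-9, 40e-9])                 # input vector (metres, assumed)
m = np.array([100e-9, 60e-9])                 # cluster centre m(i)
print(phi(x, m, s=50e-9))                     # Gaussian output in (0, 1]
```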
All of the preceding is consistent with the "holarchy" cited by Mensan Christina Anne Knight (see my July 12 post) in her 'Systems Approach to the Afterlife' and first used by Arthur Koestler in The Ghost in the Machine. As she points out and Koestler notes, this embodies the "interdependence, interrelations and interaction of subsystems which results in a form of self-organization from which a larger system is an emergent product".
In other words, consciousness is not an epiphenomenon of the brain but an "emergent product". This is assured by the novel role of QM, and especially the uncertainty principle, which allows the sort of creative behavior and thought we associate with a highly functioning consciousness. Hence, no "soul" is needed for consciousness to operate. Moreover, it has the capacity to be fully rational in its operation, Donald Trump being the exception, obviously. And the consequences at death? Again, in Ms. Knight's words:
"If the conscious mind is located at the apex of this holarchic system , then the collapse of the substructure beneath it has terminal consequences."
Thus,
"The conscious mind becomes just another impermanent system subject to physical laws. It does not survive death because it cannot exist outside the holarchic structure from which it emerged"
It would appear, then, that the WSJ letter writer underestimates the power of Materialism (in its quantum-based form, called Monistic physicalism) to account for consciousness, as well as to render any "soul" redundant.