We conclude this instalment with a workable neural network that can actually accommodate quantum dot components to act as input switches, enabling changes to the electro-chemical impulse signals arriving at critical neurons in the amygdala of the brain. As we saw, this is the "fear center" and the central clearing house for all supernatural detritus that can wreak havoc on vulnerable psyches. If we can therefore neutralize it, or even partially compromise it (by mixing signals or weakening their influence at the outputs), we might finally put the unnecessary fear of supernatural drivel behind humanity. Humans have more than enough to occupy their worry hours, what with terror attacks, financial meltdowns, diseases, hurricanes, oil spills, and earthquakes, without adding the phantasmagorias of "Satan", "Demons", "Hell" and other rubbish that belongs in the existential-ontological garbage can. (Along with the pseudo-sources that give rise to them.)
The change in efficacy arrives via invoking neuronal assemblies and super-assemblies, which are amenable to a particular type of neural network processing that relies on "clustering algorithms" and for which quantum dots can be located at critical junctures in the network to minimize input variations and assure specific outputs. To fix ideas, I direct readers to Figure 2 and its parts A, B and C. Part (A) shows a neuronal super-assembly, part (B) a biological and quantum analog to one center of neuronal action, and part (C) the workable neural network known as the radial basis function (RBF) network.
A neuronal assembly can also be described by connections to some N synapses (where N > 100,000). This is shown simplified in the left side of the same diagram (B), where the neuron A now has four rays (AB, AC, AD, AE) representing four connections of the same neuron to four different synapses. Bear in mind the neuron A has any number of quantum wave state possibilities associated with it: firing or not firing, action potential or no action potential, and perhaps a million others. In a higher-order neuronal assembly one might have 1,000 neurons, each connected to 100,000 synapses, for which an extremely complicated figure would result!
Consider now the quantum aspect. A neuron 'A' either fires or not. The 'firing' and 'not firing' can be designated as two different quantum states identified by the numbers 1 and 2. When we combine them together in a vector sum diagram, we obtain the superposition U(A) = U(1) + U(2), where the wave function U(A) (on the left side) applies to the neuron 'A'. What if one has 1,000 neurons, each to be described by the states 1 and 2? In principle, one can obtain the vector sum as shown in the above equation for neuron A and combine it with all the other vector sums (using the quantum superposition principle) for each of the 999 other neurons. The resulting vector sum represents the superposition of all subsidiary wave states and possibilities - at least those for the 'firing-not firing' states! The end product of this is a neuronal 'assembly' or aggregate of neurons (and associated synapses, axons, dendrites, etc.) which performs some function or acts like a circuit (for example a simple reflex arc, as when one burns a finger and automatically removes it from the source).
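To make the two-state bookkeeping concrete, here is a minimal numpy sketch of a single neuron 'A' held in a superposition of its 'firing' and 'not firing' basis states. The amplitudes c1 and c2 are hypothetical values chosen only so the probabilities sum to one; the original text does not specify them.

```python
import numpy as np

# Basis states for neuron 'A': state 1 = "firing", state 2 = "not firing".
fire = np.array([1.0, 0.0])
rest = np.array([0.0, 1.0])

# Hypothetical amplitudes; any pair with c1^2 + c2^2 = 1 would serve.
c1, c2 = np.sqrt(0.7), np.sqrt(0.3)

# The superposition (vector sum) U(A) = c1*U(1) + c2*U(2).
psi_A = c1 * fire + c2 * rest

# Probabilities of finding 'A' firing vs. not firing on measurement.
print(np.abs(psi_A) ** 2)   # -> [0.7  0.3]
```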
At a higher level of the hierarchy, one refers to the neuronal super-assembly (A) or 'super-circuit', within which considerations such as networks, optimization of paths, and 'adjacency and order' take precedence. Each solid black circle denotes a neuronal assembly such as described above. These are intimately connected, each to the other, to perform some specific function. (For example, in the brain region known as the limbic system there is a structure called the hippocampus, within which an agglomerate of neuronal assemblies is dedicated to maximizing retention of events as memory.)
We then ask: what form or type of operational network can best exploit these properties to attain the desired function, that is, to shut down or minimize negative inputs carrying information that amplifies unwarranted fears? Diagram (C) gives the clue: it will be a network for which a clustering algorithm is optimized, since such clustering emulates the neuronal assemblies.
In the clustering algorithm view (C), the centers of action (the m's) are going to be neurons, as occur in neuronal assemblies. Each cluster center m(i) is moved closer to the input x(n) because the update rule minimizes the error vector [x(n) - m(i)]. The relevant equation here is:
m(i)' = m(i) + a[x(n) - m(i)], where a is the "learning rate" or, in this case, the information processing (flow and refresh) rate in the network. The iterative aspect of the equation emphasizes that the algorithm is applicable for clustering the input vectors in real time as they are acquired sequentially. Thus, the quantum gates will have an ongoing active role, continually updating inputs as well as outputs.
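As a minimal sketch of how this update rule behaves, the following Python snippet applies m(i)' = m(i) + a[x(n) - m(i)] to the center nearest each sequentially arriving input. The centers, inputs and learning rate are hypothetical stand-ins: this is the standard online clustering step the equation describes, not a model of any quantum gate hardware.

```python
import numpy as np

def update_clusters(centers, x, a=0.1):
    """One online step of m(i)' = m(i) + a[x(n) - m(i)] for the nearest center.

    centers : (k, d) array of cluster centers m(i)
    x       : (d,) input vector x(n), acquired sequentially
    a       : learning rate (the information flow/refresh rate in the text)
    """
    # Pick the center closest to x, i.e. the one minimizing |x(n) - m(i)|.
    i = np.argmin(np.linalg.norm(centers - x, axis=1))
    # Move that center toward x, shrinking the error vector [x(n) - m(i)].
    centers[i] += a * (x - centers[i])
    return centers

# Hypothetical demo: two centers, one sequentially arriving input vector.
m = np.array([[0.0, 0.0],
              [5.0, 5.0]])
m = update_clusters(m, np.array([1.0, 0.5]))
print(m)   # the first center has moved toward the input
```

Run on a stream of inputs, each call nudges exactly one center, which is what lets the clusters track the input vectors in real time as the text describes.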
What about the RBF network itself? This is shown in Fig. 3. Cursory inspection shows some similarities to the previously considered (numerically vetted) MLP network, but also key differences. For example, the hidden layer j = 0, 1, 2, ..., p has no bias inputs such as X(0) for the MLP. Both hidden layer operators transit to Y(k) and Y(m), which in turn feed into one bias point per cluster at Phi(0).
The output of a neuron in the output layer is then defined according to: y(k=j) = f(jth output of the j = 0, 1, 2, ..., p neurons of the hidden layer), which can be controlled by placing a quantum gate (likely a NOT gate using the Pauli spin matrix σ(z) = (1, 0 | 0, -1), where, as before, the left pair is the matrix 'top' row and the right pair the matrix 'bottom' row).
In effect, with correct placement of quantum dot NOT gates in the hidden layer, it will be possible to alter an input (0, 1) to an output (0, -1), say at the bias point Phi(0). That switching effect will then transfer to the output layer, inverting the outputs for the k = 1, 2, ..., m neurons, principally at the points y(1), y(k=j) and y(m). The key advantage of the RBF over an MLP network is that the RBF leads to better decision boundaries and improved classification of the information from the hidden layer. (This might be very critical, since not all information going to the amygdala is negative. Some is there evolutionarily, for example, to arouse an alert to a real threat!)
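A minimal numpy sketch of the sign inversion just described, taking σ(z) exactly as written above: applied to the column vector (0, 1), the gate returns (0, -1). The state labels are illustrative only.

```python
import numpy as np

# Pauli spin matrix sigma_z: top row (1, 0), bottom row (0, -1).
sigma_z = np.array([[1.0,  0.0],
                    [0.0, -1.0]])

# Input state at the bias point Phi(0), written as the column vector (0, 1).
state_in = np.array([0.0, 1.0])

# Applying the gate inverts the sign of the second component: (0, 1) -> (0, -1).
state_out = sigma_z @ state_in
print(state_out)   # -> [ 0. -1.]
```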
A remaining question is whether or not one quantum NOT gate is required for each neuron in the hidden layer. I am going to argue 'no', given that quantum superposition of data elements can apply such that U = U(1) + U(0), where 1 and 0 denote bits entrained to move along as information to the outputs y(1), y(k=j) and y(m). In principle, then, it is feasible that each quantum gate can, according to its qubit (not bit) capacity, match up to many more neuron bits in transit within the hidden layer.
For example, an ordinary (computer) register only holds one of eight possible 3-bit combinations at a time, say 001, or 010, or 011. By contrast, a qubit register (or quantum gate acting as one) could accommodate all eight possible 3-bit combinations at once: 000, 001, 010, 100, 110, 101, 011, and 111. In general, for n bits (n a whole number), a qubit register could accommodate 2 to the nth power combinations at one time. Thus, 16 combinations could be held in memory for 4 bits, 32 for 5 bits, and so on. This marks an exponential (2 to the n) increase over any classical counterpart.
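The capacity claim can be illustrated with a short numpy sketch: applying a Hadamard gate to each of n = 3 qubits puts the register into an equal superposition over all 2^3 = 8 three-bit combinations. The use of Hadamards here is my own illustrative choice; the text above does not specify how the superposition is prepared.

```python
import numpy as np

n = 3                                    # register size in (qu)bits

# A classical 3-bit register holds exactly one combination at a time.
classical = '010'
print('classical register holds one value:', classical)

# A 3-qubit register is a 2^3-component state vector. Starting from |000>
# and applying a Hadamard gate to each qubit yields equal amplitude on
# all eight combinations 000, 001, ..., 111 simultaneously.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])              # single qubit |0>

register = ket0
H_n = H
for _ in range(n - 1):
    register = np.kron(register, ket0)   # build |000>
    H_n = np.kron(H_n, H)                # build H (x) H (x) H

state = H_n @ register
print(len(state))                        # -> 8 = 2**n combinations
print(state)                             # eight equal amplitudes, 1/sqrt(8) each
```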
The task now will be to check the workability of the RBF network by using a simulation code that can encode qubit capacity and be integral with the key network equations for the RBF network. What might be one way we know we're on the right track? Well, apart from getting the outputs we want at y(1), y(k=j) and y(m), we also want to ensure that the error function remains Gaussian and doesn't suddenly change. For example, it can be shown that the relevant error function for a given neuron designated for the output bias is:
Phi(n) = exp{-[x(n) - m(i)]^2 / 2s(n)^2}, where [x(n) - m(i)] is the Euclidean distance between x(n) and m(i), which should not exceed the distance for which the Heisenberg Uncertainty Principle is applicable, e.g. ~300 nm, and s(n) is the standard deviation applicable to all neurons in the hidden layer subject to quantum gate effects. (It is also sometimes taken as the width of the Gaussian associated with the cluster algorithm prototype.)
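For reference, here is a minimal Python sketch of the Gaussian error (activation) function Phi(n) as written above. The inputs, center and width are hypothetical; the point is only that the function stays Gaussian, peaking at 1 when x(n) coincides with m(i).

```python
import numpy as np

def phi(x, m, s):
    """Gaussian RBF activation: Phi(n) = exp(-[x(n) - m(i)]^2 / (2 s(n)^2)).

    x : input vector x(n)
    m : cluster center m(i)
    s : width (standard deviation) of the Gaussian
    """
    d = np.linalg.norm(x - m)            # Euclidean distance [x(n) - m(i)]
    return np.exp(-d**2 / (2 * s**2))

# Hypothetical check: the activation stays in (0, 1] and is largest
# when x coincides with the center m.
x = np.array([1.0, 2.0])
m = np.array([1.5, 2.5])
print(phi(x, m, s=1.0))    # some value below 1
print(phi(m, m, s=1.0))    # -> 1.0 exactly at the center
```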
If the error divergence were excessive, it would suggest we are killing too much information on the axon pathways, tossing the "baby" (alerts to real threats that are useful) with the "bathwater" (unreal and baseless threats of "demons", "Hell" etc.). Stay tuned, as we will have more concrete help for the troubled minds afflicted by Demon and Hell phantasmagorias sooner rather than later!
1 comment:
This is a terrific piece and I believe it's workable. I also believe the technology already exists to do just what you have described.
Since about 2000 we've known cybernetic neural interfaces are possible and have actually been implanted in rats etc. and caused them to change their behavior. I believe the same can be done for humans.
I hope it's done sooner than later, so Fundamentalists won't be able to play their fear card anymore.