Tuesday, January 14, 2025

David Bohm's Stochastic Interpretation Of QM - Continuing The Celebration of the International Year of Quantum Science

We continue our celebration of the International Year of Quantum Science & Technology by examining David Bohm's contribution - now known as the Stochastic Interpretation of Quantum Mechanics. This is also Part 2, dealing more specifically with Bohm's work and with what arguably provided the motivation for the attacks on Bohm in the middle years of the 20th century.

To refresh memories, Louis de Broglie, in his Ph.D. thesis (1924), had postulated the existence of what he called “matter waves”.  In effect, he postulated that material particles have wave properties and can actually be treated as waves.  Hence, every material particle (electron, proton etc.) has associated with it a de Broglie wave with a wavelength defined by a simple mathematical equation:

λ_D = h/mv

where h is Planck’s constant, and m the mass, v the particle velocity.
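As a quick numeric illustration (a Python sketch; the choice of an electron moving at 1% of light speed is arbitrary), the de Broglie wavelength comes out at the atomic scale:

```python
# de Broglie wavelength lambda_D = h / (m v) for an electron at 1% of c.
# Constants are CODATA values; the speed is an illustrative choice.
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
v = 0.01 * 2.99792458e8  # 1% of light speed, m/s

lam = h / (m_e * v)
print(f"de Broglie wavelength: {lam:.3e} m")  # ~2.4e-10 m, atomic scale
```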

    Only later (1927) did we learn that de Broglie waves (also called ‘B-waves’) had actually been proven to exist, from the results of the now famous Davisson-Germer experiment (setup below):


      The experiment consisted of firing an electron beam from an electron gun directed to a piece of nickel crystal at normal incidence. An accident occurred in which air entered the chamber, producing an oxide film on the nickel surface.

 To remove the oxide, Davisson and Germer heated the specimen in a high-temperature oven, not knowing that this affected the formerly polycrystalline structure of the nickel. The effect of the accidental heating was to form large single-crystal areas with crystal planes continuous over the width of the electron beam. To make a long story short, when the experiment re-commenced the electrons were scattered by atoms in crystal planes inside the nickel crystal, leaving patterns from which the de Broglie wavelength (λ) could be calculated according to:

λ = 2d sin(90° − θ/2)

where d is the spacing of the crystal planes and θ the scattering angle of the diffracted beam.
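Plugging in the commonly quoted Davisson-Germer values (plane spacing d ≈ 0.091 nm for nickel, intensity peak at θ = 50°), a quick Python check recovers the measured wavelength:

```python
import math

# Davisson-Germer check: lambda = 2 d sin(90 deg - theta/2), using the
# commonly quoted values d = 0.091 nm (Ni plane spacing), theta = 50 deg.
d = 0.091     # nm, interplanar spacing
theta = 50.0  # degrees, scattering angle of the diffraction peak

lam = 2 * d * math.sin(math.radians(90 - theta / 2))
print(f"measured de Broglie wavelength: {lam:.3f} nm")  # ~0.165 nm
```

This agrees closely with the de Broglie prediction h/p ≈ 0.167 nm for the 54 eV electrons used in the experiment.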

   The point is that de Broglie waves represent the wave mirror image of matter.  David Bohm then incorporated this result into his own theory of quantum mechanics. Bohm and his Birkbeck College colleague Basil J. Hiley[1] not only concurred with the physical reality of de Broglie waves (or B-waves) but also put forward that they were guided by a clock synchronism mechanism. This was set at a certain rest frequency, f₀, and also for frequencies in non-rest frames.  This mechanism would provide a “phase locking” as if guided via “synchronous motors”.  Hence, the genesis of the term “pilot waves.”

In the relativistic limit, a particle of rest mass m₀ has an associated rest frequency:

f₀ = m₀ c²/ h

Changing to the angular frequency ω₀, to make the mechanism consistent with that proposed by Bohm and Hiley:

ω₀ = 2π f₀ = 2π m₀ c²/ h

Replacing the Planck constant h by ħ = h/2π, the reduced Planck constant:

ω₀ = m₀ c²/ ħ

Which is the “clock frequency” in the particle rest frame.
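As a numerical illustration in Python (taking the electron as the particle, with CODATA constants), this rest-frame clock frequency is enormous:

```python
# Rest-frame clock frequency omega_0 = m0 * c^2 / hbar for an electron.
m0 = 9.1093837015e-31    # electron rest mass, kg
c = 2.99792458e8         # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J*s

omega0 = m0 * c**2 / hbar
print(f"rest-frame clock frequency: {omega0:.3e} rad/s")  # ~7.76e20 rad/s
```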

 There is also an additional condition, known as the Bohr-Sommerfeld condition, for the clock to remain in phase with the pilot wave:

∮ p dx = nh

Now, the momentum p = m₀c is constant around the loop, so the integral becomes:

2πx (m₀c) = nh

 And the emergence of the de Broglie wavelength (λ_D = h/p) is evident in the equation. In this sense, we have:  2πx = n(h/m₀c) = nλ_D
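The condition 2πx = nλ_D can be checked numerically. A Python sketch (using CODATA constants and taking, as the illustrative case, the electron in the n = 1 Bohr orbit, whose speed is v = αc) verifies that the orbit circumference equals one de Broglie wavelength:

```python
import math

# Standing-wave check 2*pi*r = n*lambda_D for the n = 1 Bohr orbit:
# the circumference 2*pi*a0 should equal one de Broglie wavelength.
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
alpha = 7.2973525693e-3  # fine-structure constant
a0 = 5.29177210903e-11   # Bohr radius, m

v = alpha * c                    # electron speed in the n = 1 orbit
lam = h / (m_e * v)              # de Broglie wavelength
ratio = lam / (2 * math.pi * a0)
print(f"lambda_D / circumference = {ratio:.5f}")  # ~1.00000
```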

 Or the same expression (2πr = nλ_D) as for the standing waves in an atom.  In Bohm’s own development, the procession of B-waves is actually enfolded within a “packet” of P-waves:



The axis labeled E is actually the real part of the electric field component, E_z. The width of the P-wave packet is denoted by the spread:

Δk = π/(x − x₀)

 Where x₀ denotes the center point of the wave packet. In other words, if the center point is at x₀ = 0, the spread is just Δk = π/x.

The wavelength, λ = 2π/Δk, is then of the same order as the width of the packet. E.g. if Δk = π/x then λ = 2π/(π/x) = 2x, so if x = 1 nm, then λ = 2 nm and:

Δk = π/x = π/(1 nm) ≈ 3.14 nm⁻¹

 The maximum of the wave packet is approximated closely by the square of the amplitude:  [E_z]² = 4 sin²[Δk(x − x₀)] / (x − x₀)²

 We can check the limits of the preceding. Let x₀ = 0; then:

[E_z]² = 4 sin²(Δk x) / x²

 Conversely, at any point where Δk(x − x₀) is a multiple of π, the numerator vanishes and:  [E_z]² = 4 sin²[Δk(x − x₀)] / (x − x₀)² = 0
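The packet amplitude can be evaluated numerically. A Python sketch (with the illustrative values Δk = π nm⁻¹ and x₀ = 0 from above; points are chosen away from the center, where the formula is 0/0):

```python
import math

# Packet amplitude [Ez]^2 = 4 sin^2[dk (x - x0)] / (x - x0)^2,
# evaluated with illustrative values dk = pi (nm^-1), x0 = 0, x in nm.
def Ez_squared(x, dk=math.pi, x0=0.0):
    return 4 * math.sin(dk * (x - x0))**2 / (x - x0)**2

for x in (0.1, 0.5, 1.0, 3.0):
    print(f"x = {x:.1f} nm -> [Ez]^2 = {Ez_squared(x):.3f}")
# The amplitude is largest near the packet center and falls off as
# 1/(x - x0)^2, vanishing wherever dk*(x - x0) is a multiple of pi.
```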

   Thus, the P-wave packet ceases to exist as a discrete or localized entity and thereby loses its particle properties. This is undoubtedly what drove most of Bohm's opponents bonkers: "Wait!!  He just disposed of particles!"

Having shown the dominant wave aspect of matter, the next battle became that of “locality” vs. “nonlocality”.  This conflict in turn set up the battle between accepting quantum mechanics as a complete theory and rejecting it as incomplete.  Enter now (in 1935), Albert Einstein along with two colleagues, Boris Podolsky and Nathan Rosen.  The trio devised a thought experiment to try to show quantum mechanics was incomplete. This has since been called “the EPR experiment” after the first initials of their surnames: Einstein, Podolsky and Rosen (E-P-R).

 They imagined a quantum system (helium atom at A) which could be ruptured such that two electrons were dispatched to two differing measurement devices, X1 and X2. 

X1  (+ ½ ) <-----(A)------>(- ½ ) X2


Each electron would carry a property called 'spin'. Since the helium atom itself had zero total spin (the two electrons' spins canceling each other out), this meant one would have spin (+1/2), the other (-1/2).  Thus, we manage to skirt the Heisenberg principle, and obtain both spins simultaneously without one measurement disturbing the other. We gain completeness, but at a staggering cost: this simultaneous knowledge of the spins implies that information would have had to propagate from one spin-measuring device (on the left side) to the other (on the right side) instantaneously!  This was interpreted to mean faster-than-light communication, which violates special relativity.

In effect, a paradox ensues: quantum theory attains completeness only at the expense of another fundamental physical theory - relativity. By this point, Einstein believed he finally had Bohr by the throat. Figuring Bohr might have some trick or sly explanation up his sleeve, Einstein went one better at the 6th Solvay Conference held in 1930, actually designing a thought-experiment device (the famous photon box) that he was convinced would have Bohr in tears trying to find a solution.

                                                                      


Whenever the door flaps open, even for a split second, one photon escapes and the weight difference (between original box and after) can be computed using Einstein's mass-energy equation, e.g.: m = E/ c2. Thus, the difference is taken as follows:

Weight (before door opens) - weight (after )

(E.g.  with 1 photon of mass m = E/ c2   gone)

    Since the time for brief opening is known (Δ t) and the photon's mass can be deduced from the above weight difference, Einstein argued that one can in principle  find both the photon's energy and time of passage to any level of accuracy without any need for the energy-time uncertainty principle.

     In other words, the result could be found on a totally deterministic basis!  Bohr, for his part, nearly went crazy when he studied the device, and for hours worried there was no solution and maybe the wily determinist was correct after all. When Bohr did finally come upon the solution, he realized he'd hoisted the master with his own petard.

     The thing Einstein overlooked was that his very act of weighing the box translated to observing its displacement (say, δr = r₂ − r₁) within the Earth's gravitational field. But according to Einstein's own general theory of relativity, clocks actually do run slower in gravitational fields (a phenomenon called 'gravitational time dilation'). In this case, for the Earth, one would have a fractional difference in proper time, proportional to g δr/c², as a fraction of the time passage t. But this meant the uncertainty principle had to be used with Δt factored in.

 (I.e.  ΔE Δt ≥ h/2π)
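To get a feel for the size of the gravitational clock effect Bohr invoked, here is a quick Python estimate of the weak-field fractional time dilation g·δr/c² (the 1 mm displacement is an arbitrary illustrative value):

```python
# Fractional gravitational time dilation for a box displaced dr in Earth's
# field: dt/t ~ g*dr/c^2 (weak-field limit). dr = 1 mm is illustrative.
g = 9.81          # surface gravity, m/s^2
c = 2.99792458e8  # speed of light, m/s
dr = 1e-3         # displacement, m

frac = g * dr / c**2
print(f"fractional clock shift: {frac:.2e}")  # ~1.1e-19
```

Tiny as it is, this shift is exactly what rescues the uncertainty principle in Bohr's rebuttal.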

  Years later, physicist John S. Bell asked the question: 'What if the E-P-R experiment could actually be carried out? What sort of mathematical results would be achieved?' In a work referred to as "the most profound discovery in the history of science", Bell then proceeded to place the E-P-R experiment in a rigorous and quantifiable context, which could be checked by actual measurements.

    Bell formulated a thought experiment based on a design similar to that shown in the earlier EPR sketch. Again we have two electrons of differing spin flying off to two separate detectors  D1 and  D2:

D1 (+ ½ )<--*---[ o  ]----*-->(- ½ ) D2 


Bell made the basic assumption of locality (i.e. that no communication could occur between detector D1 and detector D2 faster than light speed). In what is now widely recognized as a seminal work of mathematical physics, he showed that no theory upholding locality could reproduce all the predictions of quantum mechanics. His result required that, for any local theory, a certain sum of correlations, S, be less than or equal to 2 (S ≤ 2). This result, so pedestrian on the surface, became known as the 'Bell Inequality'. Little known then, it would propel three quantum physicists (Alain Aspect, John F. Clauser and Anton Zeilinger) to the Nobel Prize five decades later.

By 1982 Alain Aspect and his colleagues at the Institut d'Optique in Orsay were determined to actually test Bell’s Inequality using the original E-P-R quantum system (from the EPR thought experiment).  To that end the team set up an arrangement as sketched below:


Rather than electron spins, photon polarizations (P1 and P2) had to be detected and determined. These were observed with photon pairs emitted by calcium atoms (excited using krypton-ion and dye lasers) and arriving at two different analyzers, A1 and A2. The results of these remarkable experiments disclosed apparent and instantaneous connections between the photons at A1 and A2. Say twenty successive detections are made; then one obtains at the respective analyzers (where a ‘1’ denotes polarization detection with E vector up and ‘0’ denotes polarization detection with E vector down):

A1:   1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0

A2:   0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
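Mapping each '1' (E vector up) to +1 and each '0' (E vector down) to −1, the correlation coefficient of the two strings above can be checked in a few lines of Python:

```python
# Correlation coefficient of the two detection strings, mapping '1' -> +1
# (E vector up) and '0' -> -1 (E vector down).
a1 = "10101010101010101010"
a2 = "01010101010101010101"

x = [1 if ch == "1" else -1 for ch in a1]
y = [1 if ch == "1" else -1 for ch in a2]
corr = sum(xi * yi for xi, yi in zip(x, y)) / len(x)
print(corr)  # -1.0, i.e. 100% anti-correlation
```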

 On inspection, there was found to be a 100% anti-correlation (i.e. 100% negative correlation) between the two and an apparent nonlocal connection. In practice, the experiment was set out so that four (not two - as shown) different orientation 'sets' were obtained for the analyzers. Each result is expressed as a mathematical (statistical) quantity known as a 'correlation coefficient'. The results from each of 4 orientations (I, II, III, IV)  were then added to yield a sum S:

S = (A1,A2)I + (A1,A2)II + (A1,A2)III + (A1,A2)IV

   In his experiments, Aspect determined the sum with its attendant indeterminacy to be:   S = 2.70 ± 0.05. In so doing he experimentally violated Bell’s Inequality (S ≤ 2) - confirming the quantum mechanical predictions - and in the process reduced the EPR Paradox to a simple misunderstanding of quantum mechanics in its authors' minds.
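The quantum prediction can be sketched in Python. Note this uses the standard CHSH combination, which differs from the simple sum above by a minus sign on one term; with the photon-polarization correlation E(a, b) = cos 2(a − b) it reaches 2√2 ≈ 2.83, above the local bound of 2 and close to Aspect's measured 2.70:

```python
import math

# CHSH check: with quantum correlations for polarization-entangled photons,
# E(a, b) = cos 2(a - b), the CHSH combination reaches 2*sqrt(2) > 2 at the
# standard analyzer angles a = 0, a' = 45, b = 22.5, b' = 67.5 (degrees).
def E(a, b):
    return math.cos(2 * math.radians(a - b))

a, ap, b, bp = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(f"S = {S:.3f}")  # 2*sqrt(2) ~ 2.828, violating the local bound S <= 2
```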

 Regarding the various violations of the Bell Inequality, David Bohm  considered an alternative quantum world, based on different orders of manifestation which he called explicate and implicate.  To Bohm, the readily observable order of the macrocosm had unfolded or explicated. That is, its host of apparently diverse objects and processes constituted a divergence from unified order. This is the order at which Einsteinian locality and determinism would have some relevance. (After all, Newtonian mechanics can also be used to make predictions about the motions of bodies - such as pool balls and artificial satellites).

  However, this plurality of objects (subatomic particles, planets, stars, galaxies) is ultimately enfolded in a higher dimensional implicate order. This order is hidden or unseen (hence 'implicate') and not perceived by lower dimensional beings.  To render this more concrete Bohm devised a number of excellent analogies.  For example, you walk into a room and see two television monitors, A and B. Each shows the image of a fish, one in lateral view, the other face-on. The sketch of the presentation is shown in the diagram below:

David Bohm's 'fish' experiment to portray multi-dimensional reality


The casual observer may simply deduce that he’s seeing two different fish, one on each screen, each in a separate fish tank. The observer then walks into an adjoining room where he confirms that each monitor is receiving input from two distinct camcorders trained on one aquarium, with only one fish inside it. One camcorder is aimed at the front of the aquarium, the other at the side. The fish is resting with its face to the front.  

At that point, the observer has recognized that at the higher (three-dimensional) level of reality there is one fish, viewed in two different (two-dimensional) perspectives. By analogy, Bohm has suggested a similar error of perception applies to how we perceive the particulate (unfolded) world around us.  As Bohm describes his result[2]:

 In the implicate order we have to say the mind enfolds matter in general and therefore the body in particular. Similarly, the body enfolds not only the mind but also in some sense, the entire material universe.

 Bohm offered this in the hope of showing how we can be deceived into thinking the explicate, particle-dominated order of separation is the valid one. But it is actually only a virtual display, an artificial reference field for 3-dimensional brains.

 Such an error is costly - in terms of confining our attention to a limited realm of fragmentary illusion, instead of seeing beyond it.  For example, like the casual observer of the two TV-monitor fish images, a casual observer of the Aspect experiment might conclude that the two photons (registered at separate detectors) are themselves separate. Bohm's implicate order prevails upon the observer to think of the photons instead as having always been connected as one whole - but perhaps in a higher dimension.

 In the historic sense, David Bohm’s work provided a useful and verifiable perspective on what like-minded colleagues have said is also the basis of a higher dimensional holographic reality, an elevation above purely reductionist conceptions. Here, physicist Bernard d’Espagnat’s words (In Search Of Reality) are certainly worthy of consideration [3]:

The experimental corroboration of nonseparability (i.e. nonlocality) quite obviously constitutes a strong argument against the hypothesis of objective realism as applied to microscopic objects, and even….against that of objectivist realism applied to macroscopic objects only.

Readers who wish to read his landmark book Wholeness and the Implicate Order can access it at the link below:

DavidBohm-WholenessAndTheImplicateOrder.pdf (gci.org.uk)

To see more contributions from the Physics Today Archives go to the link at:

https://physicstoday.org/quantum.

----------------------------------------------------------

Addendum: Bohm's Uncertainty Principle

Bohm is primarily concerned with the canonically conjugate field momentum π_k, for which the associated coordinates, i.e. Δt, Δφ_k, fluctuate at random. Thus, we have, according to Bohm:

π_k = a (Δφ_k / Δt)

 Where a is a constant of proportionality, and Δφ_k is the fluctuation of the field coordinate. If the field fluctuates in a random way, the region over which it fluctuates is:

(δ Δφ_k)² = b (Δt)

Taking the square root of both sides yields:

δ Δφ_k = b^1/2 (Δt)^1/2

 Bohm notes that π_k also fluctuates at random over the given range, so:

 δπ_k = a b^1/2 / (Δt)^1/2

 Combining all the preceding results, one finally gets a relation reflective of the Heisenberg principle, but time independent:  δπ_k (δ Δφ_k) = ab

 This is analogous to Heisenberg’s principle, cf.

δp δx ≥ ħ

Where the product ab plays the same role as ħ.
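The time independence of this product is easy to check numerically. A Python sketch (the values of a and b are arbitrary illustrative constants):

```python
import math

# Check that the fluctuation product is independent of the interval dt:
# d_phi = b^1/2 * dt^1/2 and d_pi = a * b^1/2 / dt^1/2, so for any dt
# the product d_pi * d_phi = a*b.
a, b = 2.0, 3.0

products = []
for dt in (0.1, 1.0, 10.0):
    d_phi = math.sqrt(b * dt)     # field-coordinate fluctuation range
    d_pi = a * math.sqrt(b / dt)  # conjugate-momentum fluctuation
    products.append(d_pi * d_phi)

print(products)  # each entry equals a*b = 6.0 (up to rounding)
```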


[1] Bohm, D. and Hiley, B.J. (1980). Foundations of Physics, 10, 1001-1016.

[2] Bohm, Wholeness and the Implicate Order, p. 209.

[3] d’Espagnat, In Search of Reality, p. 143.



