
Tuesday, May 19, 2020

Revisiting The Concept Of Parallel Universes


The issue of whether there exist parallel universes has been around for some time, and has become more or less a stock theme for a lot of science fiction, some good, most bad. But what do we mean by the term "parallel universe" and is it possible one might ever detect one? Is a parallel universe the same as a "universe" based on the "Many worlds" quantum mechanical interpretation of Hugh Everett III?

Let's take the last question first, because it's the easiest to deal with. It also eliminates one potential source of confusion (and unfortunate conflation) right off the bat. In fact, Everett's "Many Worlds" interpretation was devised specifically as an alternative to the Copenhagen Interpretation of QM - in order to physically make sense of the principle of superposition in QM. According to this principle, before an observation is actually made to establish a determinate state, the object or particle exists in a multitude of different (quantal) states simultaneously.

As to the more exact definition of a "state", this was first given by Paul Dirac in his monograph The Principles of Quantum Mechanics ('The Principle of Superposition', p. 11):

"A state of a system may be defined as a state of undisturbed motion that is restricted by as many conditions or data as are theoretically possible without mutual interference or contradiction"

This definition itself needs some clarification. By "undisturbed motion" Dirac meant the state is pure and hence no observations are being made upon it such that the state experiences interference effects to displace it. In the Copenhagen Interpretation, "disturbance" of mutually defined variables (say x, p, or position and momentum) occurs when [x, p] = iħ = ih/2π, where h is the Planck constant. Thus, an undisturbed state must yield [x, p] = 0. Another way of putting this is that in the latter case the two variables commute, and in the former they do not. Hence Dirac's setting of upper limits in the last part of his definition, specifying as many conditions as "are theoretically possible without mutual interference".
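
The non-commutation is easy to check numerically. Below is a minimal sketch (my own illustration in Python/NumPy, not anything from Dirac) that applies the operators x and p = -iħ d/dx to a smooth test function on a grid and confirms that [x, p]ψ ≈ iħψ in the grid interior, i.e. that the two variables fail to commute:

```python
import numpy as np

hbar = 1.0                        # work in units where h/(2*pi) = 1
x = np.linspace(-10, 10, 2001)    # position grid
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2)           # a smooth test state (Gaussian)

def x_op(f):
    """Position operator: multiply by x."""
    return x * f

def p_op(f):
    """Momentum operator: -i*hbar d/dx (central differences in the interior)."""
    return -1j * hbar * np.gradient(f, dx)

# Commutator [x, p] acting on psi
comm = x_op(p_op(psi)) - p_op(x_op(psi))

# Away from the grid edges this equals i*hbar*psi, not zero
interior = slice(10, -10)
print(np.allclose(comm[interior], 1j * hbar * psi[interior], atol=1e-3))   # True
```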

Now, we need to fix ideas further to grasp what the Copenhagen Interpretation (which Dirac held to) was all about. Let's say I fire electrons from a special "electron gun" at a screen some distance away. At first glance one might conclude, reasonably and intuitively, that the electron motion behaves like an apple's if tossed at a wall. That is, there is always one electron which is on a single predictable path, following stages 1, 2, 3 and so on over time, toward the screen. This is a reasonable, common-sense sort of expectation but alas, all wrong!

According to the Copenhagen Interpretation of quantum theory the instant the electron leaves the "gun" it takes a large number of differing paths to reach the screen. Each path differs only in phase (but the differing phases determine the states), and has the same amplitude as each of its counterparts, so there is no preference. How does the electron differ from the apple? It takes all paths to the screen, the apple takes only one (at a time) to the wall. And the electron exhibits phases (as a wave) while the apple doesn't. The electron's wave function can be expressed:

ψ = ψ(1) + ψ(2) + ψ(3) + ... + ψ(N)

Here the total wavefunction for the electron is on the left hand side of the equation, while its resolved wave amplitude states (superposition of states) are on the right-hand side. If they are energy states, they would correspond to all possible electron energies from the lowest (1) to the highest or Nth state (N). There is absolutely no way of knowing which single state the electron has until it reaches the screen and an observation is made, say with a special detector (D).
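
As a toy illustration of adding equal-amplitude contributions that differ only in phase, here is a minimal sketch (my own construction, using a hypothetical two-slit geometry and arbitrary numbers, not values from any actual experiment): summing the complex amplitudes for two paths to each screen point and squaring the total reproduces the familiar interference fringes.

```python
import numpy as np

wavelength = 0.01            # arbitrary units; all numbers here are purely illustrative
d = 0.5                      # separation of the two paths (slits)
L = 100.0                    # distance from the slits to the screen
y = np.linspace(-5, 5, 1001) # positions along the screen

# Path length from each slit to the screen point y
r1 = np.sqrt(L**2 + (y - d / 2)**2)
r2 = np.sqrt(L**2 + (y + d / 2)**2)

# Two equal-amplitude contributions, differing only in phase
psi = np.exp(2j * np.pi * r1 / wavelength) + np.exp(2j * np.pi * r2 / wavelength)

prob = np.abs(psi)**2        # arrival probability across the screen
print(round(prob.max(), 2), round(prob.min(), 4))   # ~4 at bright fringes, ~0 at dark ones
```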

What bothered Everett and others was the Copenhagenites' claim that all such differing states existed simultaneously in the same observational domain for a given observer. And then, all but one of the states magically disappeared (referred to as "wave function collapse") when the actual observation was recorded.

To Everett it all seemed too contrived and artificial. What if...he asked himself...instead of explaining the superposition of states this way, one instead used the basis of "many worlds"? Not literal worlds, but rather "worlds" separated from each other denoting specific quantum states, in this case, for the electron?

Thus, instead of thinking of all quantum states (prior to observation) as co-existing in one phase space representation (by which I mean differing phase coordinates could be assigned to each electron) one could think of each phase attached to another "world", a quantum world. For the total duration T of time before the observation was made, all these "worlds" existed at the same time, and then - on observation - the "choice" for one became reality. However, in other quantum worlds those other choices might materialize.

Personally, I suspect Everett had a much more significant reason than superficial aesthetics to devise the "Many Worlds" interpretation. That is, it eliminated the troublesome issue of observer disturbance of observations so peculiar to the Copenhagen Interpretation. The core problem or basis is best summarized in Dirac's own words (op. cit., p. 4):

"If a system is small, we cannot observe it without producing a serious disturbance and hence we cannot expect to find any causal connections between the results of our observations"

In other words, the observation itself disrupts causation. For if the state is "interfered" with such that the observables don't commute (e.g. don't yield [x, p]= 0) then one can't logically connect states in a causal sequence. To quote Dirac (ibid.): "Causality applies only to a system which is left undisturbed".

The other issue that likely bugged Everett, propelling him towards "Many worlds", was the incessant Copenhagenites' debate over what level of consciousness was required for a given observer to collapse a wave function. Perhaps this was best epitomized by Richard Schlegel in his Superposition and Interaction: Coherence in Physics (1980, University of Chicago Press, p. 178), who refers to the opinion expressed once by Prof. Eugene Wigner (at a conference) that "the consciousness of a dog would effect the projection into a single state whereas that of an amoeba would not."

So, in this sense, "Many worlds" provided welcome relief.

However, it must not be confused with the "parallel universes" to which I will now turn, which I regard as actual PHYSICAL cosmi likely incepted from the selfsame primordial vacuum state (via inflation) as our own universe. Thus, an actual primordial vacuum - not a human observer or consciousness making observational choices- is the source of the real parallel universes. Thus, all putative parallel universes plausibly emerged from the primordial vacuum the way ours did, e.g. from the Big Bang.

Regarding the agency of inflation, current standard theories propose inflation starting at about 10⁻³⁵ s and doubling over a period of anywhere from 10⁻⁴³ to 10⁻³⁵ s after the initial inception. Estimates are that at least 85 such 'doublings' would be required to arrive at the phase where entropy rather than field-resident energy dominates. The initial size (radius) of our universe would have been likely less than a proton's - maybe 1 fermi (fm) or 10⁻¹⁵ m - by the time the doubling process began. By the time it ended (after 90 'doublings') it would have been around 1.25 x 10¹² m. This is roughly eight times the distance of Earth from the Sun. In effect, the role of inflation is to give cosmic expansion a huge head start or boost, without which our universe would be much smaller. Other parallel universes emerging around the same time might have been larger or smaller depending upon their specific values for the fundamental physical constants (e.g. alpha, the "fine structure constant", h - the Planck constant, G - the gravitational constant, and ε₀ - the permittivity of free space).
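
The doubling arithmetic is easy to check; a quick sketch (the 1 fm starting radius and 90 doublings are the figures quoted above, the AU value is standard):

```python
initial_radius = 1.0e-15      # ~1 fermi, in metres (figure from the text)
doublings = 90                # number of doublings quoted in the text

final_radius = initial_radius * 2**doublings
print(f"{final_radius:.2e} m")        # ~1.24e+12 m, in line with the ~1.25 x 10^12 m above

au = 1.496e11                 # one astronomical unit in metres
print(round(final_radius / au, 1))    # ~8.3 -> roughly eight times the Earth-Sun distance
```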

In the graphic below I show an "idealized multiverse" replete with parallel universes, each occupying geodesics specified under a coordinate φ, and separated by uniform angular measure Θ from adjacent universes.




The whole represents a 5-dimensional manifold in a toroidal topology. The topological space of the hypertoroid cosmos can therefore be represented by the global state space, a product of absolute hypertorus coordinate time (Θ) and 'all-space' (φ): G_L = Θ × φ

Now, I reiterate that this is an idealized model which assumes that N-cosmi were incepted at equal intervals of time - as manifested by the equal spacing in the longitude angle Θ.


In principle, we don't know a priori how "close" in complex time another parallel universe may be to our own. When one uses the assumption of "equal time intervals" between inceptions in our idealized multiverse, one isn't stating what those times are, and so they could be minuscule - and the smallest time unit imaginable is the unit tau, τ. (About 10⁻⁴³ s, and note Θ = f(τ).)

If we specify such an exact parallel universe time displacement we might be able to show how one parallel universe can be "mapped" topologically onto an adjacent one. As an example, let two parallel universes be distinguished by a 1-τ difference in fundamental time parameter, viz. [1 + 2τ] and [1 + 3τ], then we would require for connection, a mapping such that:

(Universe 'A'): f:X -> X = f(Θ,φ) = (Θ, 2φ)

(Universe 'B'): f:X -> X = f(Θ,φ) = (Θ, 3φ)

which means the absolute coordinate φ is mapped onto itself 2 times for [Universe A] and mapped onto itself 3 times for [Universe B]. Clearly, there’ll be coincidences for which: f(Θ,2φ) = f(Θ,3φ) wherein the two universes will 'interweave' a number of times.

For example, such interweaving will occur when φ = π/2 in [A] and φ = π/3 in [B]. The total set or system of multiple points obtained in this way is called a Synchronous temporal matrix. The distinguishing feature of this matrix is that once a single point is encountered, it is probable that others will as well. If one hyperspace transformation can occur linking parallel universes, A and B, then conceivably more such transformations can occur, linking A and C, D and E etc.
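
A minimal numerical sketch (my own construction, using the mappings exactly as written above) confirms the φ = π/2 and φ = π/3 example and shows that such 'interweaving' points form a whole family:

```python
import numpy as np

def f_A(theta, phi):
    """Universe A mapping: (theta, phi) -> (theta, 2*phi), angles taken mod 2*pi."""
    return theta % (2 * np.pi), (2 * phi) % (2 * np.pi)

def f_B(theta, phi):
    """Universe B mapping: (theta, phi) -> (theta, 3*phi), angles taken mod 2*pi."""
    return theta % (2 * np.pi), (3 * phi) % (2 * np.pi)

theta = 0.0   # hold the time-like coordinate fixed

# The example from the text: phi = pi/2 in A and phi = pi/3 in B land on the same point
print(f_A(theta, np.pi / 2))   # (0.0, ~3.14159)
print(f_B(theta, np.pi / 3))   # (0.0, ~3.14159)  -> an 'interweaving' point

# More generally, any image angle alpha yields a coincidence via phi_A = alpha/2, phi_B = alpha/3
for alpha in np.linspace(0.1, 6.0, 6):
    _, img_a = f_A(theta, alpha / 2)
    _, img_b = f_B(theta, alpha / 3)
    print(np.isclose(img_a, img_b))   # True each time
```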

What if both absolute toroidal coordinates (Θ,φ) map into themselves the same number of times? Say, something like:

f:X -> X = f(Θ, φ) = (2Θ, 2φ): Universe A

f:X -> X = f(Θ, φ) = (3Θ, 3φ): Universe B

For example, given the previous conditions for coordinate φ, now let 2Θ = 3Θ for discrete values of Θ (e.g. 2π). For all multiples of 2π, the same toroidal cosmos will be experienced - if the absolute time coordinates are equal (e.g. π/2 = φ in A, and π/3 = φ in B) then we will have: Universe A = Universe B.

What does this equality mean? I conjecture that it implies a briefly inter-phased chaotic state prevails in both A and B where the fundamental physical constants are not fixed (in a future blog I will appeal to quantum chaos to describe this). For all intents and purposes it is as if a "portal" of sorts exists between them, though that doesn't mean it'd be accessible to humans. We say that there exists "an interpenetration of different parallel universes" but not necessarily entailing transfer of bodies from one to the other. Note that though the physical state spaces (e.g. with constants h, G, e/m, etc.) may be alike, they can still differ in dimensionality! And we cannot disregard fractal dimensionality.

IF one has this condition, THEN it is feasible that the proposed David Deutsch experiment (See: The Fabric of Reality) to detect the interphasing of a parallel universe can be carried out, and the penetration of our universe by a parallel one validated. If the topological condition above has not been met, then we expect the Deutsch experiment will render a negative result, but this doesn't mean the parallel universe theory is necessarily invalid- only that the specific topological condition hasn't been met! (Absence of evidence here is not evidence of absence).

Wednesday, October 3, 2018

Yes! The Humanities Are In Decline And It Exposes A Moral Vacuum In Our Culture

In her recent WSJ op-ed ('The Humanities' Decline Makes Us Morally Obtuse') English Professor Paula  Marantz Cohen writes:

"Few people seem to be able to reconcile two overlapping truths - that someone can have a valid grievance in one context and be guilty of some version of the same thing in another. I see this as a failure of education. By 'education' I do not mean the workshops that teach us what not to say or to avoid offending others. That is training, not education."

Which is absolutely true. As an example, one can consider the case of Asia Argento, Italian actress and one of Harvey Weinstein's original accusers - who, in a radically divergent context, paid out $380,000 late last year to Jimmy Bennett, who had accused her of assault. (See e.g. TIME, Oct. 1, p. 31). As the piece goes on to note:

"The news came after Argento delivered  a rousing speech at the Cannes Film Festival in support of #MeToo and change in the industry."

But incredibly, after the Bennett revelations many mistakenly then took Ms. Argento to be a villainess, which is typical of the less (humanities) educated individuals Prof. Cohen writes about. These include those deficient in humanities exposure - which can range from classic literature to ethics, to metaphysics and philosophy-  as well as people who innately hate the humanities as inferior to science and technology.  As Prof. Cohen puts it:

"The humanities teach understanding but they also teach humility, that we may be wrong and our enemies may be right, that the past can be criticized without our necessarily feeling superior to it. That people's professed motives are not the whole story, and that the division of the world into oppressors and victims is a simplistic fairy tale."

For my own part, I think of how much more restricted my writing would be without the exposure to the humanities I had at Loyola University, for example. Those years and courses -  in everything from English lit, to metaphysics and ethics as well as comparative religion -  set the stage for my much larger framework of experience and education. This was beyond the standard STEM subjects one takes in the course of specialization for one's chosen major.  Were I to have been solely restricted to the STEM subjects, I'd never be writing about ethics, economics, religion, history now.  This blog Brane Space would be vastly limited to just math and science topics.

It strikes at the core of what Prof. Cohen is getting at that too many scientists even today confine themselves strictly to their specialty areas and seldom - if ever - venture outside them, so terrified are they of being criticized for stepping out of their assigned academic cubby holes. It kind of reminds me of the Albert Bartlett Physics Today essay from 2004 in which he expressed the fear many physicists have of writing about the population explosion our planet faces.

The core problem as described by Prof. Cohen seems to be (ibid.):

"Science, engineering and finance may be hard but literature, history and philosophy are complex - impossible to resolve with a yes - or no, right or wrong answer. This is precisely what constitutes their importance as a tool for living. Metaphysics takes its name from the idea it goes beyond 'hard' science into the realm of moral and intellectual speculation, where no empirical proof is feasible."

This is a spot-on observation and also one that explains why most physicists, for example, would never write a book such as 'Beyond Atheism, Beyond God' - which I wrote five years ago.  Why not? Because in that book my Loyola Liberal Arts- Metaphysics- Ethics persona emerged, in particular in those chapters dealing with quantum  mechanical conjectures, consciousness and how these affect human ethics and even religious propensities. (Explaining also why most physicists - though admitted atheists or agnostics - would never write any atheist or agnostic texts, as I did.)

In like manner, most physicists would not write a book such as 'The JFK Assassination - The Final Analysis'.  Why not?   Likely because most physicists either are not that invested in recent history, or not confident enough to write a 650-plus page book on one defining historical  event (like the JFK assassination) or more inclined not to veer out of their 'yes-no' cubby holes for specific research.    Informed history thus drove my writing of the book, but also the application of Newtonian physics in multiple areas - such as the kill shot, not to mention echo correlation analysis for the acoustic impulses.   In other words, I had no problem applying scientific principles where I found them to be warranted.

What we are talking about then is a process whereby one transcends the realm of binary-leaning physical reductionism to more complex analysis based on effective critical thinking.   As I've noted in previous posts, this is precisely the benefit of a strong liberal arts education, leading to the ability to evaluate the validity of information - social, historical, religious or ethical - and the credibility of the sources underlying these.  Thereby, one is enabled to not only distinguish false physics (e.g. perpetual motion machines) from the genuine form, but false,  revisionist history from the more genuine entity, and fake news from real news.

This is why Prof. Cohen's ending words are so important:

"If we want our nation to heal and thrive, we just put the study of literature, history and philosophy back at the center of our curricula and require that students study complex works - not just difficult ones."

To the last point, that means being able to read and understand Sartre's 'Being And Nothingness' -  and not just Sir Arthur Eddington's 'Space,  Time and Gravitation',  or Paul Dirac's 'Principles of Quantum Mechanics'.  The truly well-educated person should be able to do both!


Thursday, June 23, 2016

Is Michio Kaku A Pseudo Physicist For Accepting A "Universal Intelligence"?


Michio Kaku: A real physicist - or a pseudoscientist?

In a scathing commentary ('The Dangerous Growth of Pseudo-Physics') appearing in a recent (May, p. 10) issue of Physics Today, Prof. Sadri Hassani of Illinois State University rails against pseudoscience that is "rapidly growing" and has now made its way into physics - the most refined and majestic of the sciences apart from mathematics.

In his 1 3/4 page piece he cites a number of examples including:

- The 2014 publication of a "manifesto for a post materialist science" published in a journal entitled Explore "which elevates parapsychology and near death experience to the rank of quantum theory"

- The popularity of the book 'The Tao Of Physics' by Fritjof Capra, which purports to establish a parallel between Eastern mysticism and modern physics

- The enthusiasm for the (1979) book 'The Dancing Wu Li Masters' by Gary Zukav which hints at "conscious" or "intelligent" photons which know where they are in classical two-slit experiments.

- The misrepresentation of energy as "non-material" to apply to nonmaterial spirits and souls.

In response to the last the author asserts (p. 11):

"There is no instance in nature in which mass transforms into energy (or vice versa) without some material particles carrying that energy".  There is no connection between soul-matter equivalence of mysticism and energy-mass equivalence of modern physics".

Earlier, Hassani points out that two primary assumptions of quantum theory (QT) have "been the main source of much confusion and abuse since the inception of non-relativistic QT." These are:

1) That the square of the absolute value of the wave function ψ is the probability of the state of a system, i.e.

P = |ψ ψ*|

2) The superposition principle: If there are several paths available to the system, the total ψ is the appropriately weighted sum of the ψ's for each path.


However, I'd suggest that Paul Dirac's original framing of observer disturbance ('The Principles of Quantum Mechanics', Clarendon Press, Oxford, 1957, p. 4) might have more to do with the incessant perversions toward metaphysics, i.e.

"If a system is small, we cannot observe it without producing a serious disturbance and hence we cannot expect to find any causal connections between the results of our observations"

Thus, the entire notion of "observer disturbed" systems was born. Without adequate training in QT, however, too many extrapolated this to macroscopic systems when technically it only applied to quantum ones, thereby ignoring Dirac's initial emphasis: "If a system is small".

These misconceptions were then carried into whackadoodle land when one read, as in Richard Schlegel's monograph Superposition and Interaction: Coherence in Physics (1980, University of Chicago Press, p. 178), about the opinion expressed once by Prof. Eugene Wigner (at a QM conference) that "the consciousness of a dog would effect the projection into a single state whereas that of an amoeba would not."

So hell, if a dog like a French Poodle could "effect the projection into a single state" why not a human who observes LeBron James closely enough in an NBA game to make his jump shot bounce off the hoop at the last moment?  Again, the reason is that basketballs are macro sized objects, as opposed to electrons, protons, etc.

To combat these aberrations of mysticism filtering into modern physics, Hassani recommends reading assignments in high school and college physics courses "to make students aware of pseudoscientific nonsense". He also advocates the liberal use of RationalWiki as a resource, e.g.

http://rationalwiki.org


Finally, we come to the recent claims of  Michio Kaku, a theoretical physicist at the City College of New York (CUNY) and co-founder of String Field Theory.  He seriously  believes that the  theoretical particles known as “primitive semi-radius tachyons” are physical evidence that the universe was created by a higher intelligence.

Kaku, after analyzing the behavior of these sub-atomic particles - which can move faster than the speed of light and have the ability to “unstick” space and matter - has concluded that the universe is a “Matrix” governed by laws and principles that could only have been designed by an intelligent being. According to an article published by the Geophilosophical Association of Anthropological and Cultural Studies, he said:

 “I have concluded that we are in a world made by rules created by an intelligence. Believe me, everything that we call chance today won’t make sense anymore,”

He went on:

“To me it is clear that we exist in a plan which is governed by rules that were created, shaped by a universal intelligence and not by chance.”

This is an interesting development given that Prof. Kaku only two years ago advocated a mechanistic model of mind.

In criticism of that model I pointed out:
 
"where Kaku runs off the rails, as I did,   is in using this ridiculously simple reductionist metaphor to suggest human self-consciousness can result from an indefinitely large macro-assembly of logic elements.  But again, this is what a reductionist would do since he's trapped in a box of his own making, where his very dependence on component reality means he's unable to conceive of emergence.  It is actually emergence - contingent on the brain as a quantum mechanical entity - that leads to a full model of human consciousness"
 
My point here is that Kaku must have radically altered his take since if one invokes a "universal intelligence" it must embody some kind of consciousness.   This is a conclusion I also arrived at in my 2013 book, 'Beyond Atheism, Beyond God', but by a different route - considering Bell's theorem, quantum nonlocality, and the quantum potential. A synopsis of some of my arguments can be found in this post from 2011:

http://brane-space.blogspot.com/2011/04/quantum-mind-isnt-same-as-quantum.html
 
 The key answer to the question of whether Kaku (or myself) would be regarded as a "pseudophysicist" then would appear to depend on whether one accepts monistic or nonlocal physicalism. The second, as quantum physicist Henry Stapp showed ('Mind, Matter and Quantum Mechanics', 1983) allows for quantum theory to be applied to brain function. The first, based on "particles only"  reductionism, would not.

In the particles-only case, one would concur with the late Victor Stenger's take ('God and the Folly of Faith', p. 155) that:

"It does not matter whether you are trying to measure a particle property or a wave property. You always measure particles. Here is the point that most people fail to understand: Quantum mechanics is just a statistical theory like statistical mechanics, fundamentally reducible to particle behavior."

On the other hand, if one subscribes to nonlocal physicalism he will agree with J.S. Bell (Foundations of Physics, 12, 989, 1982):

"Although Ψ is a real field it does not show up immediately in the results of a ‘single measurement’, but only in the statistics of many such results. It is the de Broglie-Bohm variable X that shows up immediately each time."

The danger of too rigidly adhering to Stenger's reductionist viewpoint was articulated by Bernard d'Espagnat ('In Search of Reality'):

"If scientism were correct, or more precisely, if the view of the world it proposes so forcefully, that of a world ultimately consisting of myriads of small localized objects merely endowed with quasi-local properties were correct, then such an evolution of our mentality would admittedly be excellent. It is always good for man to know the truth! But on the other hand, if the ultimate vision of the world which scientism proposes is false, if its conceptual bases are mistaken, then this development is – on the contrary –quite unfortunate."


Friday, April 29, 2016

Max Tegmark's Multiverse Types - And "Alternate" Universes (Tegmark's Type 3)


One idealized model of a Type II Multiverse with two localization angles defined.

Much of the discussion concerning the Multiverse has been muddied because of lack of clarity about what it means. Let us concede that for many years humans conceived of only one manifestation of the whole or 'universe' (the Milky Way galaxy itself was at one time conflated with 'universe') and it has taken the push of modern physics to acknowledge this grand assembly may not be the final statement of physical reality. Thus, by way of several theories - which we will get into - one comes into the conceptual purview of the Multiverse - composed of perhaps an infinity of universes with differing properties, cosmological constants etc.

One person who has tried to provide categories and clarity is Max Tegmark of M.I.T. He has suggested a fourfold classification scheme, but only three of them are relatively comprehensible to most ordinary folk without advanced physics backgrounds. (It is those I will deal with in this post.)

Type 1:

The simplest or Type 1 Multiverse is essentially an infinite extension of the acknowledged universe. Our most advanced telescopes like the Hubble can only see to a certain limit given the finite speed of light (c = 300,000 km/ sec)  which means our vision is confined to a limited radius. This is called the "Hubble radius" and is generally equal to the age of the cosmos translated into distance or 13.8 billion light years.

Thus, if light takes 13.8 billion years to travel to the maximum distance we can actually see (assuming space is static) that turns out to be 13.8 billion LIGHT YEARS. (One light year being the distance light travels in one year.)

In fact, this is a simplification because space or rather space-time isn't static.  Because of its expansion immediately following the Big Bang the actual radius of the cosmos is 42 billion light years or some 28.2 billion LY greater than the telescopic limit.   Assuming physical reality, i.e. the universe, exists beyond the actual Hubble radius then all permitted arrangements may exist - and in infinite numbers.

In effect one finds separate "cosmi", cut off from each other by their own individual Hubble radii. These would be like separate compartments or "bubbles" cut off from each other. The key point is that the laws of physics in one "bubble" are the same as in all the others because in the end the universe - despite the disparate "bubbles" - is one entity.


Type 2:

While the Type 1 version is based on the cosmological principle, so the laws of physics are the same in all the separate "bubbles" with their own Hubble radii, in this Type 2 case they can vary from one universe to another. The value of G, the Newtonian gravitational constant, may be G as we know it (6.7 x 10⁻¹¹ N·m²/kg²) in our universe, but 1.1G in another in the Type 2 Multiverse, and 0.98G in another. The result would be separate universes remarkably different from each other.

As I noted in previous blog posts, the genesis of the Type 2 Multiverse is distinct from the Type 3 which is really Hugh Everett's "Many worlds" quantum-based theory (which we will get to.) In the Type 2 all the universes in the Multiverse were spawned as a result of cosmic inflation immediately following the Big Bang.

Regarding inflation, most current standard theories propose inflation starting at about 10⁻³⁵ s and doubling over a period of anywhere from 10⁻⁴³ to 10⁻³⁵ s after the initial inception. Estimates are that at least 85 such 'doublings' would be required to arrive at the phase where entropy rather than field-resident energy dominated. The initial size (radius) of our universe would have been likely less than a proton's - maybe 1 fermi (fm) or 10⁻¹⁵ m - by the time the doubling process began. By the time it ended (after 90 'doublings') it would have been around 1.25 x 10¹² m. This is roughly eight times the distance of Earth from the Sun.

In effect, the role of inflation is to give cosmic expansion a huge head start or boost, without which our universe would be much smaller. If such an "inflationary field" could spawn our universe it could spawn many others (up to an infinite number).  Further, there is no reason why these offshoot universes from inflation should have the same laws of physics as any of the others.

This is a delightful conclusion since it disposes at once of the "specialness" of the cosmos that too many invoke as a cosmological argument to demand a deity or "Creator".  However, if universes are commonplace, and the physical laws that govern each vary, then the need for a "human-friendly" creator vanishes. It is no longer a fluke that one universe has just the right conditions for life if gazillions of them don't.

Type 2 universes, then, aptly deal with the annoying fine tuning problem that religionists endlessly invoke.


Type 3

The Type 3 "Multiverse" is in reality a product of Hugh Everett's Many worlds quantum interpretation, which was devised to counter the Copenhagen Interpretation's strange ramifications. In the Copenhagen Interpretation, any observer's consciousness is theoretically capable of "collapsing" the wave function, yielding one and only one eigenstate or final observation, i.e. observed state. Everett, to his credit, argued that rather than dealing with one wave function for whatever observed entity (particle, universe, cat in a box - subject to release of cyanide if a cesium atom decays triggering the release device) one might let ALL possible outcomes occur.

In this case, the universe is constantly undergoing a kind of multiple "fission" of reality into umpteen daughter universes where different events unfold from the one we're in. To fix ideas, in one of them Lee Oswald is a published Professor of History at Tulane, not an accused assassin. In the same or other universe, LBJ's plan to have JFK killed is exposed before the executive action and the SOB is tried for treason. In another the Challenger disaster never occurs, it goes off perfectly because NASA took the time to solve the O-ring problem. In yet another, there is no Indonesian tsunami that killed 200,000 in December, 2004 - but there is a massive ocean asteroid strike that kills just as many in SE Asia. You see what I mean?

Here's the catch: All those other universes are inaccessible to those of us in this universe. Hence, for THAT particular universe any given observer picked at random will see only ONE outcome - his own, i.e. from his history- events record. If he observes the outcome of LBJ being hung or shot for treason, he will not observe the outcome in ours where Lee Oswald was framed and LBJ got away with the crime of the century. To put it in the context of Everett's Many Worlds interpretation, the wave function will appear to have collapsed, say  for LBJ's treason and punishment- but that sole wave function collapse (to the exclusion of all other possibilities) is not really what happened. In other ("alternate")  universes other outcomes would have occurred - such as in ours where Oswald is found guilty in absentia and Johnson's Warren Commission fiction and fraud is promoted by a feckless political and media community.

Let's go back to why Everett's "Many Worlds" interpretation was devised specifically as an alternative to the Copenhagen Interpretation of QM - in order to physically make sense of the principle of superposition in QM. According to this principle, before an observation is actually made to establish a determinate state, the object or particle exists in a multitude of different (quantal) states simultaneously.

As to the more exact definition of a "state", this was first given by Paul Dirac in his monograph The Principles of Quantum Mechanics ('The Principle of Superposition', p. 11):

"A state of a system may be defined as a state of undisturbed motion that is restricted by as many conditions or data as are theoretically possible without mutual interference or contradiction"

This definition itself needs some clarification. By "undisturbed motion" Dirac meant the state is pure and hence no observations are being made upon it such that the state experiences interference effects to displace or collapse it. In the Copenhagen Interpretation, "disturbance" of mutually defined variables (say x, p, or position and momentum) occurs when [x, p] = iħ = ih/2π (where h is the Planck constant), and this disturbance leads inexorably to wave function collapse. Thus, an undisturbed state must yield [x, p] = 0. Another way of putting this is that in the latter case the two variables commute, and in the former they do not.

Again, in Copenhagen, the key to getting from [x,p] = ih/2π to [x,p] = 0 is the presence of an observer capable of collapsing the relevant wave function for the system observed. But the problem is that peculiar considerations enter. For example, a major irritation is the incessant Copenhagenites' debate over the level of consciousness required for a given observer to collapse a wave function. Perhaps this was best epitomized in Richard Schlegel's Superposition and Interaction: Coherence in Physics (1980, University of Chicago Press, p. 178), referring to the opinion expressed once by Prof. Eugene Wigner (at a QM conference) that "the consciousness of a dog would effect the projection into a single state whereas that of an amoeba would not."

So, in this sense, "Many worlds" provided welcome relief from metaphysical conjectures.

What bothered Everett and others was the Copenhagenites' claim that all such differing states existed simultaneously in the same observational domain for a given observer. Then,  on observation, all but one of the states magically disappeared (referred to as "wave function collapse") when the actual observation was recorded.

To Everett it all seemed too contrived and artificial. What if...he asked himself...instead of explaining the superposition of states this way, one instead thought of all quantum states (prior to observation) as co-existing in one phase space representation . Then one could think of each phase attached to another "world", a quantum world. For the total duration (T) of time before the observation was made, all these "worlds" existed at the same time, and then - on observation - the "choice" for one became reality. However, in other quantum worlds those other choices might materialize, as per the examples given above.

In a way, then, Everett's "Many worlds" interpretation is actually a theory of alternate universes, at least at the level of potential quantum states. It is more compellingly described this way than as a third type of Multiverse, in my opinion - especially compared with the Type 2, which comprises actual physical universes incepted from the selfsame primordial vacuum state (via inflation) as our own universe. Thus, an actual primordial vacuum - not a human observer or consciousness making observational choices - is the source of the real set of universes. Thus, all putative parallel universes plausibly emerged from the primordial vacuum the way ours did, e.g. from the Big Bang.

In the graphic, I show an "idealized Type 2 multiverse" with an infinite set of members, each specified under a coordinate φ, and separated by uniform angular measure Θ from two adjacent universes. The whole represents a 5-dimensional manifold in a toroidal topology. The topological space of the hypertoroid cosmos can therefore be represented by the global state space, a product of absolute hypertorus coordinate time (Θ) and 'all-space' (φ): G_L = Θ × φ

I repeat this is an idealized model which assumes that N-cosmi were incepted at equal intervals of time - as manifested by the equal spacing in Θ.

In principle, we don't know a priori how "close" (e.g. in complex time) another universe may be to our own. When one uses the assumption of "equal time intervals" between inceptions in our idealized multiverse, one isn't stating what those times are, and so they could be minuscule - and the smallest time unit imaginable is the unit tau, τ. (About 10⁻⁴³ s, and note Θ = f(τ).)

If we specify an exact parallel time displacement we might be able to show how one universe can be "mapped" topologically onto an adjacent one. As an example, let two parallel universes be distinguished by a 1-τ difference in fundamental time parameter, viz. [1 + 2τ] and [1 + 3τ], then we would require for connection, a mapping such that:

(Universe 'A'): f:X -> X = f(Θ,φ) = (Θ, 2φ)

(Universe 'B'): f:X -> X = f(Θ,φ) = (Θ, 3φ)

which means the absolute coordinate φ is mapped onto itself 2 times for [Universe A] and mapped onto itself 3 times for [Universe B]. Clearly, there’ll be coincidences for which: f(Θ,2φ) = f(Θ,3φ) wherein the two universes will 'interweave' a number of times.

For example, such interweaving will occur when φ = π/2 in [A] and φ = π/3 in [B]. The total set or system of multiple points obtained in this way is called a Synchronous temporal matrix. The distinguishing feature of this matrix is that once a single point is encountered, it is probable that others will as well. If one hyperspace transformation can occur linking adjacent universes, A and B, then conceivably more such transformations can occur, linking A and C, D and E etc.

What if both absolute toroidal coordinates (Θ,φ) map into themselves the same number of times? Say, something like:

f:X -> X = f(Θ, φ) = (2Θ, 2φ): Universe A

f:X -> X = f(Θ, φ) = (3Θ, 3φ): Universe B

For example, given the previous conditions for coordinate φ, now let 2Θ = 3Θ for discrete values of Θ (e.g. 2π). For all multiples of 2π, the same toroidal cosmos will be experienced - if the absolute time coordinates are equal (e.g. π/2 = φ in A, and π/3 = φ in B) then we will have: Universe A = Universe B.

This isn't necessarily poppycock. Stephen Feeney of University College London has surmised that two adjacent universes in a Type 2 Multiverse could conceivably 'butt up' against each other and leave "imprints" in each other's space. He reasons that these imprints would likely show up in the cosmic microwave background radiation, generating 'splotches' in the radiation field or differing energy density signatures. As yet no such signal has been found, but in truth we may not yet possess the instruments needed to identify such signatures.

Another experiment proposed to test one's conviction in Everett type worlds is best called "quantum Russian roulette" and is only to be undertaken by the most cocksure quantum physicists. (Say like that lot that makes pronouncements on the JFK assassination simply because they have a QM background.)  The experiment is analogous to the one for Schrodinger's cat - with the experimenter inside a sealed off room connected to a cyanide injector with release of a gas capsule  governed by the decay of a radioactive isotope.

In some futures the guy will be killed, in others he will remain alive. But since - from his point of view - he is only aware of being alive, he will only perceive that he survives. Hence, he does survive.

So far there have been no takers to carry this one out.

Tuesday, September 2, 2014

An Introduction to Quantum Mechanics (3)


(Continued from previous section)

4. The Wave-Particle Duality & Heisenberg Microscope


     We now look in somewhat more detail at wave-particle duality as it arises in quantum mechanics.  In the particle interpretation, electrons  fired from a device such as an electron gun would not all follow the same path since the trajectory of an electron – unlike a missile- can’t be predicted from its initial state. We consider here the case of electron diffraction, whereby (based on Fig. 7) electrons are emitted from an electron gun and pass through a slit toward a detector or photographic plate onto which a diffraction pattern appears. This pattern will also coincide with an intensity distribution such as shown in Fig. 10.

In effect, the intensity distribution basically describes the probability for an individual particle (electron) to strike each of several areas designated on the photographic film. This discloses a fundamental indeterminacy that has no counterpart in Newtonian mechanics. Now, consider an electron striking at some angle θ, such as indicated:

                                                                             

Fig. 10: Showing electron diffraction and intensity pattern on screen.

We have, from the quantities shown:

p_y / p_x = tan θ   or   p_y ≈ p_x θ   (in the limit of small θ)

Therefore, the y-component of momentum can be as large as:

p_y = p_x (λ/a)

Where a denotes the slit width. The narrower the dimension of a, the broader the diffraction pattern, and the greater Δp_y. From Louis de Broglie’s matter wave hypothesis (already introduced into the Bohr atom, as we saw, cf. Fig. 7): λ_D = h/p_x

Therefore:

p_y = p_x (h/(p_x a)) = h/a   or:   p_y a = h

But ‘a’ represents the uncertainty in the electron’s vertical position (Δy) as it passes through the slit. Narrowing the slit width a reduces Δy but broadens Δp_y, and vice versa. Thus we get:

p_y a = Δp_y Δy ≈ h

Which is one form of the Heisenberg Uncertainty Principle, which states that the momentum and position of a quantum particle cannot simultaneously be known to arbitrary precision. One corollary is that to detect a particle any given detector must interact with it, thereby altering the motion of the particle.
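
To put rough numbers on Δp_y Δy ≈ h, here is a minimal sketch; the 100 eV beam energy and the 1 micron slit width are arbitrary illustrative choices, not values from the text:

```python
import math

h = 6.626e-34            # Planck constant, J*s
m_e = 9.109e-31          # electron mass, kg
E = 100 * 1.602e-19      # kinetic energy of the beam: 100 eV (arbitrary choice), in joules

p_x = math.sqrt(2 * m_e * E)        # forward momentum of the electron
wavelength = h / p_x                # de Broglie wavelength, ~1.2e-10 m

a = 1e-6                            # slit width, i.e. Delta_y = 1 micron (arbitrary choice)
delta_p_y = h / a                   # transverse spread from Delta_p_y * Delta_y ~ h

print(f"lambda    = {wavelength:.3e} m")
print(f"Delta_p_y = {delta_p_y:.3e} kg m/s")
print(f"spread angle ~ {delta_p_y / p_x:.2e} rad")   # = lambda / a, the diffraction angle
```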

This view is no longer taken in any literal way because we understand that quantum measurements are statistical in nature and hence a particular measurement is the result of a vast statistical assembly. Paul Dirac, in his book The Principles of Quantum Mechanics, defined the “principle of superposition” thusly[1]:

“A state of a system may be defined as a state of undisturbed motion that is restricted by as many conditions or data as are theoretically possible without mutual interference or contradiction"

     But let’s examine this in more detail. By “undisturbed motion” Dirac meant the state is pure and hence no extraneous observations are being made such that the state experiences interference effects to displace or disturb it. In the Copenhagen Interpretation, “disturbance” of mutually defined variables, say x, p or position and momentum, occurs only if:

[x, p] = iħ = ih/2π


If it were the case that [x, p] = 0, one would say the variables “commute” and hence there is no interference. If the condition doesn’t hold, then interference exists. Hence Dirac’s setting of an upper limit in the last portion of his definition, specifying as many conditions as theoretically possible “without mutual interference or contradiction.” This is the undisturbed state. We have a statistical perspective!



Fig. 11: Sketch of Heisenberg Microscope and key parameters.

    It is important to see from the preceding, how the Heisenberg Uncertainty Principle arises not just from an ad hoc assumption, but from the limits (or “tolerance thresholds”) of explicit quantities (e.g. p, x), when considered in the quantum limit. Hence, the model of the Heisenberg “microscope” provides a useful (although not practical, since it can’t actually be constructed) means of deriving the statistical principle of superposition based on an observational ansatz.

    Consider a measurement made to determine the instantaneous position of an electron by means of a microscope. In such a measurement the electron must be illuminated, because it is actually the light quanta (photon) scattered by the electron that the observer sees. The resolving power of the microscope determines the ultimate accuracy with which the electron can be located.  This resolving power is known to be approximately:

λ / (2 sin θ)

Where λ is the wavelength of the scattered light and θ is the half-angle subtended by the objective lens of the microscope. Then:

Δx = λ / (2 sin θ)

In order to be collected by the lens, the photon must be scattered through any angle in the range from -θ to θ. In effect, the electron’s momentum values range from:

+ h sin θ / λ   to   − h sin θ / λ

Then the uncertainty in the momentum is given by:

Δp_x = [h sin θ/λ − (− h sin θ/λ)] = 2h sin θ / λ

Then the Heisenberg Uncertainty Principle product is:

Δp_x Δx = (2h sin θ / λ)(λ / 2 sin θ) = h
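
As a quick symbolic cross-check (a sketch using SymPy, not part of the original derivation), the λ and sin θ factors cancel, leaving exactly h:

```python
import sympy as sp

h, lam, theta = sp.symbols('h lambda theta', positive=True)

delta_x  = lam / (2 * sp.sin(theta))      # resolving power of the objective
delta_px = 2 * h * sp.sin(theta) / lam    # momentum spread of the scattered photon

print(sp.simplify(delta_x * delta_px))    # h: the product is independent of lambda and theta
```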


5. Probability Density and Expectation Values

Earlier we saw:


P = |ψ(1s) ψ(1s)*|

Which is the probability density and a quantity we can actually measure, e.g. for the 1s state of hydrogen. Then this needs to be generalized to apply to more than one case.

Since the electron locations can’t be computed from Newtonian mechanics, but only from a probability density analogous to the one we saw above, we can generalize and write:

P_ab = ∫_a^b |ψ(x)|² dx

Where x is the coordinate under consideration and the system is one-dimensional, with the probability assessed from a to b. Note that we define the normalization condition as:

∫_a^b |ψ|² dx = 1

Normalization is simply a condition stating that the particle exists at some point at all times. Thus if we had:


∫_a^b |ψ|² dx = 0

The particle would then not exist anywhere. The normalization condition thus allows us to speak of the probability of observing the particle even though we cannot specify its position; the integral P_ab gives the probability of finding the particle in the range a < x < b, say in one dimension.

The wave function ψ(x) satisfies the Schrödinger equation. For the simple one-dimensional case we can write:

d²ψ/dx² + F(x) ψ = 0

Though the wave function ψ(x) itself is not a measurable quantity, other measurable quantities such as the energy E and momentum of the particle can be derived from it. Also, if the wave function is known it is possible to compute the average position of the particle, known as the expectation value:

⟨x⟩ = ∫_-∞^+∞ x |ψ(x)|² dx

This expression implies the particle is in a definite state so that the probability density is time –independent.

Example Problem: Consider the 1D quantum system shown, and a particle confined therein:

With maximum dimension L in direction +x. Find: (a) the probability P_ab that the particle is between x = 0 and x = L, (b) the expectation value ⟨x⟩, and (c) show the energy of the particle is quantized according to:

E_n = (h²/8mL²) n²

Let the wave function be:

ψ(x) = √(2/L) sin(kx)


Solution:

We rewrite the wave function as:

ψ(x) = √(2/L) sin(πx/L),   where k = π/L

Then:

P_ab = ∫_0^L |ψ(x)|² dx = ∫_0^L (2/L) sin²(πx/L) dx

= (2/L) ∫_0^L ½ [1 − cos(2πx/L)] dx

(Let θ = πx/L and use: sin²θ = ½(1 − cos 2θ))

P_ab = (1/L) [ ∫_0^L dx − ∫_0^L cos(2πx/L) dx ]

P_ab = [ x/L − (1/2π) sin(2πx/L) ] evaluated from 0 to L = 1 − (1/2π) sin(2π)

But sin(2π) = 0, so P_ab = 1

The expectation value is:

⟨x⟩ = ∫_-∞^+∞ x |ψ(x)|² dx

= (2/L) ∫_0^L x sin²(πx/L) dx

= (2/L) [ x²/4 − x sin(2πx/L)/(4π/L) − cos(2πx/L)/(8π²/L²) ] evaluated from 0 to L

= (2/L)(L²/4) = L/2
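
Both results are easy to confirm numerically; a minimal sketch (SciPy, with an arbitrary box length L = 1):

```python
import math
from scipy.integrate import quad

L = 1.0                                                        # box length (arbitrary units)
psi = lambda x: math.sqrt(2 / L) * math.sin(math.pi * x / L)   # ground-state wave function

P_ab, _  = quad(lambda x: psi(x)**2, 0, L)       # probability of finding the particle in [0, L]
x_avg, _ = quad(lambda x: x * psi(x)**2, 0, L)   # expectation value <x>

print(round(P_ab, 6))    # 1.0  (normalization)
print(round(x_avg, 6))   # 0.5  = L/2
```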

Finding the energy:  We have the Schrodinger equation:


d²ψ/dx² + K²ψ = 0

where K = √(2mE) / ħ

If we examine the sketch below:


We see plots of the wave function ψ(x) vs. position x (far left), and of the probability density (middle) and the energy levels. Since we have represented the wave function by a sinusoidal function then it follows that the allowed wavelengths are those for which the length L is equal to an integral number of half wavelengths, or:

L = nλ/2

These allowed states are called stationary states and represent standing waves (analogous to the ones seen earlier for the Bohr atom). Thus, the wavelengths of the particle are restricted by the condition:

λ = 2L/n

Then the magnitude of the momentum p is also restricted to specific values (e.g. using p = h/λ)[2] such that:

p = h/λ = h/(2L/n) = nh/2L

The energy associated with the particle is then:

E = ½ mv² = p²/2m = (nh/2L)² / 2m

E = (h²/8mL²) n²


(n= 1, 2, 3 etc.)

Thus the energy is quantized with the energy of the lowest energy state corresponding to n =1 so:

E₁ = h²/8mL²

This least energy that the particle can have is called the “zero point energy” and means the particle can never be at rest.
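
To attach a number to this zero point energy, here is a minimal sketch; the choice of an electron in a 1 Å box is an arbitrary, atom-sized illustration rather than anything from the text:

```python
h   = 6.626e-34       # Planck constant, J*s
m_e = 9.109e-31       # electron mass, kg
L   = 1.0e-10         # box width of 1 angstrom (arbitrary, atom-sized choice)
eV  = 1.602e-19       # joules per electron-volt

def E_n(n):
    """Energy of level n for a particle in a 1D box: E_n = (h^2 / 8 m L^2) n^2."""
    return (h**2 / (8 * m_e * L**2)) * n**2

for n in (1, 2, 3):
    print(n, round(E_n(n) / eV, 1), "eV")   # ~37.6, ~150.4, ~338.5 eV
```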

Note that the above energy result can also be obtained by solving the Schrödinger differential equation directly.

The probability density can be extended to 3 dimensions by writing:


P = ∫_-∞^+∞ |ψ|² dV

The quantized energy will  then be (for a 3D box, for which dV = dx dy dz):

E = (h²/8mL²)(n_x² + n_y² + n_z²)
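
For a cubical box this formula makes the level degeneracies easy to enumerate; a small illustrative sketch:

```python
from collections import Counter
from itertools import product

# Energy in units of h^2/(8 m L^2): E = n_x^2 + n_y^2 + n_z^2 for a cubical box
levels = Counter(nx**2 + ny**2 + nz**2
                 for nx, ny, nz in product(range(1, 5), repeat=3))

for E, g in sorted(levels.items())[:6]:
    print(f"E = {E} (in units of h^2/8mL^2), degeneracy {g}")
# E = 3 -> 1, E = 6 -> 3, E = 9 -> 3, E = 11 -> 3, E = 12 -> 1, E = 14 -> 6
```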


Problems:

1) For a 1D box, let one electron inside have the wave function:

ψ(x) = √(2/L) sin(2πx/L)

Find the probability of locating the electron between x = 0 and x = L/4.

2)Use the uncertainty principle to estimate the uncertainty in momentum for a particle in a 1D box. Estimate the ground state energy using this means and compare it to the actual ground state energy.

3) The wave function for a particle confined to moving in a 1D box is given by:

ψ(x) = A sin(nπx/L)

Use the normalization condition on ψ(x) to show the constant A is given by: A = √(2/L)

4) It is known from quantum mechanics that a particle in a one dimensional potential well (such as shown in the diagram) can exist in a number of energy states. Imagine an electron confined between the boundaries x and x + Δx, where Δx is 0.5 Angstroms.

Approximately, what is the uncertainty in the x-component of  the  momentum of the electron?

5)(a) Consider a free particle confined between two impenetrable walls at x and x + L.  What is the probability according to classical physics that the particle will be found between x and x + L/3 if no other information is given?

b) What is the probability according to quantum mechanics that the particle in its lowest energy state will be found between x and x + L/3?


c) What is the probability according to quantum mechanics that the particle in the second lowest energy state will be found between x and x + L/3?



[1] Dirac, P.A.M.: 1941, The Principles of Quantum Mechanics, Oxford University Press, p. 11.
[2] Recall from Planck’s law: E = hc/λ and p = √(2mE).