## Tuesday, July 17, 2012

### QBism and Why Quantum Physicists Hate It

Consider standard quantum mechanics (QM), for which the Copenhagen Interpretation applies, where the projection operators are in one-to-one correspondence with the closed subspaces of the Hilbert space H. If P is a projection, its range is closed, and any closed subspace is the range of a unique projection. If u is any unit vector, then ⟨P⟩ = ||Pu||² is the expected value of the corresponding observable in the state represented by u. Since the observable itself is {0,1}- (binary) valued, we can interpret this expected value as the probability that a measurement of the observable will produce the "affirmative" answer 1. In particular, the affirmative answer will have probability 1 if and only if Pu = u; that is, u lies in the range of P.
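The rule ⟨P⟩ = ||Pu||² is easy to check numerically. Here is a minimal Python sketch (using numpy; the particular projection and states are my own illustrative choices, not anything from the formalism above):

```python
import numpy as np

# Projection onto the subspace spanned by the first basis vector of C^2
# (an illustrative choice; any projection matrix would do).
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

def expectation(P, u):
    """Expected value of the projection P in the unit state u: ||Pu||^2."""
    return np.linalg.norm(P @ u) ** 2

u_in_range = np.array([1.0, 0.0])            # Pu = u: u lies in the range of P
u_outside  = np.array([0.0, 1.0])            # Pu = 0: u orthogonal to the range
u_mixed    = np.array([1.0, 1.0]) / np.sqrt(2)

print(expectation(P, u_in_range))  # 1.0 -- the "affirmative" answer is certain
print(expectation(P, u_outside))   # 0.0
print(expectation(P, u_mixed))     # 0.5
```

As the first two cases show, the probability is 1 exactly when Pu = u and 0 exactly when Pu = 0, with intermediate states giving genuinely probabilistic answers.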

Now in the QBist, or Quantum Bayesian, view this is not so: quantum state assignments are relative to the agent who makes them. So if I, as a quantum Bayesian, assign a state u for which Pu = 0, then the expected value of the corresponding observable is ⟨P⟩ = ||Pu||² = 0, and for me the "affirmative" answer has probability 0.
Thus as N. David Mermin recently observed (Physics Today, July, p. 8):

“QBism eliminates the notorious measurement problem ….an agent unproblematically changes her probability assignments discontinuously whenever new experiences lead her to change her beliefs. It is just the same for her quantum state assignments. The change in either case is not in the physical system the agent is considering. Rather, it is in the quantum state the agent chooses by which to encapsulate her expectations.”

In other words, a large component of subjectivity enters the picture. But this is initiated, as Mermin points out, by the very choice of Bayesian probabilities ab initio. To fix ideas, the typical quantum physicist holds a confirmed “frequentist” concept of probability: a probability is determined by the frequency with which an event E appears in some ensemble K of events, all of which have been identically prepared in the same system.

For example, consider a fair coin, equally weighted and balanced, which yields for ten separate tosses:

H H T H T H H T T T

Then out of this ensemble of ten fair tosses, T appears as many times as H, namely 5 times each, so

P = E/K = 5/10 = 0.5

Thus, the objective observer will assign a probability of P = 50% to any single such coin toss, and this will reflect the observer’s belief that, in the long run, the event will occur half the time.
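The frequentist picture can be sketched in a few lines of Python (the seed and ensemble sizes are illustrative choices of mine):

```python
import random

random.seed(42)

# Frequentist view: probability is the long-run frequency of an event
# in an ensemble of identically prepared trials of a fair coin.
def frequency_of_heads(n_tosses):
    heads = sum(1 for _ in range(n_tosses) if random.random() < 0.5)
    return heads / n_tosses

print(frequency_of_heads(10))       # small ensemble: the estimate is noisy
print(frequency_of_heads(100_000))  # large ensemble: the estimate approaches 0.5
```

Only the large ensemble pins the frequency near 0.5; a run of ten tosses, like the H/T sequence above, can easily wander away from it.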

On the other hand, for the Bayesian the probability is not inherent in a similar system of events but is assigned by different agents who may hold different beliefs based on presupposition. Say Agent X has the presupposition that a particular series of coin tosses is weighted toward tails appearing, for whatever reason; then his expectation will be, perhaps, P(T) = 0.6.
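Agent X's personal probability can be sketched as a Bayesian belief update. A minimal example, assuming his P(T) = 0.6 presupposition is encoded as a Beta(6, 4) prior (my illustrative choice) and updated by conjugate Beta-Binomial arithmetic:

```python
# Agent X's presupposition toward tails is encoded as a Beta(6, 4) prior
# over P(tails), whose mean is 6 / (6 + 4) = 0.6. Observed tosses update
# the belief by standard conjugate Beta-Binomial arithmetic.
def update_belief(alpha, beta, n_tails, n_heads):
    """Return the posterior Beta parameters and the posterior mean of P(tails)."""
    alpha_post = alpha + n_tails
    beta_post = beta + n_heads
    mean = alpha_post / (alpha_post + beta_post)
    return alpha_post, beta_post, mean

# Before any tosses, the belief is just the prior mean: P(T) = 0.6
print(update_belief(6, 4, 0, 0)[2])    # 0.6

# After observing 50 tails and 50 heads, the belief drifts toward 0.5
print(update_belief(6, 4, 50, 50)[2])  # 56/110, about 0.509
```

The point is that the 0.6 is the agent's, not the coin's: a different agent with a different prior would start, and update, elsewhere.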

As Mermin points out (ibid.):

“This personalist Bayesian view of probability is widely held, though not by many physicists”

Of course, this implies a definite opposition between physics and other fields which employ Bayesian statistics.

Now, in formal quantum mechanics, the arrival at states and probabilities is contingent on information and knowledge. For example, knowledge may be obtained using an Aspect-type device (as depicted below) which acts to disperse the individual atomic "magnets" (net-spin atoms) and send them in pairs (always in pairs) to detectors D1 and D2 simultaneously. The question is: what spin is detected by each detector at the instant of observation?

D1 (+½ ) <-------------[D]------------->(- ½ )D2

The knowledge or information arrived at consists of correlations or anti-correlations for the spin of an atom, say helium, captured at detectors D1 or D2.

If the same spin value (say +½) appears at both detectors simultaneously we have correlation; otherwise, anti-correlation.

Prior to the observation (actual detection), neither spin value can be known according to the Heisenberg Uncertainty Principle of Quantum Mechanics. That is, while the atomic magnets are in transit - from device to either detector - there is no definite information concerning which spin is going where. The reason has to do with what is called the superposition of states. To fix ideas, consider the whole atomic magnet in the device, before being ejected. If it’s a helium atom, then there’ll be one up spin and one down spin and we can write for simplicity:

U = Σ_i {(up)_i + (down)_i}

The obscurantist claim that the outcome "depends on perspective" is rubbish, since it is the observation that determines the outcome, and there is only one.

In the orthodox (and most conservative) interpretation of quantum theory, there can be no separation of observed (e.g. spin) state until an observation or measurement is made. Until that instant (of detection) the states are in a superposition, as described above. There’s nothing mysterious or strange about this as it follows entirely from the mathematics. More importantly, the fact of superposition imposes on all quantum phenomena an inescapable ‘black box’. In other words, no information other than statistical can be extracted before observation.
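The 'black box' point — that only statistical information is available before detection — can be sketched with a toy simulation. Assume, purely for illustration, that the pairs are of the anti-correlated kind described above:

```python
import random

random.seed(7)

# Toy model of the paired-spin setup: before detection the pair is in a
# superposition; on measurement one member registers +1/2 and the other -1/2.
# Which detector receives which value is statistically 50/50.
def measure_pair():
    spin_d1 = random.choice([+0.5, -0.5])
    spin_d2 = -spin_d1          # the pair is perfectly anti-correlated
    return spin_d1, spin_d2

trials = [measure_pair() for _ in range(10_000)]

# Every individual pair is anti-correlated...
assert all(s1 == -s2 for s1, s2 in trials)

# ...yet each detector alone sees only 50/50 statistics: the 'black box'.
frac_up_d1 = sum(1 for s1, _ in trials if s1 == +0.5) / len(trials)
print(frac_up_d1)   # close to 0.5
```

Nothing in the D1 record alone distinguishes it from fair coin tosses; the definite anti-correlation only shows up once both detection records exist.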

The late physicist Heinz Pagels, for example, referred (in his excellent book, The Cosmic Code) to quantum measurement theory as an ‘information theory’ and noted the entire quantum world is embedded in what we observers can know about it. Obviously such knowledge is obtainable exclusively from observational or experimental results. Since only one apparatus is used, as I've shown, there are no "differing perspectives", only one, at the instant of observation.

However, in QBism or quantum Bayesianism this gets chucked: keep the setup for the experiment as shown and change observers for each sequence of, say, N trials. QBism asserts that each switch introduces different probability assignments, based on each observer’s beliefs about the system and how he makes state assignments.

Worse, in QBism there’s a bifurcation between the world (universe) in which an agent or observer lives and her experience of it. This disconcerting aspect arises, according to Mermin:

“From a failure to realize that like probabilities, like quantum states, like experience itself….the split belongs to the observer.”

Each has its own split. Worse, an uncontrolled complementarity of experience enters with respect to any other observers. If “Judy” has experiences and observations that are macroscopic (i.e. related to the large scale world of planets, stars etc.), “Roy” will experience microscopic reality (the world of atoms, electrons and Higgs bosons!) To quote Mermin:

“Each split is between an object (the world) and a subject (an agent’s irreducible awareness of his or her own experience).”

Mermin also makes the point, which I tend to agree with, that “ambiguities only arise if one fails to acknowledge that the splits reside not in the objective world but at the boundaries between that world and the experiences of the various agents who use quantum mechanics.”

Fair enough, but a couple of questions: 1) Does this mean that the ‘Many worlds’ interpretation is now kaput? And 2) does QBism indicate an objective difference between the microtubules of agents, with said entities evidently tied to consciousness and hence “irreducible awareness”?

At least QBism does take care of one riddle, first posed by Einstein: Can a quantum wavefunction be collapsed by the observations of a mouse?

QBism answers ‘NO!’ since – according to Mermin – “the mouse lacks the mental facility to use quantum mechanics to update its state assignments on the basis of its subsequent experience?”

Hmmmmmm…But what if it’s a genetically engineered mouse, with the DNA of a human like Einstein spliced into its own?

Igor Merling said...

Hello,

While I am an engineer, I haven't had physics training above that mandated for a graduate electronics engineer.

I am wondering about a modified Schrödinger's cat thought experiment and how classical vs. QBism models would look at its result. It is my personal feeling that QBism provides a slightly better answer, but I want your opinion.

In this new Schrödinger's cat experiment we have the same setup, but two observers, A and B, both physicists (to avoid the mouse argument), but unable to speak with one another.
The first observer, A, falls asleep just before the 1 hour time mark when they wanted to open the box.
The second observer, B, opens the box momentarily while A is asleep, and notices the state of the cat. He quickly closes the box, at which point A is awake again.

Now let's consider the experiment.

Observer A thinks that the box was never opened and thus the wave function has not collapsed.

Observer B knows that the function has collapsed since he has observed the cat and knows what happened to it.
So, has the wave function collapsed or not? Two knowledgeable physicists will claim differently.
Apparently the collapse happens in the brain of the observer and is not an actual property of the physical system being examined.
Which one of the two quantum models provides a better explanation to this "paradox"?

Copernicus said...

Hello, and thanks for your intriguing thought experiment. I would tend to go with the Copenhagen Interpretation: the collapse of the observed dual wave states (the superposition of 'live cat' + 'dead cat') yields one of those states, and it must then be the same for both A and B, irrespective of whether A was sleeping just before B observed it.

In other words, it was the observer (B) who initiated wave packet collapse, yielding the determined single state, and so this observed final state is the same for both observers.

My problem with the QBist approach is that it assigns or allows way too much power to human observers, or any observers. It almost borders on an anthropocentric perspective not remarkably different from the well known nonsense embedded in the 'Anthropic Principle', i.e. that the cosmos' constants are uniquely adjusted for the emergence of humans.

John Cowan said...

Of course it is the same for both observers! The cat is a classical system (which is another way of saying its d value is intractably large), so there is a legitimate hidden variable: it was either alive or dead all along. The difference is that observer A can only calculate its state, since he is in ignorance of it, whereas B knows by measurement what the state is. When you ask each observer, therefore, what he thinks the odds of the cat being alive are, observer A replies "1:1", whereas observer B says "1:0", or certainty. No surprises here — we know what we know, and what we don't know, we can only estimate.

There's a similar flavor of paradox, but no actual paradox, in this story: A man who has never seen or heard of the ocean, but is familiar with lakes, reaches the beach for the first time. You ask him to estimate the probability that the water will rise to cover the beach he is standing on. He naturally replies "Zero". Now by Bayesian principles, this prior is in the numerator, so the computed probability never rises above zero, no matter how much evidence is collected, whereas the true probability is one. How can this be?

The answer is that a prior of zero doesn't mean ignorance: it means certainty that something can't happen. If you ask me to assign a prior to the sun rising in the west tomorrow, I will say zero, not because I have never heard of such a thing, but because I have extremely strong reasons not to believe it. If I saw the sun rising over the Palisades (I live in NYC), you could never convince me that I wasn't drunk, or hallucinating, or the victim of a cosmic light show, or that New Jersey had suffered a nuclear bombardment. (If the earth tipped over or reversed direction, I wouldn't survive the event to be an observer.)
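That zero-prior arithmetic is easy to verify directly. A small Python sketch (the likelihoods 0.9 and 0.1 are my own illustrative numbers, not from the story):

```python
# Bayes' rule: posterior = prior * L(true) / [prior * L(true) + (1 - prior) * L(false)].
# A prior of exactly zero is never revived by evidence, which is the
# beach-goer's predicament.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator if denominator > 0 else 0.0

# A small but nonzero prior is quickly revised upward by repeated evidence
p = 0.01
for _ in range(5):
    p = posterior(p, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(p)            # climbs toward 1

# A prior of exactly zero stays zero no matter how much evidence arrives
p = 0.0
for _ in range(5):
    p = posterior(p, likelihood_if_true=0.9, likelihood_if_false=0.1)
print(p)            # 0.0 forever
```

That is the difference between ignorance (a diffuse but nonzero prior) and dogmatic certainty (a prior of zero): only the former can learn from the tide coming in.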