Sunday, May 4, 2014

QBism Remains a Quantum Stepchild

Two years ago, I exposed the problems inherent in what is called "Quantum Bayesianism," or QBism for short, noting that "a large component of subjectivity enters the picture, though this is initiated, as N. David Mermin points out (Physics Today, July 2012, p. 8), by the very choice of Bayesian probabilities ab initio."

I noted that, contrary to this, the typical quantum physicist holds a confirmed "frequentist" concept of probability (based on confidence that this concept is consistent with the law of large numbers and with classical probability as rigorously vetted by Richard von Mises, for example). Hence, a probability is determined by the frequency with which an event E appears in some ensemble K of events, all of which have been identically prepared in the same system.

For example, consider a system of ten coins, each equally weighted and balanced, which yields for ten separate tosses:

H H T H T H H T T T

Then out of this ensemble of ten fair tosses, T appears as many times as H, namely 5 times each, so

P = E/K = 5/10 = 0.5

Thus, the objective observer will assign a probability of P = 50% to a single event embodying any such coin toss, and this will reflect the observer's expectation that the event will occur half the time over many repetitions.
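To make this frequentist recipe concrete, here is a minimal Python sketch (the function name and the toss counts are my own, purely illustrative): simulate ever-larger ensembles of fair tosses and watch the relative frequency of heads settle toward 0.5.

# A sketch of the frequentist recipe above: P is the relative frequency of an
# event E within an ensemble K of identically prepared trials. Names assumed.
import random

def frequentist_estimate(n_tosses, p_heads=0.5):
    """Relative frequency of heads in an ensemble of n_tosses fair-coin trials."""
    heads = sum(random.random() < p_heads for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 100, 10_000, 1_000_000):
    print(f"K = {n:>9} tosses: P = E/K ~ {frequentist_estimate(n):.4f}")
# As the ensemble grows, the estimate settles near the frequentist value 0.5.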

One commenter pointed out how the Bayesian differs from this, noting:

" A man who has never seen or heard of the ocean, but is familiar with lakes, reaches the beach for the first time. You ask him to estimate the probability that the water will rise to cover the beach he is standing on. He naturally replies "Zero". Now by Bayesian principles, this prior is in the numerator, so the computed probability never rises above zero, no matter how much evidence is collected, whereas the true probability is one. How can this be?

The answer is that a prior of zero doesn't mean ignorance: it means certainty that something can't happen
."

But why should "one man" with negligible experience of water encroaching on a beach stand out? That is the complaint of most frequentists. We elect instead to assess probability from the vast constellation of experience assembled in aggregate by umpteen observers, who note that encroachment on the beach can and does occur, and at regular intervals, i.e., at high tide. However, this is not the case for the next example:

"If you ask me to assign a prior to the sun rising in the west tomorrow, I will say zero, not because I have never heard of such a thing, but because I have extremely strong reasons not to believe it."

Actually, there are no reasons to believe it, period. So it will always be zero, and the frequentist will concur: of the twenty quadrillion observations of solar rising and setting made by humans thus far, none has controverted it. But more importantly, none ever will, because the Earth rotates from west to east, not east to west, in terms of an angular momentum fixed at the point of its formation over 4 billion years ago.

It is interesting in light of the above to consider some other reactions to the QBists' claim that their Bayesian method is valid. Most of these reactions are to an article on Physics Forums:

(1)

"The discussion of QBism poses epistemological, and semantic problems for the reader. The subtitle- It's All ln Your Mind- is a tautology. Any theory or interpretation of observed physical phenomena is in the mind, a product of the imagination, or logical deduction, or some other mental process. Heisenberg ( The Physical Principles of the Quantum Theory), [n discussing the uncertainty principle, cautioned that human language permits the construction of sentences that have no content since they imply no experimentally observable consequences , even though they may conjure up a mental picture. He particularly cautioned against the use of the term, real  in relation to such statements"


(2)
" In spite of the tendency in Mr. von Burgers' article to overplay the virtues of QBism relative to other formulations, as an additional way to contemplate quantum mechanics, it has potential value, As Feynman ( The Character of Physical Law) stated, any good theoretical physicist knows six or seven theoretical representations for exactly the same physics. One or another of these may be the most advantageous way of contemplating how to extend the theory into new domains and discover new laws. Time will tell.

 

My view is exactly the same as Matt Leifer's:
http://mattleifer.info/2011/11/20/ca...statistically/

He divides interpretations into three types:

1. Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.


2. Wavefunctions are epistemic, but there is no deeper underlying reality.

3. Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).


 
Well, maybe I'm just too biased by my training as a physicist to make sense of the whole Bayesian interpretation of probabilities. In my opinion this has nothing to do with quantum theory but with any kind of probabilistic statement. It is also good to distinguish some simple categories of content of a physical theory."

(3)
A physical theory, if stated in a complete way like QT, is first some mathematical "game of our minds". There is a well-defined set of axioms or postulates giving a formal set of rules that establish how to calculate abstract things. In QT that's the "state of the system", given by a self-adjoint, positive semidefinite, trace-class operator on a (rigged) Hilbert space; an "algebra of observables", represented by self-adjoint operators; and a Hamiltonian among the observables that defines the dynamics of the system. Those are just the formal rules of the game. It's just a mathematical universe: you can make statements (prove theorems) and do calculations. I think this part is totally free of interpretational issues, because no connection to the "real world" (understood as reproducible objective observations) has been made yet.

Now comes the difficult part, namely this connection with the real world, i.e., with reproducible objective observations in nature. In my opinion, the only consistent interpretation is the Minimal Statistical Interpretation, which is basically defined by Born's rule, saying that for a given preparation of a system in a quantum state, represented by the statistical operator R̂, the probability (density) for measuring a complete set of compatible observables is given by:

 

P(a_1, …, a_n | R̂) = ⟨a_1, …, a_n | R̂ | a_1, …, a_n⟩

where |a_1, …, a_n⟩ is a (generalized) common eigenvector, normalized to 1 (or to a δ-distribution), of the self-adjoint operators representing the complete set of compatible observables.

Now the interpretation is shifted to the interpretation of probabilities. QT makes no other predictions about the outcome of measurements than these probabilities, and now we have to think about the meaning of probabilities. It's clear that probability theory is also given as an axiomatic set of rules (e.g., the Kolmogorov axioms), which is unproblematic since it's just a mathematical abstraction. The question now is how to interpret probabilities in the sense of physical experiments. Physics is about the testing of hypotheses in real-world experiments, and thus we must make this connection between probabilities and the outcomes of such real-world measurements. I don't see how else you can define this connection than by repeating the measurement on a sufficiently large ensemble of identically and independently prepared experimental setups. The larger the ensemble, the higher the statistical significance for proving or disproving the predicted probabilities for the outcome of measurements."
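This ensemble reading of Born's rule is easy to make concrete. A minimal Python sketch (the qubit state, the random seed, and the ensemble sizes are my own assumptions, purely for illustration): prepare many identical copies of a state, measure each once, and compare the outcome frequencies with the Born-rule probabilities.

# A sketch of the ensemble test of Born's rule for a single qubit. The state,
# seed, and ensemble sizes are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # |psi> = a|0> + b|1>
born = np.abs(psi) ** 2                        # Born-rule probabilities |a|^2, |b|^2

for n in (100, 10_000, 1_000_000):
    outcomes = rng.choice(2, size=n, p=born)   # one measurement per prepared copy
    freq = np.bincount(outcomes, minlength=2) / n
    print(f"ensemble of {n:>9}: frequencies = {freq}, Born rule = {born}")
# The larger the ensemble, the more closely the frequencies track the
# predicted probabilities - the "statistical significance" described above.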


(4)

"The Bayesian view, for me, is just a play with words, trying to give a physically meaningful interpretation of probability for a single event. In practice, however, you cannot prove anything about a probabilistic statement with only looking at a single event. If I predict a probability of 10% chance of rain tomorrow, and then the fact whether it rains or doesn't rain on the next day doesn't tell anything about the validity of my probabilistic prediction. The only thing one can say is that for many days with the weather conditions of today on average it will rain in 10% of all cases on the next day; no more no less. Whether it will rain or not on one specific date cannot be predicted by giving a probability.

So for the practice of physics, the Bayesian view of probabilities is simply pointless, because it doesn't tell anything about the outcome of real experiments."
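The rain example can be put into a few lines of code. A minimal sketch (the forecast value, seed, and number of days are assumed for illustration): a single day decides nothing either way, but the frequency over many similar days does test the forecast.

# A sketch of the rain example: forecast value, seed, and day counts assumed.
import random

random.seed(1)
P_RAIN = 0.10                      # the probabilistic forecast under test

# A single day neither confirms nor refutes the forecast:
print("rained on the one day observed:", random.random() < P_RAIN)

# Many days with today's conditions do test it:
n_days = 100_000
rainy = sum(random.random() < P_RAIN for _ in range(n_days))
print(f"rain frequency over {n_days} similar days: {rainy / n_days:.3f} (forecast: {P_RAIN})")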



(5)

" The law of large numbers is rigorously provable from the axioms pf probability.

What it says is this: if a trial (experiment or whatever) is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified outcome occurs approximately equals the probability of that outcome's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be.

This guarantees that a sufficiently large, but finite, number of trials exists (i.e., an ensemble) that for all practical purposes contains each outcome in proportion to its probability.

It seems pretty straightforward to me, but to each his own, I suppose. This is applied math after all. I remember when I was doing my degree I used to get upset at careless stuff like treating dx as a small first-order quantity - which of course it isn't - but things are often simpler doing that. Still, the criticisms I raised then are perfectly valid, and it's only in a rigorous treatment that they disappear - but things become a lot more difficult. If that's what appeals, be my guest - I like to think I have come to terms with such things these days. As one of my statistics professors said to me - and I think it was the final straw that cured me of this sort of stuff - he could show me books where all the questions I asked were fully answered, but I wouldn't read them. He then gave me a deep tome on the theory of statistical inference - and guess what - he was right.


 
The meaning of such things lies in a rigorous development of probability. That's how it is proved, and it rests on ideas like almost-sure convergence and convergence in probability."
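The convergence this commenter describes can be watched directly in a short simulation. A minimal sketch (the die, the target face, and the checkpoints are assumed for illustration): the proportion of trials landing on a fixed face approaches 1/6, with the error shrinking as repetitions accumulate.

# A sketch of the law of large numbers for a fair die; die, target face, and
# checkpoints are assumed for illustration.
import random

random.seed(42)
TARGET, P = 6, 1 / 6                  # watch the face "6", probability 1/6

count = 0
for trial in range(1, 1_000_001):
    count += (random.randint(1, 6) == TARGET)
    if trial in (100, 10_000, 1_000_000):
        freq = count / trial
        print(f"trials = {trial:>9}: proportion = {freq:.5f}, |error| = {abs(freq - P):.5f}")
# The proportion approaches 1/6 and the error tends to shrink as trials grow.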

(6)

"If you really want to investigate it without getting into some pretty hairy and advanced pure math then Fellers classic is a good place to start:
http://ruangbacafmipa.staff.ub.ac.id...iam-Feller.pdf

As you will see, there is a lot of material leading up to the proof of the law of large numbers, and in Volume 1 Feller does not give the proof of the very important strong law of large numbers - he only gives the proof of the weak law - you have to go to Volume 2 for that, and the level of math in that volume rises quite a bit."


(7)

"What is interpreted as Bayesian is only the wave function of a closed system - that means, that of the whole universe. There is also the wave function we work with in everyday quantum mechanics. But this is only an effective wave function, It is defined, as in dBB  (de Broglie - Bohm) theory, from the global wave function and the configuration of the environment, that means, mainly from the macroscopic measurement results of the devices used for the preparation of the particular quantum state.

Thus, because the configuration of the environment is ontic, the effective wave function is also defined by these ontic variables and so is essentially ontic. Therefore, there is no contradiction with the PBR theorem.

With this alternative in mind, I criticize QBism as heading in the wrong direction: away from realism, away from nontrivial hypotheses about more fundamental theories. But that, IMHO, is the most important reason why it matters for scientists to think about interpretations at all. For computations, the minimal interpretation is sufficient. But it will never serve as a guide to finding a more fundamental theory."
--------------


My own problem is that in standard quantum mechanics QM (for which the Copenhagen interpretation applies), projection operators are in one-to-one correspondence with the closed subspaces of the Hilbert space H. If P is a projection, its range is closed, and any closed subspace is the range of a unique projection. If u is any unit vector, then ⟨u, Pu⟩ = ‖Pu‖² is the expected value of the corresponding observable in the state represented by u. Since the observable itself is {0,1}- (binary) valued, we can interpret this expectation as the probability that a measurement of the observable will produce the "affirmative" answer 1. In particular, the affirmative answer will have probability 1 if and only if Pu = u, that is, if and only if u lies in the range of P. On the QBist or Quantum Bayesian view, this is not so: quantum state assignments are relative to the one who makes them, so the value ‖Pu‖² is not an objective property of the system but a particular agent's personal degree of belief, and a different agent, assigning a different state u, is free to assign a different probability, even 0, to the very same outcome.
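A minimal sketch of this projection calculus (the two-dimensional example and the vectors are my own, not the author's): compute ‖Pu‖² for a vector in the range of P, one orthogonal to it, and a superposition.

# A sketch of the projection calculus above; the 2-dimensional example is
# illustrative, not the author's.
import numpy as np

# Projection onto the first basis vector of a 2-dimensional Hilbert space.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

for u in (np.array([1.0, 0.0]),                 # u in the range of P -> probability 1
          np.array([0.0, 1.0]),                 # u orthogonal to it  -> probability 0
          np.array([1.0, 1.0]) / np.sqrt(2)):   # a superposition     -> probability 1/2
    prob = np.linalg.norm(P @ u) ** 2           # <u, Pu> = ||Pu||^2
    print(f"u = {u}: probability of the affirmative answer 1 = {prob:.2f}")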


The debate rages on, but so far the QBists have failed to convince most of us that their imaginary QM is ready to supplant our version.

See also:

http://philsci-archive.pitt.edu/9803/1/comments_on_QBism_-_final.pdf



