Monday, October 6, 2025

William Lane Craig Bumbles Again In His Statistical (Cosmology) Arguments To Prove A Deity

 

                                Wesleyan Theologian William Lane Craig - at it again.


We first met Wesleyan theologian William Lane Craig back in June 2011, when he tried to use Bayesian statistics to prove the Resurrection, blissfully unaware that statistical or probabilistic arguments are incapable of "proving" a claimed objective reality. According to Craig in his latest iteration, both the strength of the gravitational force and the weak nuclear force had to be "fine-tuned" to one part in 10¹⁰⁰, else life could not have been possible.

This is insipid nonsense, since no physical measurement can possibly have such precision. Also, long before the cosmological constant (Λ) was actually determined, cosmologists had already estimated an upper bound of roughly 10⁻¹²⁰ in natural (Planck) units. An alternate computation using quantum mechanics, however, obtained a value ≈ 1, a huge discrepancy of some 120 orders of magnitude.

Why the difference? Because the quantum calculation must factor in the contributions of quantum particles, and not all of the contributing particles are known in respect of their contributions to the vacuum energy of space. We know there is about five times more dark matter than ordinary matter, but that matter comes in different forms, with some particles producing positive contributions and others negative. For example, there is baryonic and non-baryonic dark matter.

The former includes protons and neutrons, while the latter includes electrons and neutrinos. The non-baryonic dark matter further breaks down into cold dark matter and hot dark matter, the terms not so much indicative of current temperatures as of the phase of the early universe at which each 'decoupled' from the hot radiation background following the Big Bang. Cold dark matter particles tend to have larger mass, and among the candidates considered are gravitinos, magnetic monopoles and primordial black holes.

The negative and positive contributions each affect Λ differently, but each sum taken alone is formally infinite, and infinity minus infinity is something our current mathematics can't handle without making extra assumptions. Physicists resolve the impasse by cutting off the associated energy calculations at the scale where quantum mechanics collides with general relativity. Hence, all such calculations are highly speculative. Translation: we do not take them literally.
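To get a feel for the scale of the mismatch, here is a rough back-of-the-envelope sketch in Python. The CODATA constant values and the choice of a Planck-scale cutoff are my own illustrative assumptions, not anything taken from Craig or Davies:

```python
import math

# CODATA 2018 constants (SI units)
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3/(kg*s^2)

# Vacuum energy density if the calculation is cut off at the Planck
# scale (the scale where quantum mechanics meets general relativity):
rho_planck = c**7 / (hbar * G**2)   # J/m^3, of order 1e113

# Observed dark-energy density inferred from Lambda ~ 1.1e-52 m^-2:
Lambda_obs = 1.1e-52                           # m^-2
rho_obs = Lambda_obs * c**4 / (8 * math.pi * G)  # J/m^3, of order 1e-10

# The infamous ~120-order-of-magnitude discrepancy:
discrepancy = math.log10(rho_planck / rho_obs)
print(f"Planck-cutoff estimate: {rho_planck:.2e} J/m^3")
print(f"Observed value:         {rho_obs:.2e} J/m^3")
print(f"Discrepancy: ~10^{discrepancy:.0f}")
```

The script lands near the famous "120 orders of magnitude" figure, which is exactly why nobody takes the naive cutoff calculation literally.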

However, Craig appears to take them very literally, else he'd not claim fine tuning to one part in 10¹⁰⁰ for both the force of gravitational attraction and the weak nuclear force. Where did Craig get this idea? Likely from Paul Davies' book 'The Accidental Universe' (p. 107), which itself bumbles via use of a nonsensical 'quantum' cosmological constant (Λ_q). Davies, incidentally, never used or referenced his 'creation' in any later work.

So Craig is basing his fine-tuning effort on a 'MacGuffin' constant from which emerges an incomprehensible 'infinity'. This would be bad enough, but it is then compounded in his 2008 book 'Reasonable Faith' by confusing cosmic inflation with the cosmological constant.

Craig's desperate efforts remind me of Sir Arthur Eddington's own venture into numerical babble. Sir Arthur once arrived at a value for the fine structure constant of α = 1/136 by taking the ratio of two "naturally occurring units of action" ('Great Ideas and Theories of Modern Cosmology', 1961, p. 178). He chose one unit of action as the quantum for radiation, ħ = h/2π, and the second as the action for elementary particles, e²/c. Then, taking:

(e²/c) / ħ = 1/136

And holy moly, we're almost at α! (Which has an actual measured value of α = 0.0072973525693, or about 1/137.036.)
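For the record, the modern value follows directly from the defining combination of constants. A quick check in Python (the CODATA SI values are my own illustrative inputs; in Gaussian units this same combination reduces to Eddington's e²/c divided by ħ):

```python
import math

# CODATA 2018 constants (SI units)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

# Fine structure constant: alpha = e^2 / (4*pi*eps0*hbar*c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.10f}")   # ~0.0072973526
print(f"1/alpha = {1/alpha:.3f}")  # ~137.036
```

So the measured reciprocal is about 137.036, not Eddington's tidy integer 136 (nor even 137).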

Curiously, Eddington wasn't bothered by the divergence (even from the crude value α = 1/137), and just introduced a "fudge factor". This "was for obscure reasons that are difficult to understand". Perhaps, in the end, he was simply mesmerized by a kind of 'numerology', as Craig is mesmerized by his.

Eddington also came up with a quadratic equation:

10x² − 136x + 1 = 0,

linking his fine structure result with the mass ratio of the proton to the electron, i.e. via the ratio of the equation's two roots. From there, Eddington parlayed his fine structure and other pure-number results into a kind of "universal theory" linking every aspect of the cosmos in a kind of romantic quest. Much like Kepler before him, with his "harmonic geometry" in which the five Platonic solids dictate the structure of the universe and reflect God's plan through geometry.
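As a quick check on how close Eddington's quadratic actually gets, here is a short script of my own, assuming the standard form 10x² − 136x + 1 = 0:

```python
import math

# Eddington's quadratic: 10x^2 - 136x + 1 = 0
a, b, c = 10.0, -136.0, 1.0

disc = math.sqrt(b**2 - 4 * a * c)
root_big = (-b + disc) / (2 * a)
root_small = (-b - disc) / (2 * a)

# The ratio of the two roots was supposed to give the
# proton-to-electron mass ratio (measured value: ~1836.15)
ratio = root_big / root_small
print(f"Ratio of roots: {ratio:.1f}")   # ~1847.6
```

The ratio comes out near 1847.6, versus the measured mass ratio of about 1836.15: close enough to seduce, not close enough to mean anything.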

We shouldn't be too hard on Sir Arthur (or Johannes Kepler), as he wasn't the first scientist to be taken in by numerical relationships, "harmonic" ratios, and "precision" theoretics. Nor will he likely be the last. Even today we behold "scientists" seriously working on the so-called "anthropic principle". This nonsense is based on the fallacy (due to a misunderstanding of physical units and dimensions) that there is an implicit "fine tuning". This in turn depends on a putative "fine precision", but that is almost always an artifact of the choice of units.

The bottom line takeaway from all this?  Theologians ought not depend on fine tuning to prove a supernatural creator.  Better yet, stay out of cosmology and physics, period. 
