Showing posts with label continual creation. Show all posts

Thursday, July 31, 2014

Re-Evaluating Our Cosmological Models: Why Now?



In my 1964 Science Fair project, entitled 'The Structure of the Universe' (which was given a feature look in the Miami Herald), I got many things wrong. The errors arose not from sloppiness but from using the existing base of cosmological data and information to construct my model. Chief among these was the theory of continual creation, proposed by Fred Hoyle and Hermann Bondi. It proposed that hydrogen atoms were ‘created’ throughout the universe on the basis of the perfect cosmological principle. A quantitative rate for the input-creation advanced by Jayant Narlikar ('The Structure of the Universe', Oxford Univ. Press, 1977) was:

4.5 × 10⁻⁴⁵ kg m⁻³ s⁻¹

This was taken to be the rate of new matter created per second within a unit cube whose sides expand at the fractional rate H, where H is Hubble's constant (in units of s⁻¹). One second later the side dimension of the cube will have increased by the factor (1 + H) and its volume will have increased by (1 + H)³ ≈ 1 + 3H. In this way, new matter must be created within the 1 s interval with new mass (per unit volume): M = 3Hρ.

 And so, though the universe was indeed expanding, it didn’t change its appearance, so its density must remain the same. (The additional space created by the expansion must therefore have the same density of matter, ρ.) In addition, because of the principle of “continual creation”, the universe had no beginning and no end. Thereby I was able to construct a model based on a matter and anti-matter universe (one with positive curvature, the other negative) in a state of "equilibrium", with matter destroyed via annihilation equal to the new matter created via continuous creation.
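The bookkeeping above can be sketched in a few lines of Python. The Hubble constant and mean matter density used here are assumed, era-appropriate illustrative values, not measured inputs; the point is simply that 3Hρ lands near Narlikar's quoted figure:

```python
# Sketch of the steady-state "continual creation" bookkeeping.
# In one second a cube of side 1 m expands to side (1 + H) m, so its volume
# becomes (1 + H)**3; to first order the fractional volume gain is 3*H.
# Holding the density rho constant then requires creating new mass at a
# rate of roughly M = 3*H*rho per cubic metre per second.
# Both input numbers are assumptions chosen for illustration:
# H ~ 55 km/s/Mpc (a 1970s-era value) converted to s^-1, and a mean
# matter density rho of order 1e-27 kg/m^3.

KM_PER_S_PER_MPC = 55.0
MPC_IN_KM = 3.086e19               # one megaparsec in kilometres
H = KM_PER_S_PER_MPC / MPC_IN_KM   # Hubble constant in s^-1 (~1.8e-18)
rho = 8.3e-28                      # assumed mean matter density, kg/m^3

creation_rate = 3 * H * rho        # kg m^-3 s^-1 needed to hold rho fixed
print(f"creation rate ~ {creation_rate:.1e} kg m^-3 s^-1")
```

With these assumed inputs the result comes out close to the 4.5 × 10⁻⁴⁵ kg m⁻³ s⁻¹ rate quoted above, which is why such a tiny creation rate was never directly testable in a laboratory.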

It was a beautiful model which garnered top awards, but alas it was only months away from becoming passé. That transpired when the first evidence for the Big Bang emerged, thanks to the Nobel-winning work of Arno Penzias and Robert Wilson. The experience showed me (as it did Fred Hoyle and Hermann Bondi) that our perspective on the universe, and especially our models, can change with just one major new discovery.

I based a lot of my model on the validity of the perfect cosmological principle which maintained that the universe was the same in space as well as time, and the same physical laws that apply on Earth applied everywhere else. In other words, our solar system and planet are nowhere special. Two sub-assumptions of the principle are that: 1)  the universe is homogeneous, i.e. looks the same for all locations, and 2) the universe is isotropic, appearing the same in all directions.

But back in the 1960s we still didn't know of the existence of cosmic voids; those were not discovered until more than a decade later. Voids have roughly 1/10 the matter density of galaxy clusters (like our Local Group) but account for nearly 60 percent of the volume of the visible universe, thereby introducing inhomogeneity.

Even before the void discovery, there was the discovery of relic structures of the Big Bang by George Smoot and his collaborators at the University of California at Berkeley, in 1992. The investigation made use of data obtained from NASA's Cosmic Background Explorer (COBE) satellite. The data exposed very small temperature differentials (δT), from which density variations could be deduced. (In principle the relative temperature variations δT/T are taken as a proxy for density fluctuations δρ/ρ in the early universe.) These variations were also found consistent with the postulated characteristics of an inflationary cosmos, as opposed to an always uniformly expanding cosmos. Indeed, an inflationary phase would feature an exponential rate of expansion by way of doublings over very small time periods.
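Both numbers in that paragraph are easy to illustrate. The anisotropy amplitude and doubling count below are assumed, order-of-magnitude values (roughly the COBE level, and a purely illustrative 100 doublings), used only to show the scales involved:

```python
# COBE measured temperature anisotropies of order dT/T ~ 1e-5; to first
# order these trace density fluctuations d_rho/rho of the same size
# (a simple linear proxy; transfer-function details are ignored here).
T_cmb = 2.725        # mean CMB temperature, kelvin
dT = 30e-6           # ~30 microkelvin, roughly the COBE-level anisotropy
print(f"dT/T ~ {dT / T_cmb:.1e}")

# Inflation's signature is exponential growth via repeated doublings over
# tiny time intervals: after n doublings the scale factor grows by 2**n.
# The count of 100 doublings is purely illustrative.
n_doublings = 100
growth = 2 ** n_doublings
print(f"after {n_doublings} doublings the scale factor grows by ~{growth:.1e}")
```

Even this toy count shows why inflation dwarfs ordinary expansion: a hundred doublings multiply the scale factor by more than 10³⁰.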

What is the problem? It is that cosmologists keep trying to model a homogeneous universe despite data and findings that show the universe is inhomogeneous. To quote astrophysicist Thomas Buchert (New Scientist, June, p. 33):

"To model such a complex structure with a homogeneous solution is a bold idealization."

Of course, cosmologists haven't been deterred. They merely resort to what's called modeling via "statistical homogeneity", which means upping the scale at which the cosmos is examined to one wherein the inhomogeneities are radically reduced or vanish. For example, on the scale of 400 million light years, voids and galaxy clusters average out into uniformity. But is this 'kosher'? Probably not, because we have no real visualization of the cosmos on such scales.
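The averaging trick itself is easy to demonstrate with a toy model. The density field below is invented (voids at one tenth the mean density filling 60 percent of the cells, the rest "clusters"), not fitted to any survey; it just shows how a very lumpy field looks statistically uniform once smoothed over large enough cells:

```python
import random
import statistics

# Toy illustration of "statistical homogeneity": a lumpy density field
# looks uniform once averaged over large enough cells. All numbers are
# illustrative, not drawn from real survey data.
random.seed(1)

# Fine-grained field: ~60% of cells are voids at 0.1x the mean density,
# the rest are "clusters" chosen so the overall mean is ~1.
cells = [0.1 if random.random() < 0.6 else 2.35 for _ in range(100_000)]

def contrast(field):
    """Fractional density contrast: standard deviation / mean."""
    return statistics.pstdev(field) / statistics.fmean(field)

def coarse_grain(field, n):
    """Average the field over non-overlapping blocks of n cells."""
    return [statistics.fmean(field[i:i + n]) for i in range(0, len(field), n)]

print(f"fine-grained contrast:   {contrast(cells):.2f}")
print(f"coarse-grained contrast: {contrast(coarse_grain(cells, 1000)):.3f}")
```

The fine-grained field has an order-unity contrast, while the block averages are nearly uniform. That is exactly the maneuver Buchert questions: the inhomogeneity hasn't gone away, it has only been averaged out of view.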

Not yet mentioned are dark matter and dark energy, especially how the latter overturns our conceptions of cosmic order.

Dark energy has also been found to be linked to the accelerated expansion of the cosmos. Even more interesting, the cosmos' inhomogeneity contributes separately to the acceleration. Thus, as more mass has clumped into galaxy clusters, the cosmic voids have grown, causing the universe to expand more rapidly in those regions. The result is an accelerating effect similar to that attributed to dark energy, but without any remote hint of it. (See e.g. Journal of Cosmology and Astroparticle Physics, Vol. 10, p. 043.)

What does all this mean for our cosmic perspective and cosmological models? Headaches! It means we may have to ditch the simplistic idealizations that pander to order, uniformity and aesthetics and instead come up with some ugly alternatives that violate our temperaments. For example, the whole Einsteinian notion of space-time is predicated on a continuum in which the entities are conjoined. But if space expands at much faster rates in certain places, then one must accept that clocks will tick at different rates too.

As incredible as that sounds, it doesn't come near the ultimate conclusion: if this is so, the very age of the universe (which we now give as 13.8 billion years) is not a constant and instead will depend upon where the measurement is made. If you measure within a void you will get one answer, and in a galaxy cluster another. (According to one recent theory, the age of the universe would be measured to be up to 18.6 billion years where the low density of matter "means the clock has ticked particularly fast"; New Scientist, op. cit.)

But which is better? To live with our idealistic fantasies of order and uniformity of space-time, or to live in reality and know the actual truth of how the universe operates?   Bear in mind the entire history of our science has been overturning sundry pet concepts of the universe, and especially our place within it.

Now may be the time for cosmologists to put on their big boy pants and devise theories which, although they may try the orderly temperament, are much closer to reality!

Monday, June 16, 2014

Belief and Theory: Exact Parallels to Religion and Science? (Part 1)

In ’Port of Call’, one of the newsletters of Intertel, Thomas A. Nelson, Sr. presents an essay which attempts to parse science and religion in terms of theory postulation and belief, respectively.

 Here is how Nelson articulates the differences, beginning with his summary of science:

“Science is bounded by a methodology that is both rigorous and almost ritual, and maybe more than ‘almost’ ritual. The method of science invites attack, disproof, or reinforcement – whatever the busy activities of its adherents want to bring to it. The proponent of a theory is usually the most rigorous examiner of it. Theorists are open-minded: they must be to devise new theories and destroy old ones.”

 
The primary problem I have with this is that in general, the destroyers of the old theories are not the same as those who devise new ones. Hence, the old theorists are not nearly so “open minded” because allowing a newcomer to destroy their original theory may well mean the sacrifice of decades of work. A secondary complaint is referring to the methodology of science as “ritual” or “almost ritual”. But this is exactly the take I’d expect from someone who doesn’t grasp how methodology varies from discipline to discipline. Sure there is a basic template, but also lots of room for variation. An astronomer, for example, can’t conduct experiments so one can’t hold him to the same methodology as a lab physicist!

 Anyway, one of the classic cases of old vs. new theories is in astronomy-cosmology: the clash between the Big Bang and the Steady State theory. The latter, by Fred Hoyle and Hermann Bondi, proposed that a hydrogen atom is ‘created’ in the universe on the basis of the perfect cosmological principle. A quantitative rate for the input-creation of new mass has been advanced by Jayant Narlikar ('The Structure of the Universe', Oxford Univ. Press, 1977) as:
 
4.5 × 10⁻⁴⁵ kg m⁻³ s⁻¹
 
This is taken to be the rate of new matter created per second within a unit cube whose sides expand at the fractional rate H, where H is Hubble's constant. One second later the side dimension of the cube will have increased by the factor (1 + H) and its volume will have increased by (1 + H)³ ≈ 1 + 3H. In this way, new matter is created within the 1 s interval with new mass (per unit volume): M = 3Hρ.
 
 And so, though the universe is indeed expanding, it doesn’t change its appearance, so its density must remain the same. (The additional space created by the expansion must therefore have the same density of matter, ρ.) In addition, because of the principle of “continual creation”, the universe has no beginning and no end.

For many of us in the early 1960s, including yours truly who built a science fair project around it, it was the most satisfying theory one could have. It removed the gnarly issue of ‘beginnings’ especially – and so disposed of religionist fairy tales in one fell swoop.

But it was not to be. The Big Bang exploded onto the cosmology scene by the mid-1960s and, with the discovery of the 2.7 K microwave background radiation by Penzias and Wilson, essentially signed the death certificate of the steady state theory.

But did the ‘old’ theorists go quietly into that good night? Hell no! For decades, Hoyle and his colleague Jayant Narlikar did their best to tweak the theory to try to make it competitive with the Big Bang. Despite valiant efforts, it was never enough. But I would say this stubbornness discloses that Hoyle was not “open minded”. (He was in other areas, where his own theories weren’t challenged so severely, e.g. in the 'panspermia' hypothesis proposed with Chandra Wickramasinghe.)

 
On the topic of religion, Nelson writes:

“Religion exists as a prescription of elaborate rituals but has no methodology. The belief in religion is undertaken by decision, either deliberate or compelled. Its sustenance is driven by fear. Attacks on belief are not permitted and the attack itself and the attacker are received by believers with hostility. No experimentation or observation is permitted. A belief consists only of conclusions defended by a barricade of emotion.”

 
In general, this is an apt description, though Nelson appears to forget – or perhaps has ignored – that many religionists invoke the Bible for “proxy” experiments and physical evidence. For example, I already mentioned in an earlier blog my class notes in Theology from Loyola which referenced “demoniacs”, e.g.
Notes on Demoniacs from Loyola University, 1964.
 
The notes in question cited assorted cases (mainly from the New Testament) intended to vitiate rationalist arguments, using the NT itself as proxy evidence. These included:

1)     “Demoniacs always acted differently toward Jesus than the ‘regular’ sick did.”

2)     Jesus may have “cured” the sick but he had to “heal” the demoniac.

3)     Jesus himself inculcated demoniac belief, e.g. describing in detail the habits of demons who possess men (Matt: 12: 43-45) as well as the methodology to cast them out (Matt. 17: 17-20)

4)     Extraordinary physical strength and superhuman knowledge are manifested by demoniacs which sick people do not show.

 Not to be a heartless cynic and unbeliever, but one need only invoke the über retort: that for all those cases, words were changed (or translations) and descriptions were inserted into the good Book by either the original writers or by copyists. This is the general thesis of the Jesus Seminar and of scholars like Bart Ehrman ('Misquoting Jesus').

Also we can refer to Catholic historian Rev. Thomas Bokenkotter, who notes in his monograph ‘A Concise History of the Catholic Church’ (1979, p. 17):

“The Gospels were not meant to be a historical or biographical account of Jesus. They were written to convert unbelievers to faith in Jesus as the Messiah, or God.”

 In other words, the earlier pagan tracts and myths were copied to try to fulfill a Church agenda, not to disclose any historical or biographical truth. Hence, based on this and the evidence that nearly all the scriptures were subject to manipulation, it is more justified to reject the notion of demoniacs than to accept them.
 
In this regard, one must conclude that the proxy evidence derived from so-called "sacred" texts isn't evidence at all but is indeed more about condoning a supernatural belief that doesn't admit of any practical validation. In this sense, Nelson is indeed correct that “no experimentation or observation is permitted”.
 
But he also makes the claim that religion itself is more about the psychological states of believers than any kind of objective reality, or sacred truths. We will explore this aspect and others in Part 2.