In a recent issue (Oct.-Nov.) of the Intertel journal, INTEGRA, Ken Wear asks:
“Supposing the Big Bang theory is correct, what were the initial conditions that produced it?”
This can be approached in a more or less practical way by treating the ‘Big Bang’ as a solution to Einstein’s tensor (field) equations. (See, e.g. ‘Quantum Field Theory – A Modern Introduction’, by Michio Kaku, p. 643):
As suggested by the isotropy of the 2.7 K microwave background radiation, we assume a spatially isotropic metric tensor, for which we adopt the Robertson-Walker form. Its coefficients carry no angular dependence, leaving a single function R(t) that sets the scale and defines an ‘effective radius’ of the universe.
We have:
ds^2 = dx^u g_uv dx^v = dt^2 - R^2(t) [ dr^2/(1 - kr^2) + r^2 d(S)^2 ]
where d(S)^2 is the solid-angle differential and k is the curvature constant.
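For readers who like to see the components written out, here is a minimal sympy sketch (my own illustration, not from Kaku's text) of the metric tensor implied by this line element, with the solid-angle term d(S)^2 expanded as dtheta^2 + sin^2(theta) dphi^2:

```python
import sympy as sp

# Coordinates, the curvature constant k, and the scale function R(t).
t, r, theta, phi, k = sp.symbols('t r theta phi k')
R = sp.Function('R')(t)

# Robertson-Walker metric with signature (+, -, -, -), matching the line element
# ds^2 = dt^2 - R^2(t) [ dr^2/(1 - k r^2) + r^2 (dtheta^2 + sin^2(theta) dphi^2) ]
g = sp.diag(1,
            -R**2 / (1 - k * r**2),
            -R**2 * r**2,
            -R**2 * r**2 * sp.sin(theta)**2)

sp.pprint(g)
```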
Associate with this a fluid of average density rho(t) and internal pressure p(t).
The energy-momentum tensor becomes: T_0^0 = rho, T_i^i = -p
with all other components zero.
After inserting these into the Einstein field equations, one obtains:
((dR/dt)/R)^2 = (8 pi/3) G_N rho - k/R^2
whence:
(d^2R/dt^2)/R = -4 pi G_N (p + rho/3) + LAMBDA/3
After setting the cosmological constant (LAMBDA) = 0 and eliminating rho, one obtains R as a power-law function of time:
R = (9 G M/2)^(1/3) t^(2/3)
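As a quick sanity check on the power law (my own illustration, not from Kaku's text), one can integrate the k = 0, LAMBDA = 0, matter-dominated Friedmann equation numerically - here in arbitrary units with 2 G_N M = 1, an assumption made purely for illustration - and confirm the t^(2/3) exponent:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Matter-dominated, k = 0, LAMBDA = 0: the first Friedmann equation reduces to
#   (dR/dt)^2 = 2 G_N M / R,  with M = (4 pi/3) rho R^3 held constant.
# Arbitrary units with 2 G_N M = 1 (purely for illustration).

def dRdt(t, R):
    return np.sqrt(1.0 / R)          # dR/dt = sqrt(2 G_N M / R)

t0, t1 = 1.0e-3, 1.0
R0 = (1.5 * t0) ** (2.0 / 3.0)       # start on the analytic solution
t_eval = np.logspace(np.log10(t0), np.log10(t1), 200)
sol = solve_ivp(dRdt, (t0, t1), [R0], t_eval=t_eval, rtol=1e-8)

# Fit log R against log t: the slope should come out very close to 2/3.
slope = np.polyfit(np.log(sol.t), np.log(sol.y[0]), 1)[0]
print(f"numerical exponent ~ {slope:.4f}  (analytic: 2/3 ~ 0.6667)")
```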
One can deduce from this solution (ibid., p. 645) that at the Planck energy of 10^19 GeV (giga-electron volts), the symmetries of gauge theory were still united in a single force. This corresponds to a cosmic age of roughly 10^-44 s.
This represents the closest approach of physics to the cosmic singularity (t = 0) but still defines the ‘Big Bang’ since the explosion is already underway and forces are still unified.
This continues as other symmetries ‘break’ one by one, leading to the radiation-dominated era (described by the Bose-Einstein distribution function, which applies exactly to the expanding pure photon gas).
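To make the photon-gas statement concrete: the occupation number of an equilibrium photon gas is the Bose-Einstein factor 1/(exp(h nu/kT) - 1) with zero chemical potential. A short Python sketch (my own illustration) evaluates the resulting spectral energy density at the present 2.7 K background temperature and locates its peak near 160 GHz:

```python
import numpy as np

h_P = 6.626e-34    # Planck constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K
c   = 2.998e8      # speed of light, m/s

def u_nu(nu, T):
    """Spectral energy density of an equilibrium photon gas at temperature T:
    the photon density of states times the Bose-Einstein occupation number,
    u(nu) = (8 pi h nu^3 / c^3) / (exp(h nu / k T) - 1)."""
    return (8.0 * np.pi * h_P * nu**3 / c**3) / np.expm1(h_P * nu / (k_B * T))

T_cmb = 2.725                           # present background temperature, K
nu = np.logspace(9, 13, 4000)           # 1 GHz .. 10 THz
nu_peak = nu[np.argmax(u_nu(nu, T_cmb))]

print(f"spectrum peaks near {nu_peak / 1e9:.0f} GHz")
print(f"Wien estimate 2.821 k_B T / h = {2.821 * k_B * T_cmb / h_P / 1e9:.0f} GHz")
```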
The fact that the ‘Big Bang’ can be obtained as a solution to one version of Einstein’s tensor equations discloses that the QM and GR equations certainly don’t ‘blow up’ and become impossible to use.
Mr. Wear then makes the assertion that it “would certainly be a violation of our concepts of cause and effect to say that suddenly, out of nothing….came this cataclysmic explosion”
But again, as I noted earlier, cause-and-effect notions are of little use here. What we need instead are necessary and sufficient conditions for the event to occur - which, by the way, is not an ‘explosion’! I refer Mr. Wear to ASTRONOMY magazine, May 2007, ‘5 Things You Need To Know’, p. 31:
“The Big Bang wasn’t any kind of explosion. It was closer to an unfolding or creation of matter, energy, time and space itself. What would actually have been a much better name is ‘expanding universe theory’.”
As to how spontaneous cosmic inception can occur, this was referenced by T. Padmanabhan, 1983, ‘Universe Before Planck Time – A Quantum Gravity Model', in Physical Review D, Vol. 28, No. 4, p. 756.
To fix ideas, we are interested in first determining the gravitational action, and from this whether acausal determinism is more or less likely to apply. For any action S(g) if
S(g) << h (the Planck constant),
where h = 6.626 x 10^-34 J·s,
we may be sure that classical causality is out the window and we are dealing with acausal determinism.
If S(g) >> h,
the converse holds.
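A trivial Python sketch (my own, purely illustrative; the factor-of-1000 margin standing in for ‘<<’ and ‘>>’ is an arbitrary choice) of this decision rule, comparing a couple of sample actions against h:

```python
H_PLANCK = 6.626e-34   # J s

def regime(S, margin=1.0e3):
    """Classify an action S (in J s) against h; the factor-of-1000 margin is an
    arbitrary stand-in for 'much less than' / 'much greater than'."""
    if S < H_PLANCK / margin:
        return "S << h : acausal (quantum) determinism"
    if S > H_PLANCK * margin:
        return "S >> h : classical causality applies"
    return "S ~ h : borderline regime"

# A thrown ball, roughly: action ~ (kinetic energy) x (time) ~ 1 J x 1 s.
print(regime(1.0))
# The radiation-era gravitational action worked out below turns out to be S(g) = 0.
print(regime(0.0))
```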
To evaluate S(g), as Padmanabhan shows (op. cit.), we need V, the 4-volume of the space-time manifold, for which we choose a de Sitter space in the first approximation.
We have
S(g) = [c^3/(16 pi G)] INT_V R (-g)^(1/2) d^4x
where G is the gravitational constant, c is the speed of light, R is here the scalar curvature, g is the determinant of the metric tensor, and the integral (INT) is over the 4-volume V with the differential d^4x to match.
In the big bang model one takes V as the spatial volume enclosed by the particle horizon, and bounded by the time span (t) of the universe. Thus, at any epoch t for k = 0,
S(g) ~ t^1/2
The particle horizon (radius r_H) is defined by
r_H(t) = 2ct
Einstein's gravitational equations (with cosmological term, for the sake of generality) are
R_ik - (1/2) g_ik R = T_ik + lambda g_ik
where ‘lambda’ denotes the cosmological constant and R_ik is the Ricci tensor. For de Sitter space the cosmological constant is equal to:
lambda = (n - 1)(n - 2)/(2 a^2)
where a is a scale factor and n denotes the dimension (here n = 4) of the space under consideration, giving lambda = 3/a^2.
Now, in the regime where S(g) ~ t^(1/2), the scalar curvature R actually vanishes, so S(g) = 0.
This happens because the energy-momentum tensor T_ik is traceless in the early universe: taking the trace of the field equations (with lambda = 0) makes R proportional to the trace of T_ik, so a traceless T_ik forces R = 0. The ‘trace’ is the sum of the diagonal elements of a tensor, e.g.
Tr(M) = 0
where M =
[0 1 0 ]
[0 -1 0 ]
[0 0 1 ]
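A short NumPy check (my own illustration) that the toy matrix above is traceless, and that a radiation-era stress-energy tensor with mixed components diag(rho, -p, -p, -p) is traceless precisely when p = rho/3 - the condition driving R, and hence S(g), to zero in the argument above:

```python
import numpy as np

# The toy matrix above: off-diagonal entries do not affect the trace.
M = np.array([[0,  1, 0],
              [0, -1, 0],
              [0,  0, 1]])
print("Tr(M) =", np.trace(M))            # 0 + (-1) + 1 = 0

# Radiation-era stress-energy in mixed form, T^mu_nu = diag(rho, -p, -p, -p).
# For a pure photon gas p = rho/3, so the trace rho - 3p vanishes exactly.
rho = 3.0                                # arbitrary units
p = rho / 3.0
T = np.diag([rho, -p, -p, -p])
print("Tr(T) =", np.trace(T))            # rho - 3p = 0
```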
Since S(g) = 0 << h, the appropriate limit is definitely that of acausal determinism, NOT the classical one - classical causality included.
Wear also alludes to a “sequence of oscillations” (ibid.), but this is egregiously wrong: there will be no oscillations, since the universe is not only expanding forever but accelerating in its expansion.
Universes that re-collapse, universes that expand forever with the expansion velocity tending to zero, and universes that expand forever with a positive limiting velocity are called, respectively, ‘closed’ (curvature k = +1), ‘critical’ (k = 0), and ‘open’ (k = -1).
Now, to determine whether any F-R-W (Friedmann-Robertson-Walker) cosmological template leads to deceleration or not, we need to find the cosmic density parameter:
OMEGA = rho / rho_c
where the denominator rho_c is the critical density. Thus if:
rho > rho_c
then the cosmic density is able to reverse the expansion (i.e. decelerate and ultimately re-collapse it) and conceivably usher in a new cycle (a new Big Bang, etc.). The observations that help determine how large OMEGA is come mainly from observing galaxy clusters in different directions in space and obtaining a density estimate from them.
Current data, e.g. from the Boomerang balloon experiment and from satellite detectors, show that OMEGA ~ 0.3, or that:
rho = 0.3 (rho_c)
i.e. rho < rho_c, so there is no danger of the expansion reversing.
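For concreteness, here is a small Python sketch (my own illustration; the Hubble constant of ~70 km/s/Mpc is an assumed round value) that evaluates the critical density rho_c = 3 H_0^2/(8 pi G) and applies the OMEGA criterion:

```python
import math

G   = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22               # metres per megaparsec
H0  = 70.0e3 / MPC           # assumed Hubble constant ~70 km/s/Mpc, in s^-1

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"critical density rho_c ~ {rho_c:.1e} kg/m^3")    # of order 1e-26 kg/m^3

def fate(omega):
    """OMEGA = rho/rho_c decides whether gravity can reverse the expansion
    (matter-only classification, as in the text)."""
    if omega > 1.0:
        return "rho > rho_c : expansion can reverse (closed, k = +1)"
    if omega == 1.0:
        return "rho = rho_c : critical (k = 0)"
    return "rho < rho_c : expansion never reverses (open, k = -1)"

print(fate(0.3))   # the OMEGA ~ 0.3 value quoted above
```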
Precision measurements of the cosmic microwave background (CMB), including data from the Wilkinson Microwave Anisotropy Probe (WMAP), have recently provided further evidence for dark energy. The same is true of data from two extensive projects charting the large-scale distribution of galaxies - the Two-Degree Field (2DF) and Sloan Digital Sky Survey (SDSS).
The curves from other data, plotting corrected apparent magnitude against redshift (z), give different combinations of OMEGA_dark and OMEGA_matter over the range. However, only one combination best fits the data:
OMEGA_dark = 0.65 and OMEGA_matter = 0.35
This corresponds to an expansion that has been accelerating for roughly the last 6 billion years, with much more dark energy involved (~ 0.65) than ordinary matter.
When the predictions of the different theoretical models are combined with the best measurements of the cosmic microwave background, galaxy clustering and supernova distances, we find that:
0.62 < OMEGA_dark < 0.76,
where OMEGA_dark = rho_dark/rho_c, and the dark-energy equation-of-state parameter w satisfies -1.3 < w < -0.9.
Taken together, the numbers show unequivocally that dark energy is the acceleration agent, and in addition that dark energy comprises the lion’s share of what constitutes the cosmos (~ 73%).
In addition, all of these data are firmly backed up by earlier Boomerang (balloon) data that, when plotted as a power spectrum, disclose two adjacent ‘humps’, one a bit higher than the other. The “first acoustic peak” and the “second acoustic peak” fit uncannily well to the sort of spherical harmonic function that describes a particular plasma condition - in this case, one that conforms to the supernova-derived values of OMEGA (dark, matter). (See: ‘Balloon Measurements of the Cosmic Microwave Background Strongly Favor a Flat Cosmos’, in Physics Today, July 2000, p. 7, and ‘Supernovae, Dark Energy and the Accelerating Universe’, by Saul Perlmutter, in Physics Today, April 2003, p. 53.)
Lastly, astronomers make no “claim” that galaxies are moving apart with increasing velocities. We have actual data that this is so, and it’s based on the basic physics of the Doppler effect.
-----------------------! L1 -------! L2----
---!----------!------------------
L1(o) L2(o)
Thus, in the above pictograph, lines L1(o) and L2(o) are the observed, redshifted (by some number of nanometers) spectral lines for some distant object such that:
v = cz
where v denotes the velocity of recession, c is the speed of light, and z is the redshift:
z = [L2(o)/L2] - 1
Note again that L2 is the standard (lab-emission) line and L2(o) the observed line wavelength. If z > 0 we say the line is Doppler redshifted and the object is receding.
To illustrate, say the hydrogen-alpha line (emitted at 656.3 nm, i.e. L2 = 656.3 nm) is redshifted in some distant object to 666 nm (L2(o)). Then we have:
z = 1.015 – 1.000 = 0.015
This translates to a recessional velocity: v = (3 x 10^8 m/s)(0.015) = 4.5 x 10^6 m/s.
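The same arithmetic in a few lines of Python (my own illustration); note that the 4.5 x 10^6 m/s figure above comes from rounding z to 0.015 before multiplying:

```python
C = 3.0e8   # speed of light, m/s

def recession(lam_obs_nm, lam_rest_nm):
    """Redshift z = lambda_obs/lambda_rest - 1 and recession velocity v = c z
    (the low-z approximation used in the text)."""
    z = lam_obs_nm / lam_rest_nm - 1.0
    return z, C * z

# Hydrogen-alpha: rest wavelength 656.3 nm, observed at 666 nm.
z, v = recession(666.0, 656.3)
print(f"z ~ {z:.4f},  v ~ {v:.2e} m/s")   # ~0.0148 and ~4.4e6 m/s
```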
As to Wear’s claim that it “may be difficult to place credence in such observations over a comparatively brief interval of time”: perhaps, but this “brief interval” is all we have to work with. What, will he dismiss all our painstakingly obtained data (including from the new CERN Large Hadron Collider) because they were obtained over brief intervals? This isn’t the way a Realist works, but it is certainly the modus operandi for an Idealist.
2 comments:
Recently I have been reading the book "Big Bang" by Simon Singh. It's fascinating.
Are there still doubts on whether the universe began like this?
"Are there still doubts on whether the universe began like this?"
Not really. The 2.7K microwave background radiation (for which Penzias and Wilson won the Nobel Prize) pretty well nails it.
The only doubts now concern what transpired in the immediate wake: How long did early inflation last? Was there a phase in which antimatter predominated? And so forth.