Tuesday, December 30, 2008
The Amazing Aurora
I took the above photo of an auroral display in March 2005, just outside Chena Hot Springs, Alaska. I was glad my wife was with me to behold the sight (her first time), since it later evolved into an exciting, dynamic scene with bursts of red and green interacting and changing positions.
How are these magnificent displays caused?
One can visualize the Earth as a giant spherical magnet, with magnetic field lines extending from its north to its south magnetic pole. These field lines have the property that any charged particles (protons, electrons or ions) that approach them will spiral along them. The Earth itself is "bathed" in the solar wind, a stream of high-speed charged particles flowing out into space from the Sun's corona - the hot, gaseous envelope that spews these particles out continuously, and more so when there is a violent explosion known as a solar flare.
Near the Earth the speed of these particles can reach 400-500 km/second. (Because of its high temperature, over a million degrees, the corona gas is ionized, so it must consist of charged particles, mainly protons and electrons.) During high solar activity (e.g. near sunspot maximum) a higher flux of these charged particles loads the solar wind and inundates the region around the Earth.
The Earth's magnetic field traps these charged particles, and their density is highest around the polar regions - in what we refer to as the "auroral ovals". In these regions very large electric currents are set up as the charged particles move in unison about the magnetic field lines; these currents can easily reach a few MILLION amperes.
As this discharge occurs, one or more outer electrons is stripped from atmospheric atoms - oxygen, for example - and these ions then RECOMBINE with electrons to form neutral (e.g. oxygen) atoms again.
With this RECOMBINATION there is EMISSION of light in a particular part of the visible spectrum.
For example, in the case of recombining oxygen atoms the emitted light at 557.7 nm lies in the GREEN region of the spectrum; it is this line that gives the familiar green, curtain-like shimmering of the northern lights, and it tends to form below about 100 km. The remarkable red aurora is produced by emission at the 630 nm (nanometer) oxygen line and occurs at relatively high altitudes (roughly 200-600 km) compared to the green.
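To connect those wavelengths to photon energies, here is a minimal sketch in plain Python (an illustration only, with the physical constants hard-coded to standard approximate values; the 557.7 nm and 630.0 nm figures are the atomic-oxygen lines quoted above):

# Photon energy E = h*c/wavelength for the two atomic-oxygen auroral lines.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Return the photon energy in eV for a wavelength given in nanometers."""
    return H * C / (wavelength_nm * 1e-9) / EV

for name, wl in [("green O line", 557.7), ("red O line", 630.0)]:
    print(f"{name}: {wl} nm -> {photon_energy_ev(wl):.2f} eV")

# Output: roughly 2.22 eV for the green line and 1.97 eV for the red line,
# i.e. each red photon carries less energy than a green one.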
Auroras can appear in both diffuse and discrete forms. In the first case the shape is ill-defined, and the aurora is believed to be formed from particles originally trapped in the magnetosphere which then propagate into the lower ionosphere via wave-particle interactions.
Thus, multicolored auroras can be explained by emissions from different atoms in the upper atmosphere, mainly in the regions of the magnetic poles. This is also why, of course, they are most often seen in the vicinity of the north and south magnetic poles. (Though there have been reports of northern lights seen as far south as northern Florida, especially during periods of exceptional sunspot activity or LARGE SOLAR FLARES, massive explosions on the Sun.)
A great analogy has been given by Syun Akasofu, comparing the aurora to images on a TV screen. In this case the (polar) upper atmosphere corresponds to the screen and the aurora to the image projected on it. The electron beam in the TV (remember we are talking about the old-style cathode ray sets!) corresponds to the electron beam in the magnetosphere. In a conventional TV the motions of the image are generated by the changing impact points of the electron beam on the screen; similarly, the aurora's motions - such as moving sheets or curtains - are produced by the moving impact points of the magnetospheric electron beams.
In gauging the power and intensity of auroras at different times, it is useful to remember that ultimately the aurora derives its power and potential from the Sun, and specifically from the charged particles of the solar wind. This is why the most spectacular displays usually occur near sunspot maximum. Around those times the currents I noted earlier are "amped" up - no pun intended - to 10^6 A or more. To give an example, during a quiet Sun interval like the one we are in now, the residual power of the magnetospheric generator is on the order of a tenth of a megawatt. If a new cycle comes on and the solar wind is activated, that power may rise to a million megawatts for a few hours.
If intense enough, such solar storms can herald the onset of enormous induction currents, such as those that caused parts of the Quebec power grid to fail in March 1989.
But, as solar cycle 24 slowly ramps up, the aurora - of whatever color or shape - will be eagerly anticipated.
Monday, December 22, 2008
Star of Bethlehem? Or a MacGuffin?
At many lectures I've given, one of the most asked questions has been: "Was there really a star of Bethlehem?"
This is a difficult question. In the preliminary pass for any sort of validating records, it entails assuming any or all of the ancient scriptures were true historical artifacts, not mere mythological escapism masquerading as such. For example, Matthew 2:1-2 notes such a "star" - but not one of the other gospel writers peeps a syllable about it. Why not? Why - if it was such a signal event (no pun intended) and an actual occurrence - did none of the other New Testament authors note it? This is disturbing and makes one recall the words of Catholic historian the Rev. Thomas Bokenkotter, in his monograph 'A Concise History of the Catholic Church' (page 17):
“The Gospels were not meant to be a historical or biographical account of Jesus. They were written to convert unbelievers to faith in Jesus as the Messiah, or God.”
This is a shattering admission indeed, and from a historian of Christendom’s largest Church. It is a de facto admission that no historical support exists for any of the accounts in the New Testament, including Matthew's star. But for the sake of this article, let us assume there is something there, some faint signal amidst the noise.
We consider ordinary bright stars first. For an observer at Middle Eastern latitudes 2,000 years ago, there would have been at least ten visible at this time of year, each in a different direction and location on the celestial sphere. Thus, no one star would be visible for long, and certainly not at a fixed location or altitude such that it might provide a "search beacon".
The only other stellar candidate one might invoke is a nova or exploding star. Certainly the incredible brightness common to such objects would attract attention, but could one have occurred then and provided the basis for the Matthew citation? Interestingly, this very attribute of attention-getting eliminates the nova theory from contention. Such a cataclysmic event could not have escaped notice, yet there is no mention of one in any astronomical records of the time - including those of the Chinese, who were already consummate stargazers.
An alternative explanation is that the object was a bright comet. An exceptionally brilliant comet was recorded in 45 B.C., but this is too far in advance of the probable Nativity date. Could such a comet have appeared suddenly and unpredictably around the time? Possibly, but it's doubtful such an event would have been associated with anything beneficent. Two thousand years ago comets were uniformly regarded by all cultures as omens of impending disaster, so we can rule them out.
The only other reasonable explanation is that the Magi witnessed an uncommon astronomical alignment of bright planets. One such candidate is the triple conjunction of the planets Jupiter and Saturn in 7 B.C. A "triple conjunction" here means that Jupiter and Saturn appeared in close proximity no fewer than three times in succession.
One can speculate here that the Magi, in preparing for their journey, witnessed the first conjunction ca. May 29. A second conjunction, observed on September 29, could have established that Jerusalem was in the general direction they needed to go. Finally, a third conjunction on Dec. 4 would presumably have provided the final directional "fix", leading to Bethlehem some eight kilometers away.
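For anyone who wants to check those three dates independently, the following rough sketch uses the Skyfield library with the long-span DE422 ephemeris (assumptions: Skyfield is installed, the large de422.bsp file can be downloaded, and the dates are treated as proleptic Gregorian, which is adequate for a degree-level check). It simply prints the apparent Jupiter-Saturn angular separation on each date:

# Rough check of the 7 B.C. Jupiter-Saturn separations (astronomical year -6 = 7 B.C.).
from skyfield.api import load

ts = load.timescale()
eph = load('de422.bsp')          # long-span JPL ephemeris (roughly 3000 B.C. to A.D. 3000)
earth = eph['earth']
jupiter = eph['jupiter barycenter']
saturn = eph['saturn barycenter']

for month, day in [(5, 29), (9, 29), (12, 4)]:
    t = ts.utc(-6, month, day)   # 7 B.C., date taken as proleptic Gregorian
    sep = earth.at(t).observe(jupiter).separation_from(earth.at(t).observe(saturn))
    print(f"7 B.C. {month:02d}-{day:02d}: Jupiter-Saturn separation = {sep.degrees:.2f} deg")

If the separations come out around a degree on each of the three dates, that is consistent with the triple-conjunction picture sketched above.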
The accuracy of the above speculations (and I reinforce that's all these are!) rests on the dubious assumption that our present calendar is a bit off and that Christ was actually born in 7 B.C. rather than the 1 B.C. usually quoted.
Given this, one is forced to concede that at the present time there is no comprehensive astronomical explanation which consistently explains all the details. The triple conjunction sounds like the best, assuming we are really in the year 2001 and not the end of 2008.
Perhaps the event must remain forever intangible and beyond the realm of any scientific investigation. Or, perhaps there never was such an object in the first place - and Matthew simply resorted to some elaborate poetic license.
Friday, December 19, 2008
Some Fun with Transpositions
Various aspects of math can provide hours of fun and amusement. One of these entails transpositions. Some basics on transpositions (and even and odd permutations) first:
A transposition is a permutation which interchanges two numbers and leaves the others fixed. The inverse of a transposition T is equal to the transposition T itself, so that:
T^2 = I (the identity permutation, i.e. the permutation such that I(i) = i for all i = 1, ..., n)
A permutation p of the integers {1, ..., n} is denoted by
[ 1     2    ...  n    ]
[ p(1)  p(2) ...  p(n) ]
So that, for example:
[1 2 3]
[2 1 3]
denotes the permutation p such that p(1) = 2, p(2) = 1, and p(3) = 3.
Now, let's look at EVEN and ODD permutations:
Let P_n denote the polynomial in the n variables x_1, x_2, ..., x_n which is the product of all the factors (x_i - x_j) with i < j. That is,
P_n(x_1, x_2, ..., x_n) = PROD_{i < j} (x_i - x_j)
The symmetric group S(n) acts on the polynomial P_n by permuting the variables. For p in S(n) we have:
P_n(x_p(1), x_p(2), ..., x_p(n)) = (sgn p) P_n(x_1, x_2, ..., x_n)
where sgn p = +/-1. If the sign is positive then p is called an even permutation, if the sign is negative then p is called an odd permutation. Thus: the product of two even or two odd permutations is even. The product of an even and an odd permutation is odd.
Back to transpositions!
We just saw:
[1 2 3]
[2 1 3]
The above permutation is actually a transposition 2 <-> 1 (leaving 3 fixed).
Now, let p' be the permutation:
[1 2 3]
[3 1 2]
Then pp' is the permutation such that:
pp'(1) = p(p'(1)) = p(3) = 3
pp'(2) = p(p'(2)) = p(1) = 2
pp'(3) = p(p'(3)) = p(2) = 1
It isn’t difficult to ascertain that: sgn (ps) = (sgn p) (sgn s)
so that we may write:
pp' =
[1 2 3]
[3 2 1]
Now, find the inverse (p')^-1 of the permutation p'. (Note: the inverse permutation, denoted (p')^-1, is defined as the map (p')^-1 : Z_n -> Z_n such that (p')^-1 p' = I.)
Since p'(1) = 3, then (p')^-1(3) = 1
Since p'(2) = 1, then (p')^-1(1) = 2
Since p'(3) = 2, then (p')^-1(2) = 3
Therefore:
(p')^-1 =
[1 2 3]
[2 3 1]
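As a quick sanity check, here is a minimal Python sketch (an illustration only, representing a permutation as a dict mapping i to p(i)) that recomputes the composition pp' and the inverse of p' found above:

# p is the transposition 2 <-> 1 (3 fixed); pprime is the permutation p' above.
p      = {1: 2, 2: 1, 3: 3}
pprime = {1: 3, 2: 1, 3: 2}

def compose(f, g):
    """Return the permutation f o g, i.e. the map i -> f(g(i))."""
    return {i: f[g[i]] for i in g}

def inverse(f):
    """Return the inverse permutation of f."""
    return {v: k for k, v in f.items()}

print(compose(p, pprime))   # maps 1->3, 2->2, 3->1, i.e. the permutation [3 2 1]
print(inverse(pprime))      # maps 1->2, 2->3, 3->1, i.e. the permutation [2 3 1]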
Problem: Express
p =
[1 2 3 4]
[2 3 1 4]
as the product of transpositions, and determine the sign (+1 or -1) of the resulting end permutation.
Let T1 be the transposition 2 <-> 1 leaving 3, 4 fixed, so:
T1 p =
[1 2 3 4]
[1 3 2 4]
Let T2 be the transposition 2 <-> 3 leaving 1, 4 fixed, so:
T2 T1 p =
[1 2 3 4]
[1 2 3 4]
Then:
T2 T1 p = I (identity)
so that p = T1^-1 T2^-1 = T1 T2 (each transposition being its own inverse). Thus p is the product of TWO transpositions, and its sign is (-1)(-1) = +1.
The permutation is even.
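The parity can also be verified mechanically. The short Python sketch below (an illustration, using the inversion-count definition sgn(p) = (-1)^(number of pairs i < j with p(i) > p(j))) confirms that a single transposition is odd and that the permutation in the problem is even:

def sign(perm):
    """perm lists p(1), p(2), ..., p(n); return +1 for even, -1 for odd."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

print(sign([2, 1, 3]))      # -1: a single transposition is odd
print(sign([2, 3, 1, 4]))   # +1: the permutation in the problem is even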
Monday, December 15, 2008
Initial Conditions of the Big Bang?
In a recent issue (Oct.-Nov.) of the Intertel journal, INTEGRA, Ken Wear asks:
“Supposing the Big Bang theory is correct, what were the initial conditions that produced it?”
This can be approached in a more or less practical way by treating the ‘Big Bang’ as a solution to Einstein’s tensor (field) equations. (See, e.g. ‘Quantum Field Theory – A Modern Introduction’, by Michio Kaku, p. 643):
Consistent with the 2.7 K isotropic microwave background radiation, we assume radial symmetry for the metric tensor, for which we adopt a Robertson-Walker form. This omits all angular dependence and leaves a function R(t) which sets the scale and defines an 'effective radius' of the universe.
We have:
ds^2 = g_uv dx^u dx^v = dt^2 - R^2(t) [ dr^2/(1 - kr^2) + r^2 d(S)^2 ]
where d(S)^2 is the solid angle differential and k = const.
Associate with this a fluid of average density rho(t) and internal pressure p(t)
The energy-momentum tensor becomes: T_0^0 = rho, T_i^i = -p (i = 1, 2, 3),
with all other components zero.
After inserting these into the Einstein field equations:
( (dR/dt)/R )^2 = (8 pi G_N/3) rho - k/R^2
whence:
(d^2R/dt^2)/R = -4 pi G_N (p + rho/3) + LAMBDA/3
After setting the cosmological constant LAMBDA = 0 and eliminating rho, one obtains the scale R (the 'radius' of the universe) as a power-law function of time:
R = (9GM/2)^(1/3) t^(2/3)
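As a numerical cross-check on that power law, here is a minimal sketch in Python with SciPy (an illustration only: the constant in dR/dt = A R^(-1/2) is set to 1, since only the exponent is of interest); it integrates the flat (k = 0), LAMBDA = 0, matter-dominated equation and fits the exponent of R(t):

# With rho ~ R^-3, k = 0 and LAMBDA = 0, the first equation above reduces to
# dR/dt = A * R^(-1/2); we take A = 1 and check that R grows as t^(2/3).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, R):
    return 1.0 / np.sqrt(R)

sol = solve_ivp(rhs, (1.0, 1000.0), [1.0], dense_output=True, rtol=1e-9, atol=1e-12)
t = np.logspace(1, 3, 50)                       # sample well past the initial condition
R = sol.sol(t)[0]
exponent = np.polyfit(np.log(t), np.log(R), 1)[0]
print(f"fitted exponent: {exponent:.3f}")       # comes out close to 2/3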
From this solution one can deduce (ibid., p. 645) that at the Planck energy of 10^19 GeV (giga-electron volts) the symmetries of gauge theory were still united in a single force. This corresponds to a cosmic age of 10^-44 s.
This represents the closest approach of physics to the cosmic singularity (t = 0) but still defines the 'Big Bang', since the expansion is already underway and the forces are still unified.
This continues as other symmetries ’break’ one by one, leading to the radiation dominated era. (Described by the Bose-Einstein distribution function, which perfectly applies to the expanding pure photon gas).
The fact that the 'Big Bang' can be obtained as a solution to one version of Einstein's tensor equations discloses that the QM and GR equations certainly don't 'blow up' and become impossible to use.
Mr. Wear then makes the assertion that it “would certainly be a violation of our concepts of cause and effect to say that suddenly, out of nothing….came this cataclysmic explosion”
But again, as I noted earlier, cause and effect notions are of little use. What we need instead are necessary and sufficient conditions for the event to occur - which by the way, is not an ‘explosion”! I refer Mr. Wear to ASTRONOMY magazine, May, 2007, ‘5 Things You Need To Know’, p. 31:
“The Big Bang wasn’t any kind of explosion. It was closer to an unfolding or creation of matter, energy, time and space itself. What would actually have been a much better name is ‘expanding universe theory’.”
As to how spontaneous cosmic inception can occur, this was referenced by T. Padmanabhan, 1983, ‘Universe Before Planck Time – A Quantum Gravity Model', in Physical Review D, Vol. 28, No. 4, p. 756.
To fix ideas, we are interested first in determining the gravitational action, and from this whether acausal determinism is more or less likely to apply. For any action S(g), if
S(g) << h (the Planck constant)
where h = 6.626 x 10^-34 J·s,
we may be sure that classical causality is out the window and we are dealing with acausal determinism.
If S(g) >> h
the converse holds.
To evaluate S(g), as Padmanabhan shows (op. cit.), we need V, the 4-volume of the space-time manifold, for which we choose a de Sitter space in the first approximation.
We have
S(g) = [c^3/(16 pi G)] INT_V R (-g)^(1/2) d^4x
where G is the gravitational constant, c is the speed of light, and the integral (INT) is taken over the 4-volume V with the matching volume element d^4x.
In the big bang model one takes V as the spatial volume enclosed by the particle horizon, bounded by the time span (t) of the universe. Thus, at any epoch t, for k = 0,
S(g) ~ t^(1/2)
The particle horizon radius is given by
r_H = 2ct
Einstein's gravitational equations (with the cosmological term included for the sake of generality) are
R_ik - (1/2) g_ik R = T_ik + lambda g_ik
where lambda denotes the cosmological constant. For de Sitter space it is equal to:
(n - 1)(n - 2)/(2 a^2)
where a is a scale factor and n denotes the dimension (4) of the volume under consideration. R_ik is the Ricci tensor.
Now, for S(g) ~ t^(1/2), R (the scalar curvature) = 0, so S(g) = 0.
However, this happens because the energy-momentum tensor T_ik has trace zero in the early, radiation-dominated universe. The 'trace' is the sum of the diagonal elements of a tensor, e.g.
Tr(M) = 0
where M =
[0 1 0 ]
[0 -1 0 ]
[0 0 1 ]
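A one-line check of that example (a trivial sketch using NumPy):

import numpy as np

M = np.array([[0,  1, 0],
              [0, -1, 0],
              [0,  0, 1]])
print(np.trace(M))   # 0, since the diagonal elements 0 + (-1) + 1 sum to zero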
With S(g) << h, this means the conditions must definitely favor acausal determinism, NOT classical behavior - including classical causality.
Wear also alludes to a “sequence of oscillations” (ibid.) but this is egregious, since there will be no oscillations, as the universe is not only forever expanding but accelerating in its expansion.
Universes that re-collapse (decelerate), expand forever with zero limiting velocity, or expand forever with positive limiting velocity (accelerate) are called 'closed' (curvature k = +1), 'critical' (k = 0), and 'open' (k = -1), respectively.
Now, to determine whether any F-R-W (Friedmann-Robertson-Walker) cosmological template leads to deceleration or not, we need to find the cosmic density parameter:
OMEGA = rho / rho_c
where the denominator rho_c is the critical density. Thus, if:
rho > rho_c
then the cosmic density is able to reverse the expansion (i.e. decelerate it) and conceivably usher in a new cycle (a new Big Bang, etc.). The observations that help determine how large OMEGA is come mainly from observing galaxy clusters in different directions in space and obtaining a density estimate from them.
Current data, e.g. from the Boomerang balloon experiment and satellite detectors, show that the matter contribution is OMEGA ~ 0.3, or that:
rho = 0.3 (rho_c)
That is, rho < rho_c, so there is no danger of the cosmos decelerating.
Precision measurements of the cosmic microwave background (CMB), including data from the Wilkinson Microwave Anisotropy Probe (WMAP), have recently provided further evidence for dark energy. The same is true of data from two extensive projects charting the large-scale distribution of galaxies - the Two-Degree Field (2DF) and Sloan Digital Sky Survey (SDSS).
The curves from other data, plotting corrected apparent magnitude against redshift (z), give different combinations of OMEGA_dark and OMEGA_matter over the range. However, only one of the combinations best fits the data:
OMEGA_dark = 0.65 and OMEGA_matter = 0.35
corresponding to an expansion that has been accelerating for roughly the last 6 billion years, with much more dark energy involved (~0.65) than ordinary matter.
When the predictions of the different theoretical models are combined with the best measurements of the cosmic microwave background, galaxy clustering and supernova distances, we find that:
0.62 < OMEGA_dark < 0.76,
where OMEGA_dark = rho_dark/rho_c, and -1.3 < w < -0.9 (w being the dark energy equation-of-state parameter).
In tandem, the numbers show unequivocally that dark energy is the acceleration agent, and in addition that dark energy comprises the lion’s share of what constitutes the cosmos (~ 73%).
In addition, all of this data is firmly backed up by earlier Boomerang (balloon) data that – when plotted on a power spectrum- discloses two adjacent ‘humps’ one a bit higher than the other. The “first acoustic peak” and the “second acoustic peak” fit uncannily to the sort of spherical harmonic function that describes a particular plasma condition. In this case, one that conforms to the supernova-derived values of OMEGA (d, m). (See: ‘Balloon Measurements of the Cosmic Microwave Background Strongly Favor a Flat Cosmos’, in Physics Today, July 2000, p. 7 and 'Supernovae, Dark Energy and the Accelerating Universe', by Saul Perlmutter, in Physics Today, April, 2003, p. 53)
Lastly, astronomers make no “claim” that galaxies are moving apart with increasing velocities. We have actual data that this is so, and it’s based on the basic physics of the Doppler effect.
Lab (rest) lines:   -----------------------! L1 -------! L2----
Observed lines:     ---!----------!------------------
                       L1(o)      L2(o)
Thus, in the above pictograph, lines L1(o) and L2(o) are the observed, redshifted (by some number of nanometers) spectral lines for some distant object such that:
v = cz
where v denotes the velocity of recession, c is the speed of light, and z is the redshift:
z = {L2(o)/ L2} - 1
Note again that L2 is the (lab-emission) standard line wavelength and L2(o) the observed line wavelength. If z > 0 we say the line is Doppler redshifted and the object is receding.
To illustrate, say the hydrogen alpha line (emitted at 656.3 nm, e.g. L2 = 656.3 nm) is redshifted in some distant object to 666 nm (L2(o)). Then we have:
z = 1.015 – 1.000 = 0.015
This translates to a recessional velocity: v = (3 x 10^8 m/s)(0.015) = 4.5 x 10^6 m/s
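The same arithmetic as a short Python sketch (an illustration; wavelengths in nanometers, c hard-coded):

# Doppler redshift z = (observed/emitted) - 1, recession velocity v = c*z
# (the non-relativistic approximation used in the example above).
C = 3.0e8   # speed of light, m/s

def redshift(observed_nm, emitted_nm):
    return observed_nm / emitted_nm - 1.0

z = redshift(666.0, 656.3)   # H-alpha shifted from 656.3 nm to 666 nm
v = C * z
print(f"z = {z:.4f}, v = {v:.2e} m/s")
# Prints z = 0.0148 and v ~ 4.4 x 10^6 m/s; rounding z to 0.015 gives the
# 4.5 x 10^6 m/s quoted above.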
As to Wear’s claim that it “may be difficult to place credence in such observations over a comparatively brief interval of time”, perhaps, but this “brief interval” is all we have to work with. What, will he dismiss all our painstakingly obtained data (including from the new CERN Large Hadron Collider) because they were obtained over brief intervals? This isn’t the way a Realist works, but it is certainly the modus operandi for an Idealist.
Labels:
Boomerang balloon,
curvature,
dark energy,
dark matter,
Ricci tensor,
WMAP
Saturday, December 13, 2008
An Invitation to God-template?
As religiosity heats up again approaching Christmas, a number of new "God" books - and some not so new- have come to my attention. One of these has been the book: ‘No One Sees God’ by Michael Novak, 1994 winner of the Templeton Prize.
Novak's is one of those books an atheist sometimes reaches for in the hope of seeing whether a Christian can forge a detailed apologia for his position. Also, whether an atheist can be greeted by something more than bombast, threats or venom for not kowtowing to the mainstream God-addiction.
For the first thirty or so pages, until the author veers into palpable atheist baiting (‘Not the Way to Invite a Conversation’, using the "New atheist" books of Dawkins, Dennett, and Harris as templates) it was a pretty good read. One sees a reasonable and rational mind at work, and one not afraid to admit that atheists may have something in their favor - for at least pushing lazy Christians to examine issues and aspects of their faith. (But, of course, to me - Christianity is not one monolithic faith, but a patchwork of about 70 different sects - from the largest, Roman Catholicism - to which Novak belongs, to the smallest Science of Mind enclaves)
Indeed, it reminded me of the sort of dialectic content that often permeates arguments with my longtime Christian friend, John Phillips. E.g. the author’s claim (p. 17) that he hypothesizes that:
“unbelievers, especially those who have never known religion in their personal lives, or who have had bad experiences with it, experience a revulsion against a reasoned knowledge of God”.
Of course, as my friend John noted and recalled, I have not had such nice experiences of religion either - going back to a nun teacher in first grade pressing a hot needle (which she’d just heated with a match) into my right palm to remind me “of Hell”.
“Remember boy, the fires of Hell are a million times hotter than this and they burn you inside and out! Don’t ever forget it!”
Well, I didn’t, and thus began my journey to rational atheism as I note in my recent book, Atheism: A Beginner’s Handbook.
But to assert, as Novak does, that this might have elicited “a revulsion against reasoned knowledge of God” is to miss the interpretation, and by a country mile.
First, there can be no such thing as “reasoned knowledge” of anything until one has first provided the ontology. Ontology, the basis for primary existence, comes BEFORE knowledge (epistemology) and not after. Thus, all Novak’s later citations of the classics describing God and citing the likes of Aristotle and St. Augustine, do him no service. Rather, such diversionary passages merely show that Novak himself has no clue about who or what this “God” is, he can only say what It isn’t.
Thus, despite waxing long and hard (in reply to the three atheist authors, Dawkins, Harris and Dennett) in Chapter Three (‘Letter to an Atheist Friend’) he fails to make his case that his entity is anything more than a will-o-the-wisp centered in his fertile imagination, or his temporal lobes (as Michael Persinger’s work showed, see my review of his ‘The Neuropsychology of God Belief’) . He himself reinforces his own deficiencies when he admits (p. 274, Epilogue):
“The only knowledge of God we have through reason, all the ancient thinkers have taught us, is dark - and by the via negativa - that is, by reasoning from what God cannot be”
True enough! But Novak is not afraid, despite this candid admission, to make all manner of positive statements about God’s nature. A few samples off-hand:
p.196:
“God must be more like human consciousness, insight, a sense of humor, good judgment…”
“God knows well the creatures He made…he has to beat us around the ears a bit”
“In the end it was important for God that his son (who is one with him) became human and dwelt on the Earth”
The Trinity: p. 197,
“to think of God as a Trinity is to think of Him as more like an intimate communion of persons than as a solitary being”.
p. 198:
“When everything is suffused with reason, that is the presence of God”
How exactly are any of these, metaphorical or not, statements via negativa- the dark way? They aren’t! Especially the egregious and misplaced reference to “God’s son” (p. 196) which assumes there is ample evidence that a possible 1st century charismatic Jewish rabbi was a genuine God-man. He wasn’t. He was a confabulation of Christians who felt compelled to copy and imitate the earlier pagan Mithraists’ god-man fables. (E.g. Mithras was born of a virgin, performed miracles, died on a cross...etc. then rose from the dead.)
Novak’s statements comprise an assortment of positive claims the author proffers about an entity he really doesn’t know, because he hasn’t provided any ontology.
Now let’s get into some ontology here. First, following Bertrand Russell’s lead (‘The Problems of Philosophy’) we need to specify the practical and operative laws that apply to existents and entities, under the general rubric of “being”. (Thus, to be most accurate here, when an atheist agrees to debate a Christian, he is only agreeing to the presupposition of “being”. It remains to be worked out or proven, what the exact nature of this being is.)
By “existent” we mean to say that which has prior grounding in the mind, albeit not yet demonstrably shown in reality.
For example, the number ‘2’.
If the number 2 is mental, it is essentially a mental existent. (Do you see a literal ‘two’ lurking in the outside world, apart from what the human mind assigns, e.g. two apples, two oranges, two beetles, etc.?)
Such existents are always particular.
If any particular exists in one mind at one time it cannot exist in another mind at any time or the same mind at a different time. The reason is that as time passes, the neural sequence and synapses that elicited the previous “existent” at that earlier time, no longer exists. My conceptual existent of “2” at 3.30 a.m. this morning is thus not the same as my conceptualization of it at 4 p.m. It may APPEAR so, but rigorous neural network tests will show it is not. (E.g. differing brain energies will be highlighted at each time)
Thus, ‘2’ must be minimally an entity that has “being” regardless of whether it has existence.
Now, we jump into the realm of epistemology from here, with the next proposition:
Generalizing from the preceding example, ALL knowledge must be recognition, and must be of entities that are not purely mental but whose being is a PRECONDITION- and NOT a result- of being thought of.
Applying this to the ontology of “non-contingent creator”, it must be shown it exists independently of being thought of. (E.g. there must be a way to declare and isolate its independent existence from the constellation of human brains which might get tempted to confabulate it out of innate brain defect or emotional need)
Here’s another way to propose it: If one demands that this entity (G-O-D) is not susceptible to independent existence, and therefore the mere announcement or writing of the words confers validity, then the supposed condition has nothing to do with reality. It is like my averring that we all live inside a 12-dimensional flying spaghetti monster. I would be laughed into oblivion, especially as I incur no special benediction (as you do) by invoking the G-noun.
In effect, if the proposed “non-contingent creator” or its single word equivalent isn’t subject to independent existence, then its alleged “truth” is separated from verification. Truth then becomes what is communicated to us by proxy (or proxy vehicle, e.g. Pope Benedict, the Bible, Novak or any other Xtian apologist) with the existent (or a metaphor) in the mind of the communicator who deems himself qualified to make the “truth” exist.
But such a “truth” (or any associated invocation of “reason” in its service) is fraudulent and cannot be a valid expression of the condition. What it means is there is little assurance the communicated secondary artifact has all the elements and particulars needed to be an affirmed REAL entity. The truth is dispensed according to our needs (in this case the need to believe humans are looked after by a Cosmic Daddy) - all we need ignore is the constellation of evidence that refutes it.
Second: How to escape from this ontological problem?
Logicians have been aware for centuries of the pitfalls of appealing to pure causes, or to generic causality. We see this in multiple places in Novak’s book such as on page 217 where he takes Dan Dennett to task for what is claimed to be spurious invocation of causes and causality:
“Besides, Dennett interprets the cause of a cause as if both were the same, like one turtle on another”
This sort of causal approach is exactly what makes discussions sterile because it invites dead ends and ambiguity. By contrast, as Robert Baum notes in his text, Logic, 4th Edition, causal explanations are only of limited utility because of the intuitive, non-systematic nature of causal inference. Not only are we confronted with multiple types of cause but also proximate and remote causes. For example, a collision of a comet with a large meteoroid in space may be the proximate cause of the comet’s shifting orbit enough for its nucleus to collide with Earth. However, another collision – say of a large asteroid with Earth- may be engendered by the remote cause of the YORP effect.
For this reason, it is far more productive to instead reframe causes into necessary and sufficient conditions. As Baum notes (p. 469), this is advisable because the term ‘cause’ has been too closely associated in most people’s minds with a “proximate efficient” cause - like one billiard ball hitting another and sending it into a side pocket, or one small asteroid hitting another and sending it into an Earth-intersecting orbit.
In these terms, whatever “secondary causes” or other causes pertain to G-O-D are irrelevant and do not advance the arguments. We therefore put these causal references aside as Baum recommends, and substitute for them necessary and sufficient conditions for the claimed existent. If someone is unable to provide these, then either he doesn’t know what he is talking about, or has engendered a fantasy creation or phantasmagoria in his own brain which he now offers to us (non-believers) as reality.
Let’s now review what these n-s conditions are. A necessary condition is one, without which, the claimed entity cannot exist. A sufficient condition is one which, if present, the entity must exist.
For example, consider a hydrogen emission nebula. The necessary condition is that the nebula, or interstellar cloud of hydrogen, exist in the first place. The sufficient condition for the existence of a hydrogen emission nebula in space would be the proximity of the nebula to a radiating star. In this case the star’s radiation excites the hydrogen atoms in the nebula (electrons jump to higher energy levels, then fall back to lower ones with the emission of photons).
Leaving out all the fluff about “first causes” or “secondary causes” we therefore simply ask: What are the necessary and sufficient conditions for a “God” of the type Novak proposes to exist?
Note, that in having to explicate these, Novak also is compelled to show how his God varies from all the other God- concepts proposed. This also disallows the cavalier statement he makes in his Epilogue where he conflates the Judeo-Christian God with that of deists. E.g., page 274:
“If there is a God such as Jews, Christians and deists have held there to be…..”
But wait! The Deist’s God is NOT the same as that of Jews or Christians! Strictly speaking, Deism treated in its orthodox and traditional form is not Theism. Deism is, in fact, only one step removed from atheism. The only real difference is that in deism some kind of non-specific "first cause" is proposed, but after that all distinctions collapse. The atheist avers there is no one or nothing "minding the store" and so does the deist.
Deism, to give an analogy, is analogous to a child who makes a toy with a gear wheel, and the toy has the ability to move after being wound up and released. Thus, the child makes the toy (he's a clever kid) winds it up, releases it down the sidewalk, then walks away never to glance at it or its final outcome, destination. In this case, the child plays an analogous role to the ambiguous first cause of deism and the toy is analogous to the universe.
One of the most egregious arguments of Novak is when he avers (p. 267) that:
“The trouble is that atheism is a leap in the dark and not a rational alternative. No one can possibly prove a negative or know enough to be certain there is no God”
But none of the atheists I know, myself included, DO THAT! So what Novak has succeeded in doing is inventing his own straw man atheist or tackling dummy and nothing more. Let’s take his ‘leap in the dark’ tag. Not at all! What we (rational atheists) say is that in the absence of YOU proving your claim (or at least giving us the necessary and sufficient conditions for it), we are maintaining a position of non-investment of our mental or emotional resources and energies in it. This is the conservative and natural position for a true rationalist to adopt.
In a similar vein, if a neighbor tells me he has alien ghosts inhabiting his attic, I would also withhold investment of any emotional or intellectual commitment to it. Until he could provide me with some empirical justification or evidence, I am fully entitled to ignore his claim as possessing any remote connection to reality. Likewise for Novak’s claim of a deity no matter how many descriptive metaphors he can dredge up for it. Minus those n-s conditions he is merely promoting phantasm as reality like the alien ghost believing neighbor. True, on a vaster perhaps more sublime scale, but in the end the same category of Macguffin.
As to ‘proving a negative” no atheist does that either. We simply maintain the conservative “show us, we’re from Missouri” outlook. Thus, we regard the improbability of a deity that is invisible and governs and designs the cosmos as about the same as one trillion alien ghosts from Tau Ceti inhabiting the DC Beltway.
The error of latter day Christian obscurantists like Novak is in placing the onus of proving a negative on us, when in fact the onus is on him to prove his existent is substantive and not his own mental confection. Failing any hard evidence, say like a video recording for an alien ghost or apparition, the next best thing is to give us those necessary and sufficient conditions signed, sealed and delivered. As it is, what we end up with (in Novak's book) is his own creation of deity as a mirror image of his mental self-representation.
There are so many other egregious assertions or claims that pervade the book, such as atheism allowing the meme that “everything is permitted”, or that purpose must be exposed via science, that it would require a whole other book to deal with them. So let me end this review with the case of Novak’s daughter, who we are told, p. 42: “decided atheism cannot be true because it is self-contradictory”.
He then goes on to expostulate that: “atheists want all the comforts of rationality that emanates from rational theism but without any personal indebtedness to any Creator, Governor, Judge.."
And so Novak’s daughter concludes it is more reasonable to believe there is a God than to withhold investment pending any evidence demonstrating it is more than a munchkin of her mind. And let us recall here the words of astrophysicist Carl von Weizsäcker: “It is impossible to understand rationally a God in whom one did not believe already”.
First, Novak’s daughter is wrong to accept that we define everything as “chancy” or “absurd”. Indeed, there is within the cosmos a domain of natural law regularity within which our deterministic mechanics can work very well. It is only when one moves on to the quantum theory (quantum mechanics) that hard prediction becomes dicey. Our job, as rationalists and scientists, is not to succumb to metaphysics or overeager mystics, but to try to show how much of our cosmos CAN be understood in terms of known laws and how much cannot. Thus, on setting the limits for rational enquiry we set limits on our own penchant for mental eruptions and inventions. We use our empirical methods, whatever they are, to put curbs on our human penchant to inflate reality.
What we do know now, compliments of our most modern advances in infrared and microwave astronomy, is that the assay of the cosmos holds little room for what we recognize as “order”. The latest results from the Boomerang balloon survey and the Wilkinson Microwave Anisotropy Probe disclose that darkness pervades 93% of the cosmos, whether in the form of dark matter (23%) or dark energy (70%). We humans represent the emergence of a rational brain to fathom these mysteries, compliments of natural selection on one small planet and the probable beneficial intervention of a large asteroid 65 million years ago which wiped out our prime competition.
Second, NONE of these rational modes or methods was earned via “comfort” or given to us compliments of the god-mongers or Christians. We had to earn our rational results (say those showing that dark energy behaves like a plasma that fits a spherical harmonic distribution) step by step through patience and many errors, and the final ability to reach our goals. Thus, we owe absolutely nothing to any “judges” or mystical “creators’ inhabiting the mind of Novak, or anyone else. To say so is to attempt to validate science’s service in the perpetuation of never-ending mumbo-jumbo, theological dogmas and superstitious phantasmagorias.
I am a rational atheist, proud of it, and beholden for it to no fictitious entity that inhabits the wayward neurons of someone’s brain.
The good news here is that this is one of the better “God books” in circulation. The bad news is that it’s not anywhere near as good as James Byrne’s “GOD” (Continuum, 2001). But then again, after reading Tipler’s fulsome ‘Physics of Christianity’ it was a relief of sorts!
Novak's is one of those books an atheist sometimes reaches for in the hope of seeing whether a Christian can forge a detailed apologia for his position. Also, whether an atheist can be greeted by something more than bombast, threats or venom for not kowtowing to the mainstream God-addiction.
For the first thirty or so pages, until the author veers into palpable atheist baiting (‘Not the Way to Invite a Conversation’, using the "New atheist" books of Dawkins, Dennett, and Harris as templates) it was a pretty good read. One sees a reasonable and rational mind at work, and one not afraid to admit that atheists may have something in their favor - for at least pushing lazy Christians to examine issues and aspects of their faith. (But, of course, to me - Christianity is not one monolithic faith, but a patchwork of about 70 different sects - from the largest, Roman Catholicism - to which Novak belongs, to the smallest Science of Mind enclaves)
Indeed, it reminded me of the sort of dialectic content that often permeates arguments with my longtime Christian friend, John Phillips. E.g. the author’s claim (p. 17) that he hypothesizes that:
“unbelievers, especially those who have never known religion in their personal lives, or who have had bad experiences with it, experience a revulsion against a reasoned knowledge of God”.
Of course, as my friend John noted and recalled, I have not had so nice experiences of religion either – going back to a nun teacher in first grade pressing a hot needle (she’d just heated using a match) into my right palm to remind me “of Hell”.
“Remember boy, the fires of Hell are a million times hotter than this and they burn you inside and out! Don’t ever forget it!”
Well, I didn’t, and thus began my journey to rational atheism as I note in my recent book, Atheism: A Beginner’s Handbook.
But to assert, as Novak does, that this might have elicited “a revulsion against reasoned knowledge of God” is to miss the interpretation, and by a country mile.
First, there can be no such thing as “reasoned knowledge” of anything until one has first provided the ontology. Ontology, the basis for primary existence, comes BEFORE knowledge (epistemology) and not after. Thus, all Novak’s later citations of the classics describing God and citing the likes of Aristotle and St. Augustine, do him no service. Rather, such diversionary passages merely show that Novak himself has no clue about who or what this “God” is, he can only say what It isn’t.
Thus, despite waxing long and hard (in reply to the three atheist authors, Dawkins, Harris and Dennett) in Chapter Three (‘Letter to an Atheist Friend’), he fails to make his case that his entity is anything more than a will-o'-the-wisp centered in his fertile imagination, or his temporal lobes (as Michael Persinger’s work showed; see my review of his ‘The Neuropsychology of God Belief’). He himself reinforces his own deficiencies when he admits (p. 274, Epilogue):
“The only knowledge of God we have through reason, all the ancient thinkers have taught us, is dark - and by the via negativa - that is, by reasoning from what God cannot be.”
True enough! But Novak is not afraid, despite this candid admission, to make all manner of positive statements about God’s nature. A few samples off-hand:
p. 196:
“God must be more like human consciousness, insight, a sense of humor, good judgment…”
“God knows well the creatures He made…he has to beat us around the ears a bit”
“In the end it was important for God that his son (who is one with him) became human and dwelt on the Earth”
The Trinity (p. 197):
“to think of God as a Trinity is to think of Him as more like an intimate communion of persons than as a solitary being”.
p. 198:
“When everything is suffused with reason, that is the presence of God”
How exactly are any of these statements, metaphorical or not, via negativa - the dark way? They aren’t! Especially the egregious and misplaced reference to “God’s son” (p. 196), which assumes there is ample evidence that a possible 1st-century charismatic Jewish rabbi was a genuine God-man. He wasn’t. He was a confabulation of Christians who felt compelled to copy and imitate the earlier pagan Mithraists’ god-man fables. (E.g., Mithras was born of a virgin, performed miracles, died on a cross, etc., then rose from the dead.)
The collection of Novak’s statements comprises an assortment of positive claims the author proffers about an entity he really doesn’t know, because he hasn’t provided any ontology.
Now let’s get into some ontology here. First, following Bertrand Russell’s lead (‘The Problems of Philosophy’) we need to specify the practical and operative laws that apply to existents and entities, under the general rubric of “being”. (Thus, to be most accurate here, when an atheist agrees to debate a Christian, he is only agreeing to the presupposition of “being”. It remains to be worked out or proven, what the exact nature of this being is.)
By “existent” we mean to say that which has prior grounding in the mind, albeit not yet demonstrably shown in reality.
For example, the number ‘2’.
If the number 2 is mental, it is essentially a mental existent. (Do you see a literal ‘two’ lurking in the outside world, apart from what the human mind assigns, e.g. two apples, two oranges, two beetles, etc.?)
Such existents are always particular.
If any particular exists in one mind at one time it cannot exist in another mind at any time or the same mind at a different time. The reason is that as time passes, the neural sequence and synapses that elicited the previous “existent” at that earlier time, no longer exists. My conceptual existent of “2” at 3.30 a.m. this morning is thus not the same as my conceptualization of it at 4 p.m. It may APPEAR so, but rigorous neural network tests will show it is not. (E.g. differing brain energies will be highlighted at each time)
Thus, ‘2’ must be minimally an entity that has “being” regardless of whether it has existence.
Now, we jump into the realm of epistemology from here, with the next proposition:
Generalizing from the preceding example, ALL knowledge must be recognition, and must be of entities that are not purely mental but whose being is a PRECONDITION- and NOT a result- of being thought of.
Applying this to the ontology of “non-contingent creator”, it must be shown it exists independently of being thought of. (E.g. there must be a way to declare and isolate its independent existence from the constellation of human brains which might get tempted to confabulate it out of innate brain defect or emotional need)
Here’s another way to put it: if one insists that this entity (G-O-D) is not subject to independent existence, and that the mere announcement or writing of the word therefore confers validity, then the supposed condition has nothing to do with reality. It is like my averring we all live inside a 12-dimensional flying spaghetti monster. I would be laughed into oblivion, especially as I incur no special benediction (as the believer does) by invoking the G-noun.
In effect, if the proposed “non-contingent creator” or its single word equivalent isn’t subject to independent existence, then its alleged “truth” is separated from verification. Truth then becomes what is communicated to us by proxy (or proxy vehicle, e.g. Pope Benedict, the Bible, Novak or any other Xtian apologist) with the existent (or a metaphor) in the mind of the communicator who deems himself qualified to make the “truth” exist.
But such a “truth” (or any associated invocation of “reason” in its service) is fraudulent and cannot be a valid expression of the condition. What it means is that there is little assurance the communicated secondary artifact has all the elements and particulars needed to be an affirmed REAL entity. The truth is dispensed according to our needs (in this case the need to believe humans are watched over by a Cosmic Daddy) – all we need ignore is the constellation of evidence that refutes it.
Second: How to escape from this ontological problem?
Logicians have been aware for centuries of the pitfalls of appealing to pure causes, or to generic causality. We see this in multiple places in Novak’s book such as on page 217 where he takes Dan Dennett to task for what is claimed to be spurious invocation of causes and causality:
“Besides, Dennett interprets the cause of a cause as if both were the same, like one turtle on another”
This sort of causal approach is exactly what makes discussions sterile because it invites dead ends and ambiguity. By contrast, as Robert Baum notes in his text, Logic, 4th Edition, causal explanations are only of limited utility because of the intuitive, non-systematic nature of causal inference. Not only are we confronted with multiple types of cause but also proximate and remote causes. For example, a collision of a comet with a large meteoroid in space may be the proximate cause of the comet’s shifting orbit enough for its nucleus to collide with Earth. However, another collision – say of a large asteroid with Earth- may be engendered by the remote cause of the YORP effect.
For this reason, it is far more productive to reframe causes as necessary and sufficient conditions. As Baum notes (p. 469), this is advisable because the term ‘cause’ has been too closely associated in most people’s minds with a “proximate efficient” cause - like one billiard ball hitting another and sending it into a side pocket, or one small asteroid hitting another and sending it into an Earth-intersecting orbit.
In these terms, whatever “secondary causes” or other causes pertain to G-O-D are irrelevant and do not advance the arguments. We therefore put these causal references aside as Baum recommends, and substitute for them necessary and sufficient conditions for the claimed existent. If someone is unable to provide these, then either he doesn’t know what he is talking about, or has engendered a fantasy creation or phantasmagoria in his own brain which he now offers to us (non-believers) as reality.
Let’s now review what these n-s conditions are. A necessary condition is one without which the claimed entity cannot exist. A sufficient condition is one whose presence guarantees that the entity exists.
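In standard logical shorthand (a textbook rendering of the distinction, not a quotation from Baum), with E the claimed entity and N, S the candidate conditions:

```latex
% N is necessary for E: E cannot obtain without N
E \Rightarrow N \qquad (\text{equivalently } \neg N \Rightarrow \neg E)
% S is sufficient for E: the presence of S guarantees E
S \Rightarrow E
% If a single condition C is both necessary and sufficient, then
C \Leftrightarrow E
```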
For example, consider a hydrogen emission nebula. The necessary condition is that an interstellar cloud of hydrogen exist in the first place. The sufficient condition for the existence of a hydrogen emission nebula in space would be proximity of the cloud to a hot, radiating star. In this case, the star’s radiation excites the hydrogen atoms in the nebula – electrons jump to higher energy levels, then fall back to lower ones with the emission of photons.
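As a concrete footnote to that emission step (a sketch using the standard Rydberg formula for hydrogen; nothing here comes from Novak's book), one can compute the wavelength of the photon released when an excited electron drops from level n = 3 to n = 2 - the familiar red H-alpha line of emission nebulae:

```python
# Wavelength of a hydrogen emission line from the Rydberg formula:
#   1/lambda = R_H * (1/n_low^2 - 1/n_high^2)
R_H = 1.0967758e7   # Rydberg constant for hydrogen, per meter

def hydrogen_line_nm(n_high: int, n_low: int) -> float:
    """Emitted wavelength in nanometers for the transition n_high -> n_low."""
    inv_wavelength = R_H * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_wavelength

# H-alpha: electron falls from n = 3 to n = 2, emitting a red photon (~656 nm)
print(round(hydrogen_line_nm(3, 2), 1))
```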
Leaving out all the fluff about “first causes” or “secondary causes” we therefore simply ask: What are the necessary and sufficient conditions for a “God” of the type Novak proposes to exist?
Note, that in having to explicate these, Novak also is compelled to show how his God varies from all the other God- concepts proposed. This also disallows the cavalier statement he makes in his Epilogue where he conflates the Judeo-Christian God with that of deists. E.g., page 274:
“If there is a God such as Jews, Christians and deists have held there to be…..”
But wait! The Deist’s God is NOT the same as that of Jews or Christians! Strictly speaking, Deism treated in its orthodox and traditional form is not Theism. Deism is, in fact, only one step removed from atheism. The only real difference is that in deism some kind of non-specific "first cause" is proposed, but after that all distinctions collapse. The atheist avers there is no one or nothing "minding the store" and so does the deist.
Deism, to give an analogy, is analogous to a child who makes a toy with a gear wheel, and the toy has the ability to move after being wound up and released. Thus, the child makes the toy (he's a clever kid) winds it up, releases it down the sidewalk, then walks away never to glance at it or its final outcome, destination. In this case, the child plays an analogous role to the ambiguous first cause of deism and the toy is analogous to the universe.
One of the most egregious arguments of Novak is when he avers (p. 267) that:
“The trouble is that atheism is a leap in the dark and not a rational alternative. No one can possibly prove a negative or know enough to be certain there is no God”
But none of the atheists I know, myself included, DO THAT! So what Novak has succeeded in doing is inventing his own straw-man atheist, or tackling dummy, and nothing more. Let’s take his ‘leap in the dark’ tag. Not at all! What we (rational atheists) say is that in the absence of YOU proving your claim (or at least giving us the necessary and sufficient conditions for it), we are maintaining a position of non-investment of our mental or emotional resources and energies in it. This is the conservative and natural position for a true rationalist to adopt.
In a similar vein, if a neighbor tells me he has alien ghosts inhabiting his attic, I would also withhold investment of any emotional or intellectual commitment in that. Until he could provide me with some empirical justification or evidence, I am fully entitled to dismiss his claim as possessing no remote connection to reality. Likewise for Novak’s claim of a deity, no matter how many descriptive metaphors he can dredge up for it. Minus those n-s conditions he is merely promoting phantasm as reality, like the alien-ghost-believing neighbor. True, on a vaster, perhaps more sublime scale, but in the end the same category of MacGuffin.
As to ‘proving a negative” no atheist does that either. We simply maintain the conservative “show us, we’re from Missouri” outlook. Thus, we regard the improbability of a deity that is invisible and governs and designs the cosmos as about the same as one trillion alien ghosts from Tau Ceti inhabiting the DC Beltway.
The error of latter-day Christian obscurantists like Novak is in placing the onus of proving a negative on us, when in fact the onus is on him to prove his existent is substantive and not his own mental confection. Failing any hard evidence, say a video recording for an alien ghost or apparition, the next best thing is to give us those necessary and sufficient conditions signed, sealed and delivered. As it is, what we end up with (in Novak's book) is his own creation of deity as a mirror image of his mental self-representation.
There are so many other egregious assertions or claims that pervade the book, such as atheism allowing the meme that “everything is permitted”, or that purpose must be exposed via science, that it would require a whole other book to deal with them. So let me end this review with the case of Novak’s daughter, who we are told, p. 42: “decided atheism cannot be true because it is self-contradictory”.
He then goes on to expostulate that: “atheists want all the comforts of rationality that emanates from rational theism but without any personal indebtedness to any Creator, Governor, Judge.."
And so Novak’s daughter concludes it is more reasonable to believe there is a God than to withhold investment pending any evidence demonstrating it is more than a munchkin of her mind. And let us recall here the words of astrophysicist Carl von Weizsäcker: “It is impossible to understand rationally a God in whom one did not believe already.”
First, Novak’s daughter is wrong to assume that we define everything as “chancy” or “absurd”. Indeed, there is within the cosmos a domain of natural-law regularity within which our deterministic mechanics works very well. It is only when one moves on to quantum theory (quantum mechanics) that hard prediction becomes dicey. Our job, as rationalists and scientists, is not to succumb to metaphysics or overeager mystics, but to try to show how much of our cosmos CAN be understood in terms of known laws and how much cannot. Thus, in setting the limits for rational inquiry we set limits on our own penchant for mental eruptions and inventions. We use our empirical methods, whatever they are, to put curbs on our human penchant to inflate reality.
What we do know now, compliments of our most modern advances in infrared and microwave astronomy, is that the assay of the cosmos holds little room for what we recognize as “order”. The latest results from the BOOMERanG balloon survey and the Wilkinson Microwave Anisotropy Probe (WMAP) disclose that darkness pervades 93% of the cosmos, whether in the form of dark matter (23%) or dark energy (70%). We humans represent the emergence of a rational brain to fathom these mysteries, compliments of natural selection on one small planet, and the probable beneficial intervention of a large asteroid 65 million years ago which wiped out our prime competition.
Second, NONE of these rational modes or methods was earned via “comfort” or given to us compliments of the god-mongers or Christians. We had to earn our rational results (say those showing that dark energy behaves like a plasma that fits a spherical harmonic distribution) step by step, through patience, many errors, and the final ability to reach our goals. Thus, we owe absolutely nothing to any “judges” or mystical “creators” inhabiting the mind of Novak, or anyone else. To claim we do is to press science into the service of perpetuating never-ending mumbo-jumbo, theological dogmas and superstitious phantasmagorias.
I am a rational atheist, proud of it, and beholden for it to no fictitious entity that inhabits the wayward neurons of someone’s brain.
The good news here is that this is one of the better “God books” in circulation. The bad news is that it’s not anywhere near as good as James Byrne’s “GOD” (Continuum, 2001). But then again, after reading Tipler’s fulsome ‘Physics of Christianity’, it was a relief of sorts!
Friday, December 5, 2008
The Financial Black Hole
"This has become essentially the dark matter of the financial universe" - comparing it to the dark matter discovered in astrophysics.”
Chris Wolf, hedge fund operator, quoted in FORTUNE, October 7.
"The big problem is there are so many public companies- banks and corporations, and no one really knows how much exposure they have to CDS (credit default swap) contracts."
Morgan Stanley derivatives salesman (Frank Partnoy) quoted in FORTUNE (ibid.)
The latest news in The Financial Times has not been encouraging as their headline ('Index Points to Record Default Threat', p. 13, Dec. 2) continues to warn of the unfolding crisis in CDS or "credit default swaps". As described by Chris Wolf, the "dark matter of the financial universe".
Following on from the FT article, alarm bells should be ringing as the Markit iTraxx Crossover index rose above 1,000 basis points for the first time since its creation, while in the U.S. the main credit default swaps indicator (for 125 companies) rose to 271 basis points.
Some of the world's leading investment grade companies now look to be in danger of default according to CDS prices.
What are these esoteric instruments and why are we at such risk from them, especially given their being embedded in the mortgage securities market? That is what I want to explore in this blog entry. This entails understanding what the associated term “toxic debt” means and how it factors into the unfolding economic catastrophe we behold. Almost all of it is tied up in these “credit default swaps”. The sum total of these esoteric financial “black holes” is now estimated to be no less than $55 TRILLION. (See, e.g., 'AIG's Complexity Blamed for Fall' in The Financial Times, Oct. 7, 2008, and 'The $55 TRILLION QUESTION', FORTUNE, October, p. 135.)
To comprehend why these CDSs comprise toxic debt we need to delve into some financial history - in particular, a move made in the 1980s known as “securitization”. Up until then, the banks were the primary holders of mortgage debt. With a government deregulating “green light”, however, banks were able to offload these mortgages (whose defaults always cost the banks dearly) to Wall Street. There, clever people gathered millions of mortgages from across the country and repackaged them into entities called “collateralized mortgage obligations” or CMOs.
These were then inserted into bond funds, which were sold to cautious investors as “safe” instruments. After all, bonds are supposed to be safer than stocks, right? Wrong! Individual bonds such as U.S. Treasurys are – by virtue of having the name and backing of the U.S. government behind them. But not bond funds, which can be loaded with all manner of financial tripe that can engender losses over the short or long term.
As an example, most bond funds in the 1980s and 1990s were loaded with IOs, or interest-only strips, as well as inverse floaters and CMOs (referred to as “toxic waste” in bond-trader parlance). The IOs pay only the mortgage interest. Inverse floaters, meanwhile, pay more when interest rates FALL than when they rise. All these tricks were used to try to juice up yields to lure investors - that, along with touting the funds as “government securities”, since legally speaking mortgage securities are “government-backed”. But that doesn’t mean your investment is FDIC-insured! In this way, the bond fund purveyors could get people to think they were making safe investments when nothing could be further from the truth.
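To see how such structures could juice up yield while hiding risk, here is a minimal sketch of an inverse floater's coupon; the cap and leverage figures are hypothetical, chosen only to illustrate the mechanism:

```python
# A generic inverse floater pays a coupon that moves opposite to a reference rate:
#   coupon = max(0, cap - leverage * reference_rate)
def inverse_floater_coupon(reference_rate: float,
                           cap: float = 0.10,
                           leverage: float = 2.0) -> float:
    """Annual coupon (as a decimal) for hypothetical cap/leverage terms."""
    return max(0.0, cap - leverage * reference_rate)

for rate in (0.02, 0.04, 0.06):
    print(f"reference rate {rate:.0%} -> coupon {inverse_floater_coupon(rate):.1%}")
# At 2% the fund pays a fat 6% coupon; at 6% the coupon vanishes entirely --
# the kind of yield "juicing" that looks safe only while rates are falling.
```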
My own wife was in one of these bond funds as part of a 401k “Life Cycle” fund about ten years ago. I noticed that, despite being in “bond funds”, she was losing more than $800 each quarter and getting no company match (because companies aren’t obliged to match in the case of losses). Upon further scrutiny, I discovered the bond funds were laden with IOs and inverse floaters as well as CMOs. I immediately had her exit the Life Cycle thing and put all her 401k money into fixed-income assets. Fortunately, she acted in time – as otherwise she likely would have lost more than 30% in the post-9/11 downturn.
We now move ahead to the late 1990s, and CMOs have transmogrified into CDOs (collateralized debt obligations) though the basic meaning is the same. Again, these represented millions of repackaged mortgages now sold as “securities” as part of bond funds.
Sometime in the early 2000s, a gaggle of “quants”- gifted mathematical types based in investment banks- got the idea for a creation that could juice up huge profits for their banks, and based on unregulated derivatives. Thus were born the “credit default swaps”. These were basically devised as “side bets” made on the mortgage securities market and the performance of the CDOs therein.
We all know what a “side bet” is. For example, if you travel to a Vegas Sports book, you will find you can not only make bets on a particular game, say Giants beating the Patriots in the Super Bowl- but also ancillary happenings to do with the game. For example, one can bet on: how many first downs the Giants will make in the first quarter, or how many sacks the NE defenders will make in the game, or how many rushing first downs a particular player will make – say Sammy Morris of the Pats. Any and all side bets are feasible.
In the case of the CDS realm, side bets were allowed on all sorts of things, such as whether particular CDOs would lose money, or the interest rate (average) on a segment of them would drop one half percent, or whether there would be at least 100,000 foreclosures in the third quarter of the financial year.
In the case of the credit default swap, all that was needed to make the bet formal was a counter-party. Thus, the “party” renders the bet and the amount wagered, and the counter-party takes the bet. The actual exchange, as already noted, was often done on cell phones and no formal records other than what the cell phone statement showed were available.
Now, the investment banks’ quants realized that the bets as such might not grab the interest of the mainstream banks they needed to buy into them. After all, the banks could LOSE on many of these bets and it would be to their unending detriment. Thus, the quants took the CDSs and repackaged them along with regular mortgage securities – with CDOs, into what they called “structured investment vehicles”. Or SIVs.
These were then sliced and diced and sold to the mainstream, Main Street banks as safe securities. To make this “kosher”, so to speak, bond rating agencies (such as Moody’s and Standard & Poor’s) were asked to give a bond rating – preferably the safest (AAA) – to signal to the mainstream banks that these were A-OK purchases.
Despite the fact that the rating agencies had not the faintest or foggiest clue what the SIVs contained, the instruments were blessed with top ratings and sold to the banks, and the banks happily bought them up unaware of what was actually in them. By 2003 the total of credit default swaps in the financial system was estimated to be around $6 trillion. By August of this year, it had reached $55 trillion.
That is, $55 trillion in hidden and subjective financial BETS buried in mortgage securities as SIVs, with no formal tracer available! Compare this now to a bona fide debt, such as a car loan or mortgage from an approved bank or mortgage loan company. Everything is spelled out in detail, so that even a person of average intelligence can see what he or she is getting into.
In the case of the mortgage, for example, a full amortization schedule (table) is available to show the monthly payments and the split between principal and interest. There is no guessing, no doubt. The debtor knows his obligations and what he has to do to make good on them.
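For contrast with the opacity of a CDS, here is a sketch of how such an amortization table is generated - the standard level-payment formula, with made-up loan figures purely for illustration:

```python
# Level-payment mortgage: payment = P * r / (1 - (1 + r)**-n),
# where P = principal, r = monthly rate, n = number of monthly payments.
def amortization_schedule(principal: float, annual_rate: float, years: int):
    r = annual_rate / 12.0
    n = years * 12
    payment = principal * r / (1.0 - (1.0 + r) ** -n)
    balance = principal
    rows = []
    for month in range(1, n + 1):
        interest = balance * r              # interest owed this month
        toward_principal = payment - interest
        balance -= toward_principal
        rows.append((month, payment, interest, toward_principal, max(balance, 0.0)))
    return rows

# Hypothetical loan: $200,000 at 6% for 30 years
first_month = amortization_schedule(200_000, 0.06, 30)[0]
_, payment, interest, principal_part, _ = first_month
print(f"Monthly payment: ${payment:,.2f}")                        # about $1,199.10
print(f"Month 1: ${interest:,.2f} interest, ${principal_part:,.2f} principal")
```

Every borrower can run exactly this arithmetic. No counterparty to a credit default swap could do anything comparable.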
By contrast, with the CDS (credit default swaps) nothing is known other than that the instrument has some subjective worth at one time. But HOW MUCH? TO WHOM? We have no clue since none of the esteemed quants who invented them knows where the “bodies are buried” so to speak! I am not even sure, if they were compelled to complete a typical ISO-9000 process form, they could replicate exactly HOW their esoteric instruments were created!
What we see here, and what should be abundantly evident even to the most hard-core libertarian ideologue, is that credit default swaps and the instruments into which they have been buried and disseminated are indeed “toxic waste” by any rational financial measure. I am not even sure one can call them “debt” – although to the banks that now have them on their books they represent humongous debt, since each quantity of these things lowers the value of the bank’s assets by some factor and increases its liability.
To make this more understandable I refer to the graphic at the top of this entry- which compares two banks with roughly the same volume of assets, but different equities – since one bank (A) has fewer CDS.
Now, since banks owning the instruments into which credit default swaps have been buried will not readily disclose their extent, it stands to reason that one bank – say Bank A – cannot know how much “bad debt” or how many “toxic assets” the other one has on its books. If this is the case, a bank with relatively higher equity (Bank A in Fig. 1, at top) will be unwilling to lend capital to a bank for which the toxic asset volume is unknown. After all, if it lends in good faith and the other bank then fails because of its higher CDS proportion, it will have only itself to blame.
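To make the comparison concrete, here is a small sketch of the same idea with invented balance-sheet numbers (they are not taken from the figure, only illustrative):

```python
# Two banks with identical stated assets and liabilities; Bank B holds far more
# CDS-laden SIVs. Equity = assets - liabilities, so any write-down of toxic
# holdings hits equity directly.
def equity_after_writedown(assets: float, liabilities: float,
                           toxic_holdings: float, recovery: float) -> float:
    """Equity once toxic holdings are marked down to 'recovery' cents on the dollar."""
    writedown = toxic_holdings * (1.0 - recovery)
    return (assets - writedown) - liabilities

# Hypothetical figures in billions; toxic assets marked to 20 cents on the dollar
bank_a = equity_after_writedown(assets=100.0, liabilities=90.0,
                                toxic_holdings=5.0, recovery=0.20)
bank_b = equity_after_writedown(assets=100.0, liabilities=90.0,
                                toxic_holdings=20.0, recovery=0.20)
print(f"Bank A equity: {bank_a:+.1f}B")   # +6.0B -- still solvent
print(f"Bank B equity: {bank_b:+.1f}B")   # -6.0B -- wiped out by the same mark
```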
It is this unknown which has directly engendered the current credit freeze. Because no bank knows the volume of CDS any other holds, it cannot know the extent of any other bank’s equity position or creditworthiness. Thus the LIBOR rate – the London Interbank Offered Rate, a measure of bank-to-bank lending confidence – has recently exploded. It most certainly will not begin to go down, reflecting higher lending confidence, until some agent steps in and proceeds to buy all the CDS now on the banks’ books.
WHO is in the position to do this? Well, certainly no private entity has the resources! The only one is the government, and more specifically the quasi-governmental entity known as the Federal Reserve which can, if it must, create enough money by fiat to buy all the CDS and get rid of this toxic sludge once and for all.
Now, some libertarians will no doubt exclaim ‘Why not just do nothing?’ but in asking that they are clearly not cognizant of the degree of financial collapse that would precipitate from such folly. We are talking here of credit seizing up everywhere! No more money for student loans, at any price, no loans for businesses to meet payroll or plant improvement and you can forget about any expansions! No money for home construction, to purchase new cars, to do home renovations, to re-fi a mortgage……NADA! In effect, as Nobel Prize winning Princeton economist Paul Krugman has noted, one would usher in a Second Great Depression – and this one – by virtue of the global banking effects, would make the first look like the proverbial walk in the park.
Hence, bottom line, there is no option. The $55 trillion must be purged and it must be done before banking collapses proceed like falling dominoes. Now, no one is arguing here that full value must be paid for all those toxic assets, I mean even 20 cents on the dollar would be better than nothing – though even that would add $11 trillion to the existing bailout deficit. But doing nothing is not an option, and only the most financially obtuse, who have no remote clue of what is transpiring now, would even propose it.
Finally, in the wake of this catastrophe - which will probably take four to five more years to unfold- it is clear that all the credit default swaps which caused this mess need to be outlawed. Further, all derivatives, irrespective of where or how they are used, need to finally come under SEC regulation.
We cannot afford another event like the credit default swap mess, ever again! For stock investors, the future will be bloody bleak as Stephen Roach noted in his 'Market & Investing' FT column three days ago. As he notes the "post-bubble" world will see a very anemic recovery. Not any of this market bursting forth back up to where it was within months, or even years....possibly decades.
In the long deleverage process, with all asset bubbles punctured, no fund or stock will be able to jack up yield by using tricks such as stacking investments with derivatives, IOs or other crap. People simply won't buy them. In the future investment world the snake-bitten will reach only for what is understandable and transparent.
In a way this is a good thing since the stock market was always designed more as a financial casino for the wealthy, not for ordinary Jacks and Janes to park their precious retirement money.
Monday, November 10, 2008
Faith-Based Climate Models? (II)
Peter Huber in his article ('Faith-Based Models', FORBES, Oct. 27, p. 105) continues his skeptical diatribe by writing:
"Some then try to deal with the fact that more cloud cover will reduce the amount of inbound sunlight that reaches the surface and also boost the amount of heat radiated back intio space from above the clouds and so on and so forth".
But as already noted (previous installment), how cloud cover acts depends on the TYPE of cloud! As Prof. Gale Christianson ('Greenhouse') has noted:
"wispy high flying cirrus are semi-transparent to incoming sunlight but block infrared radiation emitted by the Earth thus CONTRIBUTING to the Greenhouse Effect”
Thus, there is no real problem here other than what Huber has created. Reinforcing this, the authors of the paper 'Can Earth’s Albedo and Surface Temperature Increase Together?' (EOS, Vol. 87, No. 4, Jan. 24, 2006, p. 37) have emphasized that:
"whereas low clouds have decreased during the most recent years, high clouds have increased to a larger extent leading to both an increase in cloud amount AND an increased trapping of infrared radiation"
Thus, high altitude cloud cover abets infrared radiation trapping and contributes to the global greenhouse.
Huber then disingenuously refers to having to parse "millions of lines of terribly complex computer code" - but this isn't necessary to ascertain the effects of the cloud cover. What IS necessary is satellite data from a range of meteorological satellites covering the entire Earth. One can, believe it or not, scan said data and see how the variables compare without writing "millions of pages of computer code".
When I assembled my own solar data (sunspot group area, magnetic intensity, solar flare occurrence) in 1980 for my first paper ('SID Flares and Sunspot Morphology', Solar Physics, Vol. 88, Nos. 1-2, Oct. 1983), I could easily see how the data were trending, and the extent of the correlations, even before the first multivariate analysis was done on the university's IBM computer. But it is in Huber's interest to portray the task of ascertaining real global warming as some horrendously vast, complex undertaking accessible only to certain high priests of climate science.
Huber then writes:
"And then it ends with a great leap back to simplicity. The atmosphere grew somewhat warmer in the 20th century. How do we know that human carbon emissions were the cause? Supposedly because the models are scientifically sound, they can't track the temperature changes back to volcanoes, solar variations or any other natural cause so the cause must be us."
Again, disingenuous! As readers may recall, Mount Pinatubo erupted in 1991. This volcanic event reduced the global warming effect for 2-3 years afterward, and this has been well documented in numerous sources, as any googling foray will show. Thus, a period of temperature change (a decrease) has been tracked to a specific volcanic event.
More recently, we are aware of much of the worst heating from global warming being concealed by the phenomenon of global dimming. The effect was first spotted by Gerry Stanhill, an English scientist working in Israel. Comparing Israeli sunlight records from the 1950s with current ones, Stanhill was astonished to find a large fall in solar radiation. "There was a staggering 22% drop in the sunlight, and that really amazed me," he says.
Intrigued, he searched out records from all around the world, and found the same story almost everywhere he looked, with sunlight falling by 10% over the USA, nearly 30% in parts of the former Soviet Union, and even by 16% in parts of the British Isles. Although the effect varied greatly from place to place, overall the decline amounted to 1-2% globally per decade between the 1950s and the 1990s.
The most alarming aspect of global dimming is that it may have led scientists to underestimate the power of the greenhouse effect. While it is known how much extra energy has been trapped in the Earth's atmosphere by the extra carbon dioxide (CO2), what is surprising is that it has so far translated into a temperature rise of just 0.6°C.
The most worrisome aspect, as a 2004 PBS documentary of the same title showed, is that once the aerosols and pollutants spawning the dimming are removed, the heating of Earth may attain unprecedented proportions of more than 5°C in a century.
Thus, the Earth getting only "somewhat warmer" - Huber's sarcastic phrase - is precisely a consequence of global dimming obscuring the most pronounced effects.
As for tracking temperature changes back to variations on the Sun, this has also been done and quite extensively.
In fact, an exhaustive series of studies of temperature - sunspot number correlations has already been done, and they are listed in the monograph 'Sun, Weather & Climate' by John R. Herman and Richard A. Goldberg (Dover, 1978, p. 127, Table 3.5).
A total of eight periods are listed under column three, with their correlation coefficients, which include:
1891 - 1917 (-0.44)
1870 - 1918 (-0.33)
1893 - 1924 (-0.25)
1888 - 1920 (-0.24)
1892 - 1920 (-0.38)
1862 - 1920 (-0.33)
1867 - 1923 (-0.46)
1871 - 1920 (-0.38)
Note that all entries exhibit a negative correlation coefficient, indicating an inverse relationship between sunspot number and temperature. Meanwhile for Period 2 (Column 4) the Table shows:
1925 - 1957 (-0.10)
1921 - 1954 (+0.21)
1926 - 1954 (+0.32)
1921 - 1947 (+0.16)
1921 - 1950 (-0.29)
1921 - 1953 (+0.24)
1924 - 1953 (+0.10)
1921 - 1950 (+0.23)
These coefficients mostly disclose positive correlation, the exceptions being the first and fifth entries from the top. The authors note (cf. p. 128) that for the entire data set "the correlation coefficient for annual temperature and sunspot number (11-yr. cycle) was -0.38 up to 1920, but for the period 1921-1950 the correlation had reversed and the coefficient was +0.23".
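For readers curious about what lies behind such a table, here is a sketch of the underlying calculation - the standard (Pearson) correlation coefficient - applied to made-up annual sunspot and temperature series; the numbers are illustrative only, not Herman and Goldberg's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Made-up annual sunspot numbers and temperature anomalies (deg C)
sunspots = [45, 110, 64, 30, 140, 95, 20, 75]
temps    = [0.10, -0.05, 0.02, 0.12, -0.10, 0.00, 0.15, 0.03]
print(round(pearson_r(sunspots, temps), 2))   # negative, like the pre-1920 entries
```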
I maintain these results are totally consistent with the hypothesis of anthropogenic global warming. In their respective papers seeking to mitigate human responsibility for global warming, S. I. Akasofu, and earlier Sallie Baliunas and Willie Soon, argued the opposite, but they forgot - or neglected to factor in - the roughly 100-year delay time for CO2 deposition and retention in the atmosphere. Thus, inputs at the onset of the industrial revolution, ca. 1845, would not manifest significantly until 100 years later.
And indeed, we see the inversion of averaged correlation coefficients from -0.38 (up to 1920) to +0.23 (up to 1950), a total net change of +0.61 in the positive direction, which - squared - corresponds to some 37% of the total variability explained. It is clear, certainly to me, that Akasofu, Baliunas and Soon have drawn exactly the wrong conclusion from their respective results. Indeed, in the latter pair’s paper (Fig. 1), showing the IPCC data and the temperature rise of 0.4°C between 1910 and 1940, the accumulation of CO2 in the atmosphere is surely taken into account. Their conclusion that “the 'CO2' signal does not commence until 1940” is precisely what is in error.
This sort of error - neglecting the time delay for signal exposure - is not unique and has been made many times by professionals who should know better, as well as by students I have taught. (Though I can understand it more readily in the latter. It is just ironic that it is made in a paper purporting to overturn, or at least dilute, the IPCC results.)
In addition to the preceding work, solar physicist John Eddy, who made long-term solar variations connected to climate change his research specialty, noted the period of 12th-century warming in his book ‘The New Solar Physics’ (AAAS Selected Symposium, Westview Press, 1979, p. 17).
Eddy noted that this coincided with a period of higher solar activity (i.e. more sunspots) and possibly greater luminosity – on account of the fact that the irradiance is amplified around sunspots owing to redirection of convective heat flow. (Bear in mind the plasma in spots is at lower temperatures, by about 1500C, because of the powerful magnetic fields in them).
During solar cycle 20 – when I also conducted investigations on solar flares and their effects- the then Solar Max satellite used an active cavity radiometer to measure temperature increases arising from higher activity – especially as generated by more convection at the periphery of large spots. The differential was something on the order of 0.1C at the Sun! Since the radiant energy must now transit 150 million kilometers, and its intensity falls off as the inverse square, one can see this would translate into negligible increases at Earth.
What about longer period increases in solar luminosity associated with its possibly being a variable star – as opposed to sporadic sunspot outbursts?
The maximal magnitude of inherent solar-induced climate variability was probably first highlighted by Sabatino Sofia et al in their paper 'Solar Constant: Constraints on Possible Variations Derived from Solar Diameter Measurements' (Science, Vol. 204, p. 1306, 1979). Their estimate was a change in solar irradiance of roughly 0.1% averaged over each solar cycle. (Irradiance is a measure of the energy per square meter received from the Sun.)
Thus – if the solar irradiance at Earth (the solar constant) is normally about 1360 watts/m^2, this would imply an increase of roughly 1.36 W/m^2. The problem is that there is no observational evidence to support this in the warming period of the 12th century, or at any time in the past century – when global warming spiked to serious levels. (Some, like Sofia, have argued that even if it had occurred, it would only engender a temperature increase of perhaps one-fourth of one degree, significantly less than what has been documented.)
More recent space-based observations appear to show a variation in solar irradiance of at least 0.15% over the standard 11-year solar cycle. (E.g. Parker, E.N., Nature, Vol. 399, p. 416). However, even with this higher percentage ascribed to solar changes, the heating effect is nowhere near comparable to that induced from man-made global warming. (See, e.g. Martin I. Hoffert et al, in Nature, Vol. 401, p. 764).
As the authors in the latter study point out, the heating component arising from greenhouse gas emissions from 1861-1990 amounted to anywhere from 2.0 to 2.8 watts per square meter. The solar variability component detected over the same period amounted to 0.1 to 0.5 watts per square meter. Thus, even the MAXIMUM solar variability amounted to only a fraction (25%) of the MINIMUM power input from human-induced greenhouse warming!
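The arithmetic behind that comparison is simple enough to lay out explicitly - a sketch using only the figures quoted above, with nothing new assumed:

```python
# Compare the quoted solar-variability figures with the quoted greenhouse forcing (1861-1990).
solar_constant = 1360.0                           # W/m^2, as used above
variation_0p10 = 0.0010 * solar_constant          # 0.10% per cycle (Sofia et al.)
variation_0p15 = 0.0015 * solar_constant          # 0.15% from later space-based data

solar_forcing_max = 0.5    # W/m^2, upper end quoted from Hoffert et al.
ghg_forcing_min   = 2.0    # W/m^2, lower end quoted from Hoffert et al.

print(f"0.10% of the solar constant = {variation_0p10:.2f} W/m^2")   # ~1.36
print(f"0.15% of the solar constant = {variation_0p15:.2f} W/m^2")   # ~2.04
print(f"Max solar / min greenhouse forcing = {solar_forcing_max / ghg_forcing_min:.0%}")  # 25%
```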
Thus, we see how on all these points, Huber is proven wrong. We can indeed track temperature changes to natural events on Earth (e.g. volcanoes) as well as solar variations, and we see that the magnitudes of these are not enough to account for higher temperatures on Earth, but can conceal the most aggravated and enhanced effects (as in the case of volcanic eruptions and global dimming).
Huber throughout his piece refers to skeptic scientist Richard Lindzen of MIT, but who is Lindzen after all? He is a meteorologist (not specifically a CLIMATE scientist - climate scientists typically take longer views) who was one of 100 signers of a petition to the effect that man-made global warming is a "fallacy". What is never said is that these pitiful 100 contrarians are a minuscule fraction of the more than 20,000 working climate scientists who have published more than 15,000 papers validating the phenomenon of anthropogenic warming over the past ten years.
Indeed, the largest scientific organization on the planet - the American Geophysical Union - includes its position statement on human-induced warming as part of its public policy web page:
http://www.agu.org/sci_soc/policy/positions/climate_change2008.shtml
Another suspicious "petition" that has made the rounds is the so-called "Oregon Petition". (I actually received a question about it on the All Experts site.) The questioner made reference to the "17,000 names" on the petition, all allegedly scientists, but he or she evidently never took note that nearly all the names were fake - names like "Perry Mason" and "John Grisham" et al, none of whom were scientists, none of whom were willing signees. The name at the front was "Edward Teller", but everyone knows he is no climate scientist, and his most recent accomplishment was promoting the specious "SDI" or 'Strategic Defense Initiative' in the 1980s - where high-powered lasers and particle beam weapons were to be mounted on satellites to shoot down ICBMs. All nonsense, shown to be bunkum by the American Physical Society's 'Directed Energy Weapons' study (Physics Today, May, 1987).
Lastly, Huber insists that: "Few college graduates, let alone school children, have the scientific background to think critically about any of this".
This is more codswallop. Merely because I myself can't process or validate a process, phenomenon or engineering device doesn't necessarily mean I reject its use or refuse to accept it as what it is claimed to be. I have no remote notion of how robotics works to produce something like the Japanese animatronic robot (which dances and performs human gestures), but that doesn't mean I am skeptical to the point of thinking it's a fake and there's really a little man inside a suit!
Nonetheless, any college graduate can certainly look for certain key attributes when approaching something like global warming claims, and look for these criteria:
1- How many sources does it come from? If only one source, of course one must suspect the claims! If from hundreds of peer-reviewed papers, there is a powerful reason to accept the claims.
2- What is the pedigree of the sources? If they appear in such fare as the Journal of Geophysical Research, this is a powerful commendation for validity; if in a private organization's web paper (like that of the George C. Marshall Institute, a free-market think tank), it isn't. Why? Because the latter is operated by economists and lobbyists and dedicated to an ideological viewpoint that has not been vetted by SCIENCE.
3- What proportion of experts agree to the claims, as opposed to the contrarians who do not? In the case of Lindzen and his cohort of 100, this is a tiny fraction of the 20,000 climate scientists worldwide who had a hand in formulating the AGU position statement shown earlier.
In the end, no "faith" in man-made global warming is being promoted or needed, contrary to Huber's take. One can, motivated by sufficient intelligence and curiosity, obtain enough climatology papers and - with a basic background in general physics - ascertain for oneself that global warming is real, and occurring.
It is now past time for the skeptics to cease their nonstop recitation of red herrings vis-a-vis anthropogenic global warming. As writer Terry Black put it in his recent Mensa Bulletin article, 'Never Trust A Skeptic' (Nov.-Dec., p. 32):
"It (global warming) has been repeatedly confirmed by the International Panel on Climate Change and the Nobel Prize Committee among many others. A purist might say 'it's still not proven', not beyond doubt. But I've reached the point where I've stopped doubting climate change and begun doubting the doubters, who seem ill-informed and intellectually dishonest. Only an idiot still doubts the Holocaust, or the moon landing. I submit that global warming is equally well-established."
Right on, Mr. Black!
2-
"Some then try to deal with the fact that more cloud cover will reduce the amount of inbound sunlight that reaches the surface and also boost the amount of heat radiated back intio space from above the clouds and so on and so forth".
But as already noted (previous instalment) how cloud cover acts depends on the TYPE of cloud! As Prof. Gale Christianson ('Greenouse') has noted:
"wispy high flying cirrus are semi-transparent to incoming sunlight but block infrared radiation emitted by the Earth thus CONTRIBUTING to the Greenhouse Effect”
Thus, there is no real problem here other than what Huber has created. Reinforcing this the authors of the paper ('Can Earth’s Albedo and Surface Temperature Increase Together’ in EOS, Vol. 87, No. 4, Jan. 24, 2006, p. 37) have emphasized that:
"whereas low clouds have decreased during the most recent years, high clouds have increased to a larger extent leading to both an increase in cloud amount AND an increased trapping of infrared radiation"
Thus, high altitude cloud cover abets infrared radiation trapping and contributes to the global greenhouse.
Huber then disingenously refers to having to parse "millions of lines of terribly complex computer code" -but this isn't necessary to ascertain the effects of the cloud cover. However, satellite data from a range of meteorological satellites covering the entire Earth is! One can, believe it or not, scan said data and see how the variables compare without doing "millions of pages of computer code".
When I prepared my own solar data (sunspot group area, magnetic intensity, solar flare occurrence) in 1980 in preparation for my first paper ('SID Flares and Sunspot Morphology', in Solar Physics, Vol. 88, Nos. 1-2, Oct. 1983) I could easily see how the data was trending and the extent of the correlations even before the first multivariate analysis was done on the university's IBM computer. But it is in Huber's interest to portray the task of ascertaining real global warming as some horrendously vast, complex task accessible only to certain high priests of climate science.
Huber then writes:
"And then it ends with a great leap back to simplicity. The atmosphere grew somewhat warmer in the 20th century. How do we know that human carbon emissions were the cause? Supposedly because the models are scientifically sound, they can't track the temperature changes back to volcanoes, solar variations or any other natural cause so the cause must be us."
Again, disingenuous! As readers may recall, Mount Pinatubo erupted in 1992. This volcanic event reduced global warming effect for up to 2-3 years after and this has been well-documented in numerous sources, as any goolging foray will show. Thus, a period of temperature change (decrease) has been tracked to a specific volcanic event.
More recently, we are aware of much of the worst heating from global warming being concealed by the phenomenon of global dimming. The effect was first spotted by Gerry Stanhill, an English scientist working
in Israel. Comparing Israeli sunlight records from the 1950s with current
ones, Stanhill was astonished to find a large fall in solar radiation.
"There was a staggering 22% drop in the sunlight, and that really amazed
me," he says.
Intrigued, he searched out records from all around the world, and found the
same story almost everywhere he looked, with sunlight falling by 10% over
the USA, nearly 30% in parts of the former Soviet Union, and even by 16% in
parts of the British Isles. Although the effect varied greatly from place to
place, overall the decline amounted to 1-2% globally per decade between the
1950s and the 1990s.
The most alarming aspect of global dimming is that it may have
led scientists to underestimate the power of the greenhouse effect.
While it's known how much extra energy has been trapped in the Earth's atmosphere
by the extra carbon dioxide (CO2), it's surprising is that it has so far translated to a temperature rise of just 0.6°C.
The most worrisome aspect, as a PBS docmentary (2004) by the same title showed, is that once the aerosols and pollutants spawning dimming are removed, the heating of Earth may attain unprecedented proportions of more than 5C in a century.
Thus, Huber's sarcastic reference to the Earth getting "somewhat warmer" is precisely because of global dimming obscuring the most pronounced effects.
As for tracking temperature changes back to variations on the Sun, this has also been done and quite extensively.
In fact, an exhaustive series of studies of temperature - solar sunspot number correlations have already been done and they are listed in the monograph 'Sun, Weather & Climate', by John R. Herman and Richard A. Goldberg, Dover, 1978, p. 127 - Table 3.5)
A total of eight periods are listed under column three, with their correlation coefficients, which include:
1891- 1917 (-0.44)
1870 - 1918 (-0.33)
1893 - 1924 (-0.25)
1888 - 1920 (- 0.24)
1892 - 1920 (-0.38)
1862 - 1920 (-0.33)
1867- 1923 (-0.46)
1871 - 1920 (-0.38)
Note that all entries exhibit a negative correlation coefficient, indicating an inverse relationship between sunspot number and temperature. Meanwhile for Period 2 (Column 4) the Table shows:
1925 - 1957 (- 0.1)
1921 - 1954 (+0.21)
1926 - 1954 (+0.32)
1921 - 1947 (+0.16)
1921 - 1950 (- 0.29)
1921 - 1953 (+0.24)
1924 - 1953 (+0.10)
1921 - 1950 (+0.23)
These coefficients mostly disclose positive correlation, the exception being the 6th entry from top. The authors note (cf. p. 128) that for the entire data set "the correlation coefficient for annual temperature and sunspot number (11-yr. cycle) was -0.38 up to 1920, but for the period 1921- 1950 the correlation had reversed and the coefficient was +0.23".
I maintain these results are totally consistent with the hypothesis of anthropogenic global warming. In their respective papers seeking to mitigate human responsbility in global warming, S.I. Akasofu, and earlier Sallie Baliunas and Willie Soon argued the opposite, but they forgot - or neglected to factor in - the 100 year delay time for CO2 deposition and retention in the atmosphere. Thus, inputs at the time of the onset of the industrial revolution, ca. 1845, would not manifest significantly until 100 years later.
And indeed, we see the inversion of averaged correlation coefficients from -0.38 (up to 1920) to +0.23 up to 1950, a total net change of +0.61 in the positive direction, which can take into account a total variability (explained by it) of some 36% (the total change - squared). It is clear, certainly to me, that Akasofu, Baliunas and Soon have drawn exactly the wrong conclusion from their respective results. Indeed, in the latter’s paper ( Fig. 1), showing the IPCC data and the temp. rise of 0.4C between 1910-1940, the accumulation of CO2 in the atmosphere is surely taken into account. Their conclusion that “the 'CO2' signal does not commence until 1940” is precisely what is in error.
This sort of error, neglecting time delay for signal exposure, is not unique and has been made many times by professionals who should know better. As well by students I have taught. (Though I can understand it more plausibly in the latter. It is just ironic that it is made in a paper purporting to overturn or at least dilute the IPCC results.)
In addition to the preceding work, Solar physicist John Eddy, made it his research specialty to study long-term solar variations connected to climate change, noted the period of 12th century warming in his book, ‘The New Solar Physics’, AAAS Selected Symposium, Westview Press, 1979, p. 17.
Eddy noted that this coincided with a period of higher solar activity (i.e. more sunspots) and possibly greater luminosity – on account of the fact that the irradiance is amplified around sunspots owing to redirection of convective heat flow. (Bear in mind the plasma in spots is at lower temperatures, by about 1500C, because of the powerful magnetic fields in them).
During solar cycle 20 – when I also conducted investigations on solar flares and their effects- the then Solar Max satellite used an active cavity radiometer to measure temperature increases arising from higher activity – especially as generated by more convection at the periphery of large spots. The differential was something on the order of 0.1C at the Sun! Since the radiant energy must now transit 150 million kilometers, and its intensity falls off as the inverse square, one can see this would translate into negligible increases at Earth.
What about longer period increases in solar luminosity associated with its possibly being a variable star – as opposed to sporadic sunspot outbursts?
The maximal magnitude of inherent solar -induced climate variability was probably first highlighted by Sabatino Sofia et al in their paper 'Solar Constant: Constraints on Possible Variations Derived from Solar Diameter Measurements', in Science, Vol. 204, 1306, 1979. Their estimate was a solar change in irradiance of roughly 0.1 % averaged over each solar cycle. (Irradiance is a measure of the energy per square meter received from the Sun).
Thus – if the solar irradiance effect at Earth (solar constant) is normally about 1360 watts/m^2, this would imply an increase of roughly 1.36 W/m^2.. The problem is that there is no observational evidence to support this in the warming period of the 12th century, or any time in the past century – when global warming spiked to serious levels. (Some like Sofia have argued that even if it had occurred, it would only engender a temp. increase contribution of perhaps one-fourth of one degree, or significantly less than what has been documented.
More recent space-based observations appear to show a variation in solar irradiance of at least 0.15% over the standard 11-year solar cycle. (E.g. Parker, E.N., Nature, Vol. 399, p. 416). However, even with this higher percentage ascribed to solar changes, the heating effect is nowhere near comparable to that induced from man-made global warming. (See, e.g. Martin I. Hoffert et al, in Nature, Vol. 401, p. 764).
As the authors in the latter study point out, the heating component arising from greenhouse gas emissions from 1861-1990 amounted to anywhere from 2.0 to 2.8 watts per square meter. The solar variability component detected over the same period amounted to 0.1 to 0.5 watts per square meter. Thus, even the MAXIMUM solar variability amounted to only a fraction (25%) of the MINIMUM power input from human-induced greenhouse warming!
Thus, we see how on all these points, Huber is proven wrong. We can indeed track temperature changes to natural events on Earth (e.g. volcanoes) as well as solar variations, and we see that the magnitudes of these are not enough to account for higher temperatures on Earth, but can conceal the most aggravated and enhanced effects (as in the case of volcanic eruptions and global dimming).
Huber through his piece refers to skeptic scientist Richard Lindzen of MIT, but who is Lindzen after all? He is a meterologist (not specifically a CLIMATE scientist - who typically take longer views) who was one of 100 signers of a petition to the effect man-made global warming is a "fallacy". What is not said, ever, is that these pitiful 100 contrarians are a minuscule fraction of the more than 20,000 working climate scientists who have published more than 15,000 papers validating the phenomenon of anthropogenic warming over the past ten years.
Indeed, the largest scientific organization on the planet - the American Geophysical Union - includes its position statement on human-induced warming as part of its public policy web page:
http://www.agu.org/sci_soc/policy/positions/climate_change2008.shtml
Another suspicious "petition" that has made the rounds is the so-called "Oregon Petition". (I actually received a question about it on the all experts site). The questioner made reference to the "17,000 names" on the petition, all allegedly sicentists but he or she evidently never took note that nearly all the names were fake. Names like "Perry Mason", John Grisham" et al, none of whom were scientists, none of whom were willing signees. The name at the front was "Edwin Teller" but everyone knows he is no climate scientist and his most recent accomplishment was promoting the specious "SDI" or 'Strategic Defense Initiative' in the 1980s - where high-powered lasers and particle beam weapons were to be mounted to satellites to shoot down ICBMs. All nonsense, shown to be bunkum by the American Physical Society's 'Directed Energy Weapons' study (Physics Today, May, 1987)
Lastly, Huber insists that: "Few college graduates, let alone school children, have the sicentific background to think critically about any of this".
This is more codswallop. Merely because I myself can't process of validate a process, or phenomenon or engineering device doesn't necessarily mean I reject its use or accept it as what it is claimed. I have no remote notion of how robotics works to produce something like the Japanses animatronic robot (who dances and performs human gestures) but that doesn't mean I am skeptical to the point of thinking it's a fake and there's really a little man inside a suit!
Nonetheless, any college graduate can certainly look for certain key attributes when approaching something like global warming claims, and look for these criteria:
1- How many sources does it come from? If only one source, of course one must suspect the claims! If from hundreds of peer-reviewed papers, there is a powerful reason to accept the claims.
2- What is the pedigree of the sources? If they appear in such fare as the Journal of Geophysical Research, that is a powerful commendation for validity; but if in a private organization's web paper (like the George C. Marshall Institute, a free market think tank) it isn't. Why? Because the latter is operated by economists and lobbyists and is dedicated to an ideological viewpoint that has not been vetted by SCIENCE.
3- What proportion of experts agree with the claims, as opposed to the contrarians who do not? In the case of Lindzen and his cohort of 100, this is a tiny fraction of the 20,000 climate scientists worldwide who had a hand in formulating the AGU position statement shown earlier.
In the end, no "faith" in man-made global warming is being promoted or needed, contrary to Huber's take. One can, motivated by sufficient intelligence and curiosity, obtain enough climatology papers and - with a basic background in general physics - ascertain for oneself that global warming is real, and occurring.
It is now past time for the skeptics to cease their nonstop recitation of red herrings vis-a-vis anthropogenic global warming. As writer Terry Black put it in his recent Mensa Bulletin article, 'Never Trust A Skeptic' (Nov.-Dec., p. 32):
"It (global warming) has been repeatedly confirmed by the International Panel on Climate Change and the Nobel Prize Committee among many others. A purist might say 'it's still not proven', not beyond doubt. But I've reached the point where I've stopped doubting climate change and begun doubting the doubters, who seem ill-informed and intellectually dishonest. Only an idiot still doubts the Holocaust, or the moon landing. I submit that global warming is equally well-established."
Right on, Mr. Black!
Saturday, November 8, 2008
Faith-Based Climate Models? (I)
In a recent FORBES article (Oct. 27, 'Faith-Based Models', p. 105) columnist Peter Huber takes global warming models to task in a variety of ways. His general conclusion is that "outsider faith in global warming has to be grounded on trust in higher authority, disconnected from any critical scientific reflection at all".
Of course, this dreck enables him to then go on to assert that "promoting that kind of faith is the exact opposite of what science teachers should be doing".
The truth, of course, is far different. That is, any reasonably intelligent (and curious!) person can convince himself that anthropogenic global warming is real by simply perusing the essential climate literature. It also helps to have at least some background in thermal physics.
Let's take a look at a few of Huber's complaints to see if any have merit. The first is what he claims is a standard drawing or graphic in schoolbooks. That is - sunlight entering the Earth's atmosphere warms its surface, but then after this heating, its longer (infrared) wavelengths are blocked on re-emission from the surface so that a greenhouse effect takes hold. Huber complains:
"In fact, direct radiation from the surface into outer space plays only a small role in cooling the Earth. Far more important, the chimney like motion of hot air and evaporated water that transfers heat from the surface into the atmosphere".
Of course, he is referring to convection in the atmosphere. However, I would not go so far as Huber in saying that the textbook graphic amounts to "miseducating children". What it means is that we employ an admittedly simplified cartoon (or ansatz) to convey the concept.
By adding in convection and its details, as well as cloud cover, etc., one would clutter the concept and make it overly complicated for a school child.
This is not unusual at all, and is done all the time in science. For example, when I gave astronomy courses at the Barbados Community College, one graphic I often used showed how energy was transferred from the Sun's core to the photosphere. A zig-zag path was used to represent the photon trying to emerge from the solar core (wherein it was absorbed and re-emitted in different directions) until finally getting to the convective layer, and thence to the photosphere from where it could depart into outer space as electro-magnetic waves.
In many respects, the energy transmission graphic is as simplified as the one for the greenhouse effect which Huber criticizes in grade school general science books. In fact, billions upon billions of photons are in transit, each taking an average of roughly one million years to get out of the solar core because of the absorptions and re-emissions (by core atoms) in different directions. But this more faithful depiction is clearly impossible to show, and would not in any way enhance the teaching of the underlying concept.
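For readers who want a feel for where escape times of that order come from, here is a back-of-envelope random-walk sketch in Python. The mean free path is an assumed, illustrative value; the commonly quoted escape times (tens of thousands of years up to about a million years) correspond to different choices of that single number.

# Back-of-envelope random-walk estimate of a photon's escape time from
# the solar interior. The mean free path is an ASSUMED illustrative value.
R_sun = 6.96e8          # solar radius, metres
c = 3.0e8               # speed of light, m/s
mean_free_path = 5e-5   # metres (assumed, ~0.05 mm)

# A random walk of step length l needs ~(R/l)^2 steps to cover a distance R,
# so the total path length is ~R^2/l and the escape time is ~R^2/(l*c).
t_seconds = R_sun**2 / (mean_free_path * c)
t_years = t_seconds / 3.156e7
print(f"Escape time ~ {t_years:.1e} years")   # ~1e6 years for this choice of l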
Huber himself then goes on to make a classic mistake when he avers:
"When not miseducating children, the climate modelers try to call a chimney a chimney. They recognize that cloud cover and water vapor eclipse carbon dioxide as the dominant greenhouse agents."
In fact, H2O vapor does not eclipse CO2 as a greenhouse agent. Even a tiny, minuscule amount of CO2 is vastly more efficient at blocking the re-radiation of energy - at CO2's own absorption bands - than any amount of water vapor. (See the NRC report published ca. 2001 that gives the relative W/m^2 forcing contributions of each greenhouse gas.) Part of the misconception arose because early researchers, lacking the current technology of infrared spectroscopy, assumed that water vapor bands already blocked out most of what would (ordinarily) be taken by CO2. (Cf. ‘The Discovery of the Risk of Global Warming’, by Spencer Weart, in Physics Today, Jan. 1997, p. 34).
As to cloud cover, a very useful reference here is the paper ‘Can Earth’s Albedo and Surface Temperature Increase Together?’ in EOS - Transactions of the American Geophysical Union (Vol. 87, No. 4, Jan. 24, 2006, p. 37).
As the authors note, though there is some evidence that Earth’s albedo (the ratio of the radiant flux reflected from a surface to that falling on it) increased from 2000 to 2004, this has NOT led to a reversal in global warming. The authors cite the most up-to-date cloud data released in August 2005 from the International Satellite Cloud Climatology Project (ISCCP). The data – from a range of meteorological satellites covering the entire Earth – disclose that the most likely reason for the anomaly lies primarily in the redistribution of the clouds.
Thus, as the authors point out (ibid.):
"whereas low clouds have decreased during the most recent years, high clouds have increased to a larger extent leading to both an increase in cloud amount AND an increased trapping of infrared radiation."
Prof. Gale Christianson in his book Greenhouse (Penguin, 1999, p. 203) notes that “stratus clouds are gray, dense and low flying and have a net COOLING effect since their albedo is relatively high”.
BUT these are precisely the cloud type that has receded in incidence, as the ISCCP data in the EOS article show! Now, what type has increased? Christianson again (ibid.):
“Conversely, wispy high flying cirrus are semi-transparent to incoming sunlight but block infrared radiation emitted by the Earth thus CONTRIBUTING to the Greenhouse Effect”
Again, the point I am making is that the use of the ansatz in school texts (to depict the global greenhouse) is nowhere near as bankrupt educationally as Huber makes it out to be.
More on this in the next instalment.
Monday, November 3, 2008
DARK ENERGY – A NEW LAW OF PHYSICS? (II)
In the previous article I stated that the dark energy - vacuum equation of state:
w = (p / rho) = -1
is consistent with Einstein's general theory of relativity - which one could say approaches the status of a 'basic law of physics'.
I now want to delve into more detail. Take the equation that defines cosmic expansion:
R''/R = -(4 pi G/3) rho (1 + 3w)
If we let w = -1/3, the whole right side becomes zero. That is, w = p/rho = -1/3 is the same as 3p = -rho, or (rho + 3p) = 0, and there is neither acceleration nor deceleration.
If instead p < -rho/3, the factor (1 + 3w) is negative and we have gravity that repels.
Looking back to the dark energy equation of state above, w = -1 gives p = -rho (i.e. pressure = minus the energy density), and since -rho < -rho/3 for any positive density, the condition for repulsion is comfortably met.
Specifically, the term (rho + 3p) acts as the source of gravity in general relativity (where rho is the energy density).
In this case, a negative pressure dovetails with general relativity's allowance for a "repulsive gravity" - since any negative pressure has associated with it gravity that repels rather than attracts.
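To make the sign logic concrete, here is a small Python sketch of the same acceleration equation; the density value is an assumed placeholder, since only the sign of the right-hand side is at issue.

# Minimal numerical sketch of R''/R = -(4*pi*G/3) * rho * (1 + 3*w),
# showing how the sign flips with the equation-of-state parameter w.
import math

G = 6.674e-11      # Newtonian gravitational constant (SI)
rho = 1.0e-26      # assumed order-of-magnitude cosmic density, kg/m^3

def acceleration_term(w):
    """Return R''/R for a component with equation of state w = p/rho."""
    return -(4.0 * math.pi * G / 3.0) * rho * (1.0 + 3.0 * w)

for w in (0.0, -1.0/3.0, -0.5, -1.0):
    a = acceleration_term(w)
    if abs(1.0 + 3.0 * w) < 1e-12:
        verdict = "no acceleration or deceleration"
    elif a < 0:
        verdict = "decelerating (ordinary attractive gravity)"
    else:
        verdict = "accelerating (repulsive gravity)"
    print(f"w = {w:+.2f}  ->  R''/R = {a:+.3e}  {verdict}")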
This being the case, we may assert that dark energy represents no "new law" of physics, but rather an extrapolation of an existing one. The core issue that still must be addressed is whether this relationship implies the need for a "cosmological constant" and if so, what magnitude it might be.
This brings us to the question: What’s to become of the cosmos if the acceleration is ongoing?
All this in tandem supports the prediction by many dark energy theorists that the cosmos will ultimately expand forever and yield to an ultimate heat death. All objects in the cosmos will be so far apart that no exchange of energy can occur and they will simply die out. Or, exhaust all heat sources and become cold, dead cinders. There simply isn't any agency to counter the accelerating force of dark energy to prevent it.
Will the discovery of the Higgs particle (boson) by the Large Hadron Collider cause us to reassess this end? Not really! Not unless there are also correlative data showing there is no basis to posit an accelerating expansion.
In the absence of that, the best tack for overturning the dark energy thesis is to find an alternative explanation for the two close peaks in the power spectrum - the peaks indicating that the plasma is subject to the dark energy equation of state. Thus, the current primacy of the dark energy thesis rests on a particular interpretation of those two power spectrum peaks, based on using the Legendre functions.
If anyone has evidence to the contrary, and can PROVE (using the same power spectrum and the Legendre functions) that it points to a collapsing universe instead, then go for it.
Thursday, October 16, 2008
Dark Energy - Evidence for a New Law of Physics?
Before 1998, few if any astronomers had heard of “dark energy”. Rather, “dark matter” had come to the fore with a series of articles in various periodicals and journals (e.g. Physics Today, 1992, Vol. 45, No. 2, p. 28, by S. Tremaine). Dark matter was acceptable to most of us because at least it could be understood easily at some level. After all, Fritz Zwicky in 1933 actually laid the original, observational basis for dark matter. His measurements of galaxy clusters highlighted a 'missing mass': he found that the mass needed to bind a cluster of galaxies together gravitationally was at least ten times the (estimated) apparent mass visible.
This mass, because it was inferred but not directly detectable, became the first dark matter. Around the same time there were other confirmations, based on observed stellar motions in the galactic plane by Dutch astronomer Jan Oort. He determined there had to be at least three times the mass visibly present in order for stars not to escape the galaxy and fly off into space.
By the time of Tremaine’s Physics Today article, it was estimated that at least 90% of the universe was in the form of dark matter, and barely 10% constituted visible matter – meaning that it either reflected radiation or emitted it at some wavelength. Many of these results issued from the data acquired by the Cosmic Background Explorer (COBE) satellite.
By 2000, this whole picture had radically changed and new assays for the mass-energy distribution for the universe had ordinary visible matter at only 7% of the total, with fully 93% a “dark component” - of which nearly 70% was dark or vacuum energy, the rest dark matter. (See, e.g. Physics Today, July, 2000, p. 17)
This was almost too much to take. Of course, as a physicist (in solar physics at that time) I’d been familiar with the claim of vacuum energy in various crank forums, or via e-mails from cranks. Most of them embraced the notion that empty space was replete with vacuum energy at an almost infinite density level – and accessible if one can only get to it. Free energy without the hassle of infrastructure!
In no way did anyone – even those astronomers least conversant with modern cosmology – expect the most distant objects to exhibit a slowing down and the closer ones a speeding up: indicating the expansion was accelerating and, worse, that a counter (repulsive) force to gravitation might be operating. Yet that is exactly what was implied when two separate groups found supernovae that were dimmer – and thus farther away – than they should have been.
But in science, meticulously obtained and plotted data seldom lie. And by early 1998, the Type Ia supernova results of two groups - the Supernova Cosmology Project (based at UC Berkeley) and the High-Z Supernova Search, led by Brian Schmidt of Mt. Stromlo Observatory in Australia - began to show tightening error bars.
Why Type Ia supernovae? First, because they’re bright enough to isolate in different galaxies – hence there’s a cosmological dimension. Second, they exhibit a uniform, consistent light spectrum and brightness decay profile (all supernovae diminish or ‘decay’ in brightness after the initial explosive event). This applies to all galaxies in which they appear, so they function as cosmic standard “candles”. Third, all Type Ia’s betray the same absorption feature at a wavelength of 6150 Angstroms (615 nm) - so they have the same spectral “fingerprint”.
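As a rough illustration of how a "standard candle" turns an apparent brightness into a distance, here is a Python sketch using the textbook distance-modulus relation; the peak absolute magnitude of about -19.3 is an assumed, representative calibration used only for illustration, not a value taken from the papers discussed here.

# Minimal "standard candle" sketch: if every Type Ia supernova peaks at
# roughly the same absolute magnitude M, its apparent magnitude m gives
# the luminosity distance via the distance modulus m - M = 5*log10(d/10 pc).
M_PEAK = -19.3  # assumed representative peak absolute magnitude (illustrative)

def luminosity_distance_mpc(apparent_mag, absolute_mag=M_PEAK):
    """Distance in megaparsecs implied by the distance modulus."""
    d_parsecs = 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return d_parsecs / 1.0e6

for m in (14.0, 19.0, 24.0):
    print(f"m = {m:4.1f}  ->  d ~ {luminosity_distance_mpc(m):8.1f} Mpc")
# Fainter (larger m) at fixed M means farther away; supernovae that come out
# dimmer than expected for their redshift are the signature of acceleration.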
(See Figure 1)
Basically, the majority of plotted Type Ia supernova data points congregated along the upper of the two plot lines shown in Figure 1. (One sample point is shown.) This placed them firmly in the region of the graph (of observed magnitude vs. red shift) we call the “accelerating universe”. On the other side of the diagonal is the "decelerating" region. An additional feature of the accelerating side is 'vacuum energy'.
While my first instinct was to reject the notion of vacuum energy, this didn’t withstand further examination. The bottom line is that the best fit to the supernova data indicates that the energy density of the vacuum translates into a repulsive force that can counter gravity’s attraction.
To get an insight, we can examine the equation that underpins cosmic expansion and whether it is accelerated or not (cf. Perlmutter, Physics Today, 2003):
R''/R = -(4 pi G/3) rho (1 + 3w)
Here R is the cosmic scale factor, R'' is its acceleration (i.e. the second derivative of R with respect to time t), G is the Newtonian gravitational constant, and rho is the mass-energy density. We inquire what value w must have for there to be neither acceleration nor deceleration. Basic algebra shows that when w = -1/3 the whole right side becomes zero. The supernova plot data constrain w such that it cannot have a value > (-1/2). Most plausibly, w, the ratio of pressure to density, is (Perlmutter, ibid.):
w = (p / rho) = -1
This is consistent with Einstein's general theory of relativity - which one could say approaches the status of a 'basic law of physics'. In this case, a negative pressure meshes with general relativity's allowance for a "repulsive gravity" - since any negative pressure has associated with it gravity that repels rather than attracts.
Some might argue that cosmic repulsion shows a "new law" of physics, but it's merely extending the existing concept of gravitation to show it has a repulsive as well as attractive aspect, and has always been consistent with Einstein's general theory of relativity.
This brings us to the question: What’s to become of the cosmos if the acceleration is ongoing? Clearly, photons emerging from whatever cosmic object (star, nebula etc.) can never catch up to the (too) rapidly expanding space-time. This means that over time, fewer and fewer objects will be visible to any sentient observers. Eventually, all cosmic objects will “vanish” from the scene and all observers – if any remain- will be plunged into featureless skies.
(To be continued)
Do Mom & Pop Really Belong in the Stock Market?
As the continued volatility in the stock market gets more attention, and many oldsters saving for retirement have already lost nearly 40% in their 401ks, the question arises: Do ordinary small fry, whose only money is being saved from nearly stagnant wages, belong in the stock market?
Of course, the endless parade of gurus and pundits of high finance (e.g. Jim Cramer of 'Mad Money' fame on CNBC, until a couple weeks ago) have always issued the same mantra: Just invest and "dollar cost averaging will take care of the rest". When the stocks and whatnot tank and you buy on the dips, you get more shares! What's not to love?
Actually a lot! As the authors of The Great 401(k) Hoax have noted, it is living in a fool's paradise to believe that, if the companies you invested in are only increasing their profits at 2-3% a year, you can be earning 7% or even 10%. In fact, what one has then is an aberration in which the gains are out of whack with reality. (This is one reason why real stock investors demand dividends, and refuse to forego them so fund companies etc. can use the money to do "stock buybacks" - thereby artificially inflating the share price!)
Further, a Stanford University study some years ago – based on the median return of 62 mutual funds – showed that $1 invested in 1962 would have grown to $21.89 by 1992, on a pre-tax basis. The study disclosed that the $1 would have grown to only $9.87 on an after-tax basis. And the investor would have had to come up with $12.02 to pay the taxes.
By contrast the study showed that a “conservative” investor who put assets into a U.S. Savings Bond in 1962, had every $1 become $10.93 by 1992. It is easy to work out from these numbers which investor actually fared better over the thirty-year interval according to the study. Hint- hint: it wasn't the sucker in stocks.
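The comparison is easy to verify with a few lines of Python, using only the growth multiples quoted above from the study.

# Turn each 30-year growth multiple (1962-1992) into an equivalent annual return.
multiples = {
    "mutual funds, pre-tax": 21.89,
    "mutual funds, after-tax": 9.87,
    "U.S. Savings Bond": 10.93,
}
years = 30
for label, growth in multiples.items():
    annual = growth ** (1.0 / years) - 1.0
    print(f"{label:>24}: $1 -> ${growth:5.2f}  (~{annual:.1%}/yr)")
# After tax, the fund investor's ~7.9%/yr trails the savings-bond
# holder's ~8.3%/yr over the same interval - as the study concluded.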
Unless mutual fund investors do the math and watch the numbers, they cannot be aware of how little they're actually taking home. (A point also made emphatically in The Wall Street Journal, Nov. 27, 2003, page D1, 'A Harsh Truth: Most of Your Investments Won't Make Money - Even in the Long Term', after assessing stocks, bonds and mutual funds.)
It is also well for small investors to understand that, to a large degree, they are in a game with a 'stacked deck'. Not only that, but under current laws their investments are almost entirely blind. Like buying a pig in a poke.
This point was emphasized in a London Financial Times article (‘A Metaphorical Proposal’, Mar. 13, 2002, p. 11A) by Michael Skapinker. He cited remarks by Joseph Berardino – chief exec of Arthur Andersen- who noted how the current reporting system “fails to communicate essential information about the real risks facing companies” to the small investor.
If you don't KNOW what you're getting into, how the hell can you have any confidence that you will get anything back? You can't!
Skapinker quotes Berardino as noting how accountants can only issue ‘pass’ or ‘fail’ judgments on companies – but cannot disclose the red ink being bled by a company that’s been passed. (What's referred to as a “bleeding edge” company wherein auditors are actually resigning). As the author notes, to do so would precipitate a collapse in share prices.
Under such conditions, the small investor risks his money and security by investing in ANY non-FDIC insured monetary device. Indeed, despite the bevy of risks and deceptions, some investors have been insane enough to take out 2nd mortgages to up the ante in the stock market, yet don't even know the role of NAV (Net Asset Value) in calculating net gains and losses.
In fact, the stock market's sole purpose, as E. Brockway observes (The End of Economic Man, 1990), is to steal capital from the poor or middle class (who can least afford losses) and give it to the rich. The technique hardly varies: pundits, wags and paid shills hype the various stocks or funds, or instigate a "buzz" about them - to get suckers to buy in.
The increasing buy-in inflates the price-to-earnings (P/E) ratio and produces a bubble of high profits. The "big boys" (large, institutional investors) get tipped 1-2 days in advance and cash out, leaving the little guys to sink. If they're lucky they may earn a few bucks. Not much.
The thievery works eventually because most manjacks are conditioned to "buy and hold" rather than fold when the share price dives below a certain threshold. (Which ought to be the tip off). Thus, there are always ample marks left at the end game to be properly fleeced. Amazingly, they're always ready to play the game again, and pile their newly saved up money in.
Lastly, NO American of any class or station (except possibly the super-rich who can afford stupid or reckless losses) has any business putting money into any investments at all unless s/he can pass a basic investment test with at least 75%. My own version - developed by myself and a financial advisor brother - includes the following questions (no googling, crib notes or texts!):
1) What is a P/E ratio?
2) What is the maximum tolerable expense ratio, beyond which an investor shouldn’t invest in a mutual fund?
3) Distinguish between front and back loads.
4) Joe has $10,000 to invest and the fund is front loaded at 5%. How much is he really investing? How much must he gain the first year to reach break-even? How much must his fund earn to achieve a REAL 5% gain? (Assume the expense ratio is 2%.) A worked sketch of this one follows the list.
5) Distinguish between bonds and bond funds.
6) How would you recognize collateralized debt obligations (CDOs) in a bond fund? Interest-only strips? Inverse floaters?
7) When investing in stocks, one of the worst tricks used by brokers or managers is collusion using ‘micro caps’ to keep their clients buying and selling stocks within a closed artificial market. (Source: License To Steal: The Secret World of Wall Street Brokers and the Systematic Plundering of the American Investor, page 211). Explain.
8) Small, individual investors in stocks are usually fleeced by brokers through “crossing”, “churning” and “parking”. Explain in turn how each of these would work.
9) WHY is the ADV form Part II essential before hiring a financial advisor? What key information therein would provide a sound basis for rejecting any FA?
10) Distinguish between money market accounts and money market funds. Why are the latter always riskier?
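For question 4, here is a hypothetical worked sketch in Python - illustrative only, and treating the expense ratio as a simple subtraction from the gross return.

# Worked sketch of question 4 (illustrative assumptions only).
investment = 10_000.00
front_load = 0.05       # 5% front load
expense_ratio = 0.02    # 2% annual expense ratio

actually_invested = investment * (1 - front_load)     # $9,500 goes to work
breakeven_gain = investment / actually_invested - 1   # gain needed just to get back to $10,000
target_value = investment * 1.05                      # a REAL 5% gain on the full $10,000
# Approximate the expense-ratio drag as a simple subtraction from gross return:
required_gross = target_value / actually_invested - 1 + expense_ratio

print(f"Actually invested:               ${actually_invested:,.2f}")
print(f"Gain needed to break even:       {breakeven_gain:.2%}")    # ~5.26%
print(f"Gross return for a real 5% gain: {required_gross:.2%}")    # ~12.5%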
I seriously doubt 9 out of 10 middle or lower income Americans (particularly seniors putting their hopes in the market for retirement) could answer as many as five of the above correctly. And assuming that is so, it's a damned good thing "only 17 percent of households in the bottom 60 percent of income own any taxable stock."
If stocks aren't the answer, what are? Slow and unsexy saving! Then, take that savings by age 65 (say maybe $300,000) and parcel it into separate immediate fixed annuities to provide a safe, dependable income stream over your lifetime, as opposed to a variable money stream - based on the phantom money in the stock market.
As finance columnist Humberto Cruz has complained - and as I do now - for some incomprehensible reason Americans would rather play the stock casino than go with dependable immediate fixed annuities. (Variable annuities are not even on the table; as Suze Orman noted in her MONEY magazine interview, they are "stupid" and a waste of time.) Combined with Social Security, a set of immediate fixed annuities is the sane and sensible answer to funding retirement today.
You certainly won't be rich, but then you won't have to sustain a diet of Alpo and Ramen noodles either, or work until you drop dead in your few years remaining on the planet!
Tuesday, October 14, 2008
Enough Already!
In each presidential election cycle it appears that the hyperbole, distortions and outright lies get exponentially worse than in the previous cycle. And so it is now. The latest claptrap being pushed by the right wing pod-people and pundits is that....lo and behold....POOR people are to blame for the credit crunch!
The gist of it, launched by Rush Limbaugh some weeks ago, is that the "Community Reinvestment Act" is to blame by opening the doors to poor folks (mainly African-American, of course) who didn't have the capital or means to own in the first place. And hence, the genesis of the sub-prime meltdown.
Thus, the story goes, the CRA opened its doors unscrupulously to any manjack that wanted to own a home or needed a mortgage. This take has since been exploited by the likes of Neil Cavuto of Faux News to assert (in mid-September) that if banks hadn't been "forced to lend to minorities and risky folks" the Wall Street disaster would never have happened.
George Will went one better in a column, claiming banks were held hostage to legislation similar to the CRA, which supposedly criminalized as discrimination any refusal to make mortgage loans to unproductive borrowers.
This is all nonsense.
First, a few facts, which for the Right usually cause their assorted neurons to melt down:
1) The CRA only applies to banks that get federal insurance, which excludes 75% of those that made the sub-prime loans.
2) No clause, provision or code exists anywhere in the Act which requires any bank to make a sub-prime loan to any borrower. Indeed, 180 degrees opposite, the Act calls on banks in the needy communities to make loans "consistent with the safe and sound operation" of the lending institution.
3) Contrary to other Limbaugh-esque nonsense, a number of studies have shown that CRA recipients pay their bills on time and ultimately become successful homeowners. Thus, the claim by the right's blowhards - that the CRA unleashed millions of deadbeats - is pure bollocks.
So, if neither the CRA nor ACORN (the Association of Community Organizations for Reform Now) is responsible for the housing implosion and credit crunch, then who or what is?
Okay, the "who" are the quants, a gaggle of quantitatively-gifted brainiacs who were unable to get decent paying work in their native mathematics or physics professions, and sought higher remuneration in the halls of finance - usually in investment banks like the late Bear Stearns and Lehman Bros.
These quants devised, configured and invented a whole slew of obscure financial instruments, such as derivatives bearing the name of "credit default swaps", which were sliced, diced, then repackaged into "collateralized debt obligations" (CDOs) and resold to banks, which repackaged them with other financial crappola and sold them as SIVs or "structured investment vehicles".
These things are now lying around on the books of thousands of banks, wreaking havoc on their equity and making interbank lending impossibly risky, because no bank knows how much of this toxic crap any other bank holds. Thus a loan - any loan - would be fraught with peril to the lending bank even if IT has high liquidity and is properly capitalized.
Thus, the "what" that is responsible are mainly the CDS (credit default swaps) and the SIVs into which they were packaged - then sold to trusting counterparties - but with false bond ratings (usually given AAA, reserved for the highest quality, instead of the AA or lower (A) they really deserved).
An excellent recent article that fully backs up my contention is 'AIG's Complexity Blamed for Fall' which appeared in the Oct. 7 edition of The Financial Times.
Another excellent article which backs me up appeared in the October FORTUNE, entitled 'The $55 TRILLION QUESTION' (p. 135). Quoted in the piece, university professor Frank Partnoy - a former Morgan Stanley derivatives salesman - noted: "The big problem is there are so many public companies- banks and corporations, and no one really knows how much exposure they have to CDS (credit default swap) contracts."
Since most CDS contracts are made "on the fly", in no formal mode, and often by word of mouth on cell phones (ibid.) or via instant messaging, no one even knows where all of this $55 trillion in toxic waste is buried. As a hedge fund operator (Chris Wolf) quoted in the article put it:
"This has become essentially the dark matter of the financial universe" - comparing it to the dark matter discovered in astrophysics.
Finally, and most apropos, as the FORTUNE piece observed:
“you can guess how Wall Street's cowboys responded to the opportunity to make deals that: 1) can be struck in a minute, 2) require little or no cash upfront and 3) can cover anything.”
Clearly, the blame – 100 percent of it- is on the Street’s capitalist cowboys and all the quants they suckered into working for them for filthy lucre! The Right wing blowhards now need to give it a rest, put down their mics or typewriters, and get with the program.
That future program requires that all these CDS, as they currently exist, be banned outright from the world of finance. If they are ever reintroduced, they must be rigorously regulated, as all derivatives need to be. It is long past time that the Republican-sympathizing regressives stop blaming poor people for the ongoing credit debacle.
Labels: ACORN, CRA, Credit default swaps, derivatives, quants