Tuesday, December 30, 2008

The Amazing Aurora




In March 2005, I took the above photo of an auroral display just outside Chena Hot Springs, Alaska. I was glad my wife was with me to behold the sight (her first time), since it later evolved into an exciting, dynamic scene with bursts of red and green interacting and changing positions.

How are these magnificent displays caused?

One can visualize the Earth as a giant spherical magnet, with magnetic
field lines extending from its north to its south magnetic pole. These
magnetic field lines have the property that any charged particles
(protons (+), electrons (-), or ions) that approach them will spiral along them.

The Earth itself is "bathed" in the solar wind, a stream of high-speed
charged particles that flows into space from the Sun's corona, the hot,
gaseous envelope that spews these particles out continuously (more so when there is a violent explosion known as a solar flare).

Around the Earth the speed of these particles can reach 400-500
km/second. (Because of its high temperature, over a million degrees, the
corona gas is *ionized*, so it must consist of charged particles, mainly (+)
protons and (-) electrons.)

During high solar activity (e.g. near sunspot maximum) a higher flux of
these charged particles loads the solar wind and inundates the region around
the Earth.

The Earth's magnetic field traps these charged particles, and the highest
density is around the polar regions - which we refer to as the "auroral
ovals". In these regions, very high electric currents are set up, as the
charged particles start moving in unison about the magnetic field lines.
These currents can easily reach a few MILLION amperes.

As this discharge occurs, one or more outer electrons are stripped from
atoms in the atmosphere, for example from oxygen. The ions then RECOMBINE
with electrons to form neutral (e.g. oxygen) atoms again.

With this RECOMBINATION there is EMISSION of light in a certain part
of the visible spectrum.

For example, in the case of recombination of oxygen atoms, the emitted
light is in the GREEN region of the spectrum; the aurora or northern
lights we see then displays a kind of green, curtain-like shimmering. The remarkable
red aurora is produced by emission at the 630 nm (nanometer) line of oxygen, at relatively high altitudes (e.g. 200-600 km). The green aurora, by contrast, tends to form below 100 km, where the oxygen line at 557 nm is excited.
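As a quick sanity check of these wavelengths (my own sketch, not from the original post), the photon energies of the two oxygen lines follow from E = hc/wavelength:

```python
# Photon energy E = h*c / wavelength for the two atomic-oxygen auroral lines.
# h and c are standard physical constants; the wavelengths are from the text.
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Return photon energy in electron-volts for a wavelength given in nm."""
    E_joules = h * c / (wavelength_nm * 1e-9)
    return E_joules / 1.602e-19  # convert joules to eV

green = photon_energy_ev(557.7)  # green oxygen line, roughly 2.2 eV
red = photon_energy_ev(630.0)    # red oxygen line, roughly 2.0 eV
```

The red line, at the longer wavelength, carries slightly less energy per photon, which is consistent with its origin in a lower-energy atomic transition.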

Auroras can appear in both diffuse and discrete forms. In the first case the shape is ill-defined, and the aurora is believed to be formed from particles originally trapped in the magnetosphere, which then propagate into the lower ionosphere via wave-particle interactions.

Thus, multiple colored auroras can be explained by emissions from
different atoms in the upper atmosphere, mainly in the region of the
magnetic poles. This is also why, of course, they are more often seen in
the vicinity of the N, S magnetic poles. (Though there have been reports
of N. lights being seen as far south as northern Florida, especially
during periods of exceptional sunspot activity or LARGE SOLAR FLARES,
massive explosions on the Sun).

A great analogy has been given by Syun Akasofu, comparing the aurora to images on a TV screen. In this case the (polar) upper atmosphere corresponds to the screen and the aurora to the image projected on it. The electron beam in the TV (remember, we are talking about the old-style cathode-ray jobs!) corresponds to the electron beam in the magnetosphere. In a conventional TV, motions of the image are generated by the changing impact points of the electron beam on the screen. Similarly with the aurora: its motions, such as moving sheets or curtains, are produced by the moving impact points of the magnetospheric electron beams.

In gauging the power and intensity of auroras at different times, it is useful to remember that the aurora ultimately derives its power from the Sun, and specifically from the charged particles of the solar wind. This is why the most spectacular displays usually occur near sunspot maximum. Around those times the currents I noted earlier are "amped" up (no pun intended) to 10^6 A or more. To give an example, during a quiet Sun interval like the present one, the residual power of the magnetospheric generator is on the order of a tenth of a megawatt. If a new cycle comes on and the solar wind is activated, that power may reach a million megawatts for a few hours.

If intense enough, such solar storms can herald the onset of enormous induction currents, such as those that brought down the Hydro-Québec power grid in 1989.

But, as new solar cycle 24 slowly ramps up, the aurora - of whatever color or shape - will be eagerly anticipated.

Monday, December 22, 2008

Star of Bethlehem? Or a MacGuffin?

At many lectures I've given, one of the most-asked questions has been: "Was there really a Star of Bethlehem?"

This is a difficult question. Any preliminary pass for validating records entails assuming that any or all of the ancient scriptures are true historical artifacts, not mere mythological escapism masquerading as such. For example, Matthew 2:1-2 notes such a "star", but not one of the other quadriforms peeps a syllable about it. Why not? Why, if it was such a signal event (no pun intended) and an actual occurrence, did none of the other New Testament authors note it? This is disturbing, and it recalls the words of the Catholic historian Rev. Thomas Bokenkotter, in his monograph 'A Concise History of the Catholic Church' (page 17):

“The Gospels were not meant to be a historical or biographical account of Jesus. They were written to convert unbelievers to faith in Jesus as the Messiah, or God.”

This is a shattering admission indeed, and from a historian of Christendom’s largest Church. It is a de facto admission that no historical support exists for any of the accounts in the New Testament, including Matthew's star. But for the sake of this article, let us assume there is something there, some faint signal amidst the noise.

We consider ordinary bright stars first. For an observer at Middle Eastern latitudes 2,000 years ago, there would have been at least ten visible at this time of year, each in a different direction and location on the celestial sphere. Thus no one star would be visible for long, and certainly not at a fixed location or altitude such that it might provide a "search beacon".

The only other stellar candidate one might invoke is a nova, or exploding star. Certainly the incredible brightness common to such objects would attract attention, but could one have occurred then and provided the basis for the Matthew citation? Interestingly, this very attribute of attention-getting eliminates the nova theory from contention. Such a cataclysmic event could not have escaped notice, yet there is no mention of one in any astronomical records of the time - including those of the Chinese, who were already consummate star gazers.

An alternative explanation is that the object was a bright comet. An exceptionally brilliant comet was recorded in 45 B.C., but this is too far in advance of the probable Nativity date. Could such a comet have appeared suddenly and unpredictably around the right time? Possibly, but it's doubtful such an event would have been associated with anything beneficent. Two thousand years ago comets were uniformly regarded by all cultures as omens of impending disaster, so we can rule them out.

The only other reasonable explanation is that the Magi witnessed an uncommon astronomical alignment of bright planets. One such candidate is the triple conjunction of the planets Jupiter and Saturn in 7 B.C. A "triple conjunction" here means that Jupiter and Saturn appeared in close proximity no fewer than three times in succession.

One can speculate here that the Magi, in preparing for their journey, witnessed the first conjunction ca. May 29. A second conjunction, observed on September 29, could have established that Jerusalem was in the general direction they needed to go. Finally, a third conjunction on Dec. 4 would presumably have provided the final directional "fix", leading to Bethlehem some eight kilometers away.

The accuracy of the above speculations (and I reinforce that that's all they are!) rests on the dubious assumption that our present calendar is actually a bit off, and that Christ was born in 7 B.C. rather than the 1 B.C. usually quoted.

Given this, one is forced to concede that at the present time there is no comprehensive astronomical explanation which consistently accounts for all the details. The triple conjunction sounds like the best candidate, assuming our calendar really is off by some seven years.

Perhaps the event must remain forever intangible and beyond the realm of any scientific investigation. Or, perhaps there never was such an object in the first place - and Matthew simply resorted to some elaborate poetic license.

Friday, December 19, 2008

Some Fun with Transpositions

Various aspects of math can provide hours of fun and amusement. One of these entails transpositions. First, some basics on transpositions (and even and odd permutations):

A transposition is a permutation which interchanges two numbers and leaves the others fixed. The inverse of a transposition T is the transposition T itself, so that:


T^2 = I (the identity permutation, i.e. the permutation such that I(i) = i for all i = 1,...,n)


A permutation p of the integers {1, ..., n} is denoted by


[1 ...... n]

[p(1) ... p(n)]


So that, for example:



[1 2 3]

[2 1 3]


denotes the permutation p such that p(1) = 2, p(2) = 1, and p(3) = 3.
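This two-row notation maps directly onto an ordinary list, since the top row is just the positions 1..n. A minimal sketch (my own, not from the post):

```python
# Store a permutation as a list of 1-based images: p[i-1] holds p(i).
# The list [2, 1, 3] is exactly the bottom row of the two-row notation above.
p = [2, 1, 3]

def apply_perm(perm, i):
    """Apply a permutation (stored as a 1-based image list) to element i."""
    return perm[i - 1]
```

With this encoding, apply_perm(p, 1) gives 2, apply_perm(p, 2) gives 1, and apply_perm(p, 3) gives 3, matching p(1) = 2, p(2) = 1, p(3) = 3.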

Now, let's look at EVEN and ODD permutations:

Let P_n denote the polynomial in n variables x_1, x_2, ..., x_n which is the product of all the factors (x_i - x_j) with i < j. That is,

P_n(x_1, x_2, ..., x_n) = PROD_{i < j} (x_i - x_j)


The symmetric group S(n) acts on the polynomial P_n by permuting the variables. For p in S(n) we have:

P_n(x_p(1), x_p(2), ..., x_p(n)) = (sgn p) P_n(x_1, x_2, ..., x_n)


where sgn p = +/-1. If the sign is positive then p is called an even permutation; if the sign is negative then p is called an odd permutation. Thus: the product of two even or two odd permutations is even, and the product of an even and an odd permutation is odd.

Back to transpositions!

We just saw:

[1 2 3]

[2 1 3]

The above permutation is actually a transposition 2 <-> 1 (leaving 3 fixed).

Now, let p' be the permutation:


[1 2 3]

[3 1 2]


Then pp' is the permutation such that:


pp'(1) = p(p'(1)) = p(3) = 3

pp'(2) = p(p'(2)) = p(1) = 2

pp'(3) = p(p'(3)) = p(2) = 1


It isn’t difficult to ascertain that: sgn (ps) = (sgn p) (sgn s)

so that we may write:


pp' =

[1 2 3]

[3 2 1]


Now, find the inverse p'^-1 of the permutation p' above. (Note: the inverse permutation, denoted p'^-1, is defined as the map p'^-1 : Z_n -> Z_n such that p'^-1(p'(i)) = i.)

Since p'(1) = 3, then p'^-1(3) = 1

Since p'(2) = 1, then p'^-1(1) = 2

Since p'(3) = 2, then p'^-1(2) = 3

Therefore:

p'^-1 =

[1 2 3]

[2 3 1]
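Both the composition and the inverse worked out above can be checked mechanically. A short sketch (my own encoding; permutations stored as lists of 1-based images, as in the earlier example):

```python
# Permutations stored as lists of 1-based images: perm[i-1] = perm(i).
p = [2, 1, 3]       # the transposition 1 <-> 2
p_prime = [3, 1, 2]  # p' from the text

def compose(a, b):
    """Return the composition a∘b, where compose(a, b)(i) = a(b(i))."""
    return [a[b[i - 1] - 1] for i in range(1, len(a) + 1)]

def inverse(a):
    """Return a^-1: if a(i) = j then a^-1(j) = i."""
    inv = [0] * len(a)
    for i, j in enumerate(a, start=1):
        inv[j - 1] = i
    return inv

compose(p, p_prime)   # gives [3, 2, 1], matching pp' in the text
inverse(p_prime)      # gives [2, 3, 1], matching p'^-1
```

Composing p' with its own inverse returns the identity [1, 2, 3], as it must.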



Problem: Express

p =

[1 2 3 4]

[2 3 1 4]



as the product of transpositions, and determine the sign (+1 or -1) of the resulting end permutation.

Let T1 be the transposition 2 <-> 1 leaving 3, 4 fixed, so:


T1 p =

[1 2 3 4]

[1 3 2 4]


Let T2 be the transposition 2 <-> 3 leaving 1, 4 fixed, so:

T2 T1 p =

[1 2 3 4]

[1 2 3 4]


Then:

T2 T1 p = I (identity)

TWO transpositions (T1, T2) operated on p to reach the identity, so p itself is a product of two transpositions and the sign of the permutation is +1.

The permutation is even.
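The bookkeeping above can be automated. A sketch of my own that computes the sign via the cycle decomposition, an equivalent route: a cycle of length L is a product of L-1 transpositions.

```python
def sign(perm):
    """Sign of a permutation given as a list of 1-based images:
    +1 if even, -1 if odd. A cycle of length L contributes L-1
    transpositions, so we count those over all cycles."""
    n = len(perm)
    seen = [False] * n
    transpositions = 0
    for start in range(n):
        if not seen[start]:
            length = 0
            i = start
            while not seen[i]:
                seen[i] = True
                i = perm[i] - 1  # follow the cycle
                length += 1
            transpositions += length - 1
    return +1 if transpositions % 2 == 0 else -1

sign([2, 3, 1, 4])   # +1: the permutation in the problem is even
```

For the problem's p = [2, 3, 1, 4], the 3-cycle (1 2 3) contributes two transpositions and the fixed point 4 contributes none, so the sign is +1, agreeing with the hand calculation.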

Monday, December 15, 2008

Initial Conditions of the Big Bang?

In a recent issue (Oct.-Nov.) of the Intertel journal, INTEGRA, Ken Wear asks:

“Supposing the Big Bang theory is correct, what were the initial conditions that produced it?”

This can be approached in a more or less practical way by treating the ‘Big Bang’ as a solution to Einstein’s tensor (field) equations. (See, e.g. ‘Quantum Field Theory – A Modern Introduction’, by Michio Kaku, p. 643):

As per the 2.7K isotropic, microwave background radiation, we assume radial symmetry for the metric tensor – for which we adopt a Robertson-Walker form. This omits all angular dependence and leaves a function of form R(t) which sets the scale and defines an ‘effective radius’ of the universe.

We have:

ds^2 = dx^u g_uv dx^v = dt^2 – R^2(t) [ dr^2/(1 – kr^2) + r^2 d(S)^2 ]

where d(S)^2 is the solid angle differential and k = const.

Associate with this a fluid of average density rho(t) and internal pressure p(t)

The energy-momentum tensor becomes: T_0^0 = rho, T_i^i = -p (for i = 1, 2, 3)

with all other components zero.

After inserting these into the Einstein field eqns. we obtain:

((dR/dt)/R)^2 = (8 pi/3) G_N rho – k/R^2 + LAMBDA/3

and, for the acceleration:

(d^2R/dt^2)/R = - 4 pi G_N (p + rho/3) + LAMBDA/3

After setting the cosmological constant (LAMBDA) = 0 and eliminating rho, one obtains as a solution for R (the radius of the universe) a power-law function:

R = (9GM/2)^(1/3) [t^(2/3)]
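As a quick numerical sanity check (my own sketch, not from Kaku), the power-law character of this solution can be confirmed by measuring the logarithmic slope of R(t), which should come out to 2/3 at any epoch:

```python
import math

def R(t, A=1.0):
    """Scale factor for a matter-dominated, k = 0 universe: R = A * t**(2/3).
    A is an arbitrary normalization constant (illustrative only)."""
    return A * t ** (2.0 / 3.0)

# The logarithmic slope d(ln R)/d(ln t) should equal 2/3 at any epoch.
t1, t2 = 1.0e10, 2.0e10  # two arbitrary cosmic times (any consistent units)
slope = (math.log(R(t2)) - math.log(R(t1))) / (math.log(t2) - math.log(t1))
```

The slope is 2/3 independent of the times chosen, which is the defining property of a pure power law.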

One can deduce from this (ibid., p. 645) that at the Planck energy of 10^19 GeV (giga-electron volts) the symmetries of gauge theory were still united in a single force. This corresponds to a cosmic age of 10^-44 s.

This represents the closest approach of physics to the cosmic singularity (t = 0) but still defines the ‘Big Bang’ since the explosion is already underway and forces are still unified.

This continues as other symmetries 'break' one by one, leading to the radiation-dominated era (described by the Bose-Einstein distribution function, which applies perfectly to the expanding pure photon gas).

The fact that the 'Big Bang' can be obtained as a solution to one version of Einstein's tensor equations discloses that the QM and GR equations certainly don't 'blow up', nor are they impossible to use.

Mr. Wear then makes the assertion that it “would certainly be a violation of our concepts of cause and effect to say that suddenly, out of nothing….came this cataclysmic explosion”


But again, as I noted earlier, cause and effect notions are of little use. What we need instead are necessary and sufficient conditions for the event to occur - which, by the way, is not an 'explosion'! I refer Mr. Wear to ASTRONOMY magazine, May 2007, '5 Things You Need To Know', p. 31:

“The Big Bang wasn’t any kind of explosion. It was closer to an unfolding or creation of matter, energy, time and space itself. What would actually have been a much better name is ‘expanding universe theory’.”

As to how spontaneous cosmic inception can occur, this was referenced by T. Padmanabhan, 1983, ‘Universe Before Planck Time – A Quantum Gravity Model', in Physical Review D, Vol. 28, No. 4, p. 756.

To fix ideas, we are interested in first determining the gravitational action, and from this whether acausal determinism is more or less likely to apply. For any action S(g) if

S(g) << h (the Planck constant)

where h = 6.626 x 10^-34 J·s

we may be sure that classical causality is out the window and we are dealing with acausal determinism.

If S(g) >> h

the converse holds.

To evaluate S(g), as Padmanabhan shows (op. cit.), we need V, the 4-volume of the space-time manifold, for which we choose a de Sitter space in the first approximation.

We have

S(g) = c^3/ (16 pi G) INT_V R(-g)^1/2 d^4x


where G is the gravitational constant, c is the speed of light, the integral (INT) is over the 4-volume V with the differential (d^4x) to match.

In the big bang model one takes V as the spatial volume enclosed by the particle horizon, and bounded by the time span (t) of the universe. Thus, at any epoch t for k = 0,

S(g) ~ t^1/2

The particle horizon radius is defined by

r_p = 2 ct

Einstein's gravitational equations (with cosmological term, for the sake of generality) are

R ( i k ) - (1 / 2) g ( i k ) R = T ( i k ) + lambda g ( i k )

where the ‘lambda’ denotes the cosmological constant. For de Sitter space it is equal to:

(n – 1)(n – 2)/ [2 a^2]

where a is a scale factor and n denotes the dimension (4) of the volume under consideration. R(ik) is the Ricci tensor.

Now for S(g) ~ t^1/2, R (the scalar curvature of de Sitter space) = 0, so S(g) = 0

However, the above happens because the stress-energy tensor (T_ik) has trace = 0 in the early universe. The 'trace' is the sum of the diagonal elements of a tensor, e.g.

Tr(M) = 0

where M =

[0 1 0 ]
[0 -1 0 ]
[0 0 1 ]
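The trace of the example matrix can be checked directly; a trivial sketch of my own:

```python
def trace(M):
    """Sum of the diagonal elements of a square matrix (given as rows)."""
    return sum(M[i][i] for i in range(len(M)))

M = [[0, 1, 0],
     [0, -1, 0],
     [0, 0, 1]]

trace(M)   # 0 + (-1) + 1 = 0
```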


This means the limit must definitely be the acausal-determinism one, NOT the classical one - classical causality included.

Wear also alludes to a “sequence of oscillations” (ibid.) but this is egregious, since there will be no oscillations: the universe is not only expanding forever but accelerating in its expansion.

Universes that re-collapse (decelerate), expand forever with zero limiting velocity, or expand forever with positive limiting velocity (accelerate) are called, in turn, 'closed' (curvature k = +1), 'critical' (k = 0) and 'open' (k = -1), respectively.

Now, to determine whether any F-R-W (Friedmann-Robertson-Walker) cosmological template leads to deceleration or not, we need to find the cosmic density parameter:

OMEGA = rho / rho_c

where the denominator refers to the critical density. Thus if:

rho > rho_ c

(c = critical)

then the cosmic density is able to reverse the expansion (e.g. decelerate it) and conceivably usher in a new cycle (new Big Bang, etc.). The observations that help determine how large OMEGA is come mainly from observing galaxy clusters in different directions in space and obtaining a density estimate from them.

Current data, e.g. from the Boomerang balloon experiment and other detectors, show that OMEGA_matter ~ 0.3, or that:

rho = 0.3 (rho_c)

I.e. rho < rho_c, so there is no prospect of the cosmos re-collapsing.
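That comparison can be put into a trivial classifier. A sketch of the standard FRW bookkeeping (my own function and wording; it ignores dark energy, which only strengthens the no-recollapse conclusion):

```python
def fate(omega):
    """Classify an FRW universe by its density parameter OMEGA = rho/rho_c.
    Matter-only bookkeeping: OMEGA > 1 can recollapse, OMEGA = 1 is
    critical, OMEGA < 1 expands forever."""
    if omega > 1.0:
        return "closed (k = +1): expansion can reverse"
    if omega == 1.0:
        return "critical (k = 0): expands forever with zero limiting velocity"
    return "open (k = -1): expands forever"

fate(0.3)   # matter alone cannot reverse the expansion
```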

Precision measurements of the cosmic microwave background (CMB), including data from the Wilkinson Microwave Anisotropy Probe (WMAP), have recently provided further evidence for dark energy. The same is true of data from two extensive projects charting the large-scale distribution of galaxies - the Two-Degree Field (2DF) and Sloan Digital Sky Survey (SDSS).

The curves from other data, with corrected apparent magnitude vs. redshift (z), give different combinations of OMEGA_dark to OMEGA_matter over the range. However, only one of the graph combinations best fits the data:

OMEGA_dark = 0.65 and OMEGA_matter = 0.35

Corresponding to an expansion accelerating for the last 6 billion years, with much more dark energy involved (~ 0.65) than ordinary matter.

When the predictions of the different theoretical models are combined with the best measurements of the cosmic microwave background, galaxy clustering and supernova distances, we find that:

0.62 < OMEGA_dark < 0.76,

where OMEGA_dark = rho_dark/rho_c, and -1.3 < w < -0.9 (w being the dark energy equation-of-state parameter).

In tandem, the numbers show unequivocally that dark energy is the acceleration agent, and in addition that dark energy comprises the lion’s share of what constitutes the cosmos (~ 73%).

In addition, all of this data is firmly backed up by earlier Boomerang (balloon) data that – when plotted on a power spectrum- discloses two adjacent ‘humps’ one a bit higher than the other. The “first acoustic peak” and the “second acoustic peak” fit uncannily to the sort of spherical harmonic function that describes a particular plasma condition. In this case, one that conforms to the supernova-derived values of OMEGA (d, m). (See: ‘Balloon Measurements of the Cosmic Microwave Background Strongly Favor a Flat Cosmos’, in Physics Today, July 2000, p. 7 and 'Supernovae, Dark Energy and the Accelerating Universe', by Saul Perlmutter, in Physics Today, April, 2003, p. 53)


Lastly, astronomers make no “claim” that galaxies are moving apart with increasing velocities. We have actual data that this is so, and it’s based on the basic physics of the Doppler effect.



-----------------------! L1 -------! L2----

---!----------!------------------
L1(o) L2(o)

Thus, in the above pictograph, lines L1(o) and L2(o) are the observed, redshifted (by some number of nanometers) spectral lines for some distant object such that:

v = cz

where v denotes the velocity of recession, c is the speed of light, and z is the redshift:

z = {L2(o)/ L2} - 1

Note again that L2 is the (lab-emission) standard line and L2(o) the observed line wavelength. If z > 0 we say the line is redshifted and the object is receding.

To illustrate, say the hydrogen alpha line (emitted at 656.3 nm, e.g. L2 = 656.3 nm) is redshifted in some distant object to 666 nm (L2(o)). Then we have:

z = 1.015 – 1.000 = 0.015

This translates to a recessional velocity: v = (3 x 10^8 m/s)(0.015) = 4.5 x 10^6 m/s
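The hydrogen-alpha example can be reproduced in a few lines (a sketch of my own, using the low-redshift approximation v = cz from the text):

```python
c = 3.0e8  # speed of light, m/s

def recession_velocity(lam_observed_nm, lam_rest_nm):
    """Recession velocity from the low-redshift Doppler relation v = c*z,
    where z = (observed wavelength)/(rest wavelength) - 1."""
    z = lam_observed_nm / lam_rest_nm - 1.0
    return c * z

v = recession_velocity(666.0, 656.3)  # about 4.4e6 m/s for the H-alpha example
```

Keeping the unrounded redshift (z = 0.0148 rather than 0.015) gives about 4.4 x 10^6 m/s, consistent with the rounded figure above.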

As to Wear’s claim that it “may be difficult to place credence in such observations over a comparatively brief interval of time”, perhaps, but this “brief interval” is all we have to work with. What, will he dismiss all our painstakingly obtained data (including from the new CERN Large Hadron Collider) because they were obtained over brief intervals? This isn’t the way a Realist works, but it is certainly the modus operandi for an Idealist.

Saturday, December 13, 2008

An Invitation to God-template?

As religiosity heats up again approaching Christmas, a number of new "God" books - and some not so new- have come to my attention. One of these has been the book: ‘No One Sees God’ by Michael Novak, 1994 winner of the Templeton Prize.

Novak's is one of those books an atheist sometimes reaches for in the hope of seeing whether a Christian can forge a detailed apologia for his position. Also, whether an atheist can be greeted by something more than bombast, threats or venom for not kowtowing to the mainstream God-addiction.

For the first thirty or so pages, until the author veers into palpable atheist baiting (‘Not the Way to Invite a Conversation’, using the "New atheist" books of Dawkins, Dennett, and Harris as templates) it was a pretty good read. One sees a reasonable and rational mind at work, and one not afraid to admit that atheists may have something in their favor - for at least pushing lazy Christians to examine issues and aspects of their faith. (But, of course, to me - Christianity is not one monolithic faith, but a patchwork of about 70 different sects - from the largest, Roman Catholicism - to which Novak belongs, to the smallest Science of Mind enclaves)

Indeed, it reminded me of the sort of dialectic content that often permeates arguments with my longtime Christian friend, John Phillips. E.g. the author’s hypothesis (p. 17) that:

“unbelievers, especially those who have never known religion in their personal lives, or who have had bad experiences with it, experience a revulsion against a reasoned knowledge of God”.

Of course, as my friend John noted and recalled, I have not had such nice experiences of religion either – going back to a nun teacher in first grade pressing a hot needle (she’d just heated it using a match) into my right palm to remind me “of Hell”.

“Remember boy, the fires of Hell are a million times hotter than this and they burn you inside and out! Don’t ever forget it!”

Well, I didn’t, and thus began my journey to rational atheism as I note in my recent book, Atheism: A Beginner’s Handbook.

But to assert, as Novak does, that this might have elicited “a revulsion against reasoned knowledge of God” is to miss the correct interpretation by a country mile.

First, there can be no such thing as “reasoned knowledge” of anything until one has first provided the ontology. Ontology, the basis for primary existence, comes BEFORE knowledge (epistemology) and not after. Thus, all Novak’s later citations of the classics describing God and citing the likes of Aristotle and St. Augustine, do him no service. Rather, such diversionary passages merely show that Novak himself has no clue about who or what this “God” is, he can only say what It isn’t.

Thus, despite waxing long and hard (in reply to the three atheist authors, Dawkins, Harris and Dennett) in Chapter Three (‘Letter to an Atheist Friend’) he fails to make his case that his entity is anything more than a will-o-the-wisp centered in his fertile imagination, or his temporal lobes (as Michael Persinger’s work showed, see my review of his ‘The Neuropsychology of God Belief’) . He himself reinforces his own deficiencies when he admits (p. 274, Epilogue):

“The only knowledge of God we have through reason, all the ancient thinkers have taught us, is dark - and by the via negativa - that is, by reasoning from what God cannot be”

True enough! But Novak is not afraid, despite this candid admission, to make all manner of positive statements about God’s nature. A few samples off-hand:


p.196:

“God must be more like human consciousness, insight, a sense of humor, good judgment…”

“God knows well the creatures He made…he has to beat us around the ears a bit”

“In the end it was important for God that his son (who is one with him) became human and dwelt on the Earth”


The Trinity: p. 197,

“to think of God as a Trinity is to think of Him as more like an intimate communion of persons than as a solitary being”.

p. 198:

“When everything is suffused with reason, that is the presence of God”

How exactly are any of these statements, metaphorical or not, via negativa - the dark way? They aren’t! Especially the egregious and misplaced reference to “God’s son” (p. 196), which assumes there is ample evidence that a possible 1st-century charismatic Jewish rabbi was a genuine God-man. He wasn’t. He was a confabulation of Christians who felt compelled to copy and imitate the earlier pagan Mithraists’ god-man fables. (E.g. Mithras was born of a virgin, performed miracles, died on a cross, etc., then rose from the dead.)

The collection of Novak’s statements comprise an assortment of positive statements the author proffers about an entity he really doesn’t know because he hasn’t provided any ontology.

Now let’s get into some ontology here. First, following Bertrand Russell’s lead (‘The Problems of Philosophy’) we need to specify the practical and operative laws that apply to existents and entities, under the general rubric of “being”. (Thus, to be most accurate here, when an atheist agrees to debate a Christian, he is only agreeing to the presupposition of “being”. It remains to be worked out or proven, what the exact nature of this being is.)

By “existent” we mean to say that which has prior grounding in the mind, albeit not yet demonstrably shown in reality.

For example, the number ‘2’.

If the number 2 is mental, it is essentially a mental existent. (Do you see a literal ‘two’ lurking in the outside world, apart from what the human mind assigns, e.g. two apples, two oranges, two beetles, etc.?)

Such existents are always particular.

If any particular exists in one mind at one time it cannot exist in another mind at any time or the same mind at a different time. The reason is that as time passes, the neural sequence and synapses that elicited the previous “existent” at that earlier time, no longer exists. My conceptual existent of “2” at 3.30 a.m. this morning is thus not the same as my conceptualization of it at 4 p.m. It may APPEAR so, but rigorous neural network tests will show it is not. (E.g. differing brain energies will be highlighted at each time)

Thus, ‘2’ must be minimally an entity that has “being’ regardless of whether it has existence.

Now, we jump into the realm of epistemology from here, with the next proposition:

Generalizing from the preceding example, ALL knowledge must be recognition, and must be of entities that are not purely mental but whose being is a PRECONDITION- and NOT a result- of being thought of.

Applying this to the ontology of “non-contingent creator”, it must be shown it exists independently of being thought of. (E.g. there must be a way to declare and isolate its independent existence from the constellation of human brains which might get tempted to confabulate it out of innate brain defect or emotional need)

Here’s another way to propose it: If one demands that this entity (G-O-D) is not susceptible to independent existence, and therefore the mere announcement or writing of the words incurs validity, then the supposed condition has nothing to do with reality. It is like averring we all live inside a 12-dimensional flying spaghetti monster. I would be laughed into oblivion, especially as I incur no special benediction (as you do) by invoking the G-noun.

In effect, if the proposed “non-contingent creator” or its single word equivalent isn’t subject to independent existence, then its alleged “truth” is separated from verification. Truth then becomes what is communicated to us by proxy (or proxy vehicle, e.g. Pope Benedict, the Bible, Novak or any other Xtian apologist) with the existent (or a metaphor) in the mind of the communicator who deems himself qualified to make the “truth” exist.

But such a “truth” (or any associated invocation of “reason” in its service) is fraudulent and cannot be a valid expression of the condition. What it means is there is little assurance the communicated secondary artifact has all the elements and particulars needed to be an affirmed REAL entity. The truth is dispensed according to our needs (in this case the need to believe humans are seen after by a Cosmic Daddy) – all we need ignore is the constellation of evidence that refutes it.

Second: How to escape from this ontological problem?

Logicians have been aware for centuries of the pitfalls of appealing to pure causes, or to generic causality. We see this in multiple places in Novak’s book such as on page 217 where he takes Dan Dennett to task for what is claimed to be spurious invocation of causes and causality:

“Besides, Dennett interprets the cause of a cause as if both were the same, like one turtle on another”

This sort of causal approach is exactly what makes discussions sterile because it invites dead ends and ambiguity. By contrast, as Robert Baum notes in his text, Logic, 4th Edition, causal explanations are only of limited utility because of the intuitive, non-systematic nature of causal inference. Not only are we confronted with multiple types of cause but also proximate and remote causes. For example, a collision of a comet with a large meteoroid in space may be the proximate cause of the comet’s shifting orbit enough for its nucleus to collide with Earth. However, another collision – say of a large asteroid with Earth- may be engendered by the remote cause of the YORP effect.

For this reason, it is far more productive to instead reframe causes into necessary and sufficient conditions. As Baum notes (p. 469) this is advisable because the term ‘cause’ has been too closely associated in most people’s minds with a “proximate efficient” cause. Like one billiard hitting another and sending it into a side pocket. Or one small asteroid hitting another to send it into Earth- intersecting orbit.

In these terms, whatever “secondary causes” or other causes pertain to G-O-D are irrelevant and do not advance the arguments. We therefore put these causal references aside as Baum recommends, and substitute for them necessary and sufficient conditions for the claimed existent. If someone is unable to provide these, then either he doesn’t know what he is talking about, or has engendered a fantasy creation or phantasmagoria in his own brain which he now offers to us (non-believers) as reality.

Let’s now review what these n-s conditions are. A necessary condition is one without which the claimed entity cannot exist. A sufficient condition is one such that, if it is present, the entity must exist.

For example, consider a hydrogen emission nebula. The necessary condition is that the nebula, an interstellar cloud of hydrogen, exist in the first place. The sufficient condition would be proximity of the nebula to a hot, radiating star. In this case the star’s radiation causes the hydrogen atoms in the nebula to become excited (electrons jump to higher energy levels, then fall to lower ones with the emission of photons).

Leaving out all the fluff about “first causes” or “secondary causes” we therefore simply ask: What are the necessary and sufficient conditions for a “God” of the type Novak proposes to exist?

Note that, in having to explicate these, Novak is also compelled to show how his God differs from all the other God-concepts proposed. This also disallows the cavalier statement he makes in his Epilogue, where he conflates the Judeo-Christian God with that of the deists. E.g., page 274:

“If there is a God such as Jews, Christians and deists have held there to be…..”

But wait! The Deist’s God is NOT the same as that of Jews or Christians! Strictly speaking, Deism treated in its orthodox and traditional form is not Theism. Deism is, in fact, only one step removed from atheism. The only real difference is that in deism some kind of non-specific "first cause" is proposed, but after that all distinctions collapse. The atheist avers there is no one or nothing "minding the store" and so does the deist.

Deism, to give an analogy, is like a child who makes a toy with a gear wheel, a toy with the ability to move after being wound up and released. The child makes the toy (he's a clever kid), winds it up, releases it down the sidewalk, then walks away, never to glance at it or its final destination again. In this case, the child plays a role analogous to the ambiguous first cause of deism, and the toy is analogous to the universe.

One of the most egregious arguments of Novak is when he avers (p. 267) that:

"The trouble is that atheism is a leap in the dark and not a rational alternative. No one can possibly prove a negative or know enough to be certain there is no God."

But none of the atheists I know, myself included, DO THAT! So what Novak has succeeded in doing is inventing his own straw-man atheist, or tackling dummy, and nothing more. Let's take his 'leap in the dark' tag. Not at all! What we (rational atheists) say is that in the absence of YOU proving your claim (or at least giving us the necessary and sufficient conditions for it), we maintain a position of non-investment of our mental or emotional resources and energies in it. This is the conservative and natural position for a true rationalist to adopt.

In a similar vein, if a neighbor tells me he has alien ghosts inhabiting his attic, I would also withhold investment of any emotional or intellectual commitment. Until he can provide me with some empirical justification or evidence, I am fully entitled to dismiss his claim as having no remote connection to reality. Likewise for Novak's claim of a deity, no matter how many descriptive metaphors he can dredge up for it. Minus those n-s conditions, he is merely promoting phantasm as reality, like the alien-ghost-believing neighbor. True, on a vaster, perhaps more sublime scale, but in the end the same category of MacGuffin.

As to "proving a negative", no atheist does that either. We simply maintain the conservative "show us, we're from Missouri" outlook. Thus, we regard the improbability of a deity that is invisible and governs and designs the cosmos as about the same as that of one trillion alien ghosts from Tau Ceti inhabiting the DC Beltway.
The error of latter day Christian obscurantists like Novak is in placing the onus of proving a negative on us, when in fact the onus is on him to prove his existent is substantive and not his own mental confection. Failing any hard evidence, say like a video recording for an alien ghost or apparition, the next best thing is to give us those necessary and sufficient conditions signed, sealed and delivered. As it is, what we end up with (in Novak's book) is his own creation of deity as a mirror image of his mental self-representation.

There are so many other egregious assertions or claims pervading the book – such as that atheism licenses the meme that "everything is permitted", or that purpose must be exposed via science – that it would require a whole other book to deal with them. So let me end this review with the case of Novak's daughter, who, we are told (p. 42), "decided atheism cannot be true because it is self-contradictory".


He then goes on to expostulate that: "atheists want all the comforts of rationality that emanates from rational theism but without any personal indebtedness to any Creator, Governor, Judge..."


And so Novak's daughter concludes it is more reasonable to believe there is a God than to withhold investment pending any evidence demonstrating it is more than a munchkin of her mind. And let us recall here the words of astrophysicist Carl Friedrich von Weizsäcker: "It is impossible to understand rationally a God in whom one did not already believe."

First, Novak's daughter is wrong to accept that we define everything as "chancy" or "absurd". Indeed, there is within the cosmos a domain of natural-law regularity within which our deterministic mechanics works very well. It is only when one moves on to quantum theory (quantum mechanics) that hard prediction becomes dicey. Our job, as rationalists and scientists, is not to succumb to metaphysics or overeager mystics, but to try to show how much of our cosmos CAN be understood in terms of known laws and how much cannot. Thus, in setting the limits of rational inquiry we also set limits on our own penchant for mental eruptions and inventions. We use our empirical methods, whatever they are, to curb the human tendency to inflate reality.

What we do know now, compliments of our most modern advances in infrared and microwave astronomy, is that the assay of the cosmos holds little room for what we recognize as "order". The latest results from the BOOMERanG balloon survey and the Wilkinson Microwave Anisotropy Probe disclose that darkness pervades 93% of the cosmos, whether in the form of dark matter (23%) or dark energy (70%). We humans represent the emergence of a rational brain to fathom these mysteries, compliments of natural selection on one small planet – and the probably beneficial intervention of a large asteroid 65 million years ago, which wiped out our prime competition.

Second, NONE of these rational modes or methods was earned via "comfort" or given to us compliments of the god-mongers or Christians. We had to earn our rational results (say, those showing that dark energy behaves like a plasma that fits a spherical harmonic distribution) step by step, through patience, many errors, and the final ability to reach our goals. Thus, we owe absolutely nothing to any "Judges" or mystical "Creators" inhabiting the mind of Novak, or anyone else. To say so is to attempt to press science into the service of perpetuating never-ending mumbo-jumbo, theological dogmas and superstitious phantasmagorias.

I am a rational atheist, proud of it, and beholden for it to no fictitious entity that inhabits the wayward neurons of someone's brain.

The good news here is that this is one of the better "God books" in circulation. The bad news is that it's not anywhere near as good as James Byrne's "GOD" (Continuum, 2001). But then again, after reading Tipler's fulsome 'Physics of Christianity', it was a relief of sorts!

Friday, December 5, 2008

The Financial Black Hole


"This has become essentially the dark matter of the financial universe" - comparing it to the dark matter discovered in astrophysics.

Chris Wolf, hedge fund operator, quoted in FORTUNE, October 7.

"The big problem is there are so many public companies- banks and corporations, and no one really knows how much exposure they have to CDS (credit default swap) contracts."

Morgan Stanley derivatives salesman (Frank Partnoy) quoted in FORTUNE (ibid.)


The latest news in The Financial Times has not been encouraging, as their headline ('Index Points to Record Default Threat', p. 13, Dec. 2) continues to warn of the unfolding crisis in CDS, or "credit default swaps" – described by Chris Wolf as the "dark matter of the financial universe".

Following on from the FT article, alarm bells should be ringing: the Markit iTraxx Crossover index rose above 1,000 basis points (10 percentage points) for the first time since its creation, while in the U.S. the main credit default swap indicator (covering 125 companies) rose to 271 basis points.

Some of the world's leading investment grade companies now look to be in danger of default according to CDS prices.


What are these esoteric instruments, and why are we at such risk from them, especially given their being embedded in the mortgage securities market? That is what I want to explore in this blog entry. It entails understanding what the associated term "toxic debt" means and how it factors into the unfolding economic catastrophe we behold. Almost all of it is tied up in these "credit default swaps". The sum total of these esoteric financial "black holes" is now estimated at no less than $55 TRILLION. (See, e.g., 'AIG's Complexity Blamed for Fall', The Financial Times, Oct. 7, 2008, and 'The $55 TRILLION QUESTION', FORTUNE, October, p. 135.)

To comprehend why these CDSs constitute toxic debt, we need to delve into some financial history – in particular, a move made in the 1980s known as "securitization". Up until then, banks were the primary holders of mortgage debt. With a government deregulation "green light", however, banks were able to offload these mortgages (whose defaults had always cost the banks dearly) to Wall Street. There, clever people gathered millions of mortgages from across the country and repackaged them into entities called "collateralized mortgage obligations", or CMOs.

These were then inserted into bond funds, which were sold to cautious investors as "safe" instruments. After all, bonds are supposed to be safer than stocks, right? Wrong! Individual bonds such as U.S. Treasurys are – by virtue of having the name and backing of the U.S. government behind them. But not bond funds, which can be loaded with all manner of financial tripe that can engender losses over the short or long term.

As an example, most bond funds in the 1980s and 1990s were loaded with IOs, or interest-only strips, as well as inverse floaters and CMOs (referred to as "toxic waste" in bond-trader parlance). The IOs pay only mortgage interest. Inverse floaters, meanwhile, pay more when interest rates FALL than when they rise. All these tricks were used to try to juice up yields to lure investors. That, along with touting them as "government securities" – since, legally speaking, mortgage securities are "government-backed", but that doesn't mean your investment is FDIC-insured! In this way, the bond fund purveyors could get people to think they were making safe investments when nothing could be further from the truth.

My own wife was in one of these bond funds as part of a 401k “Life cycle” fund about ten years ago. I noticed every quarter, despite being in “bond funds”, she was losing more than $800 each quarter and getting no company match (because they aren’t obliged to match in the case of losses). Upon further scrutiny, I discovered the bond funds were laden with IOs and inverse floaters as well as CMOs. I immediately had her exit the Life Cycle thing and put all her 401k money into fixed income assets. Fortunately, she acted in time – as otherwise she likely would have lost more than 30% with the post-9-11 downturn.

We now move ahead to the late 1990s, and CMOs have transmogrified into CDOs (collateralized debt obligations) though the basic meaning is the same. Again, these represented millions of repackaged mortgages now sold as “securities” as part of bond funds.

Sometime in the early 2000s, a gaggle of “quants”- gifted mathematical types based in investment banks- got the idea for a creation that could juice up huge profits for their banks, and based on unregulated derivatives. Thus were born the “credit default swaps”. These were basically devised as “side bets” made on the mortgage securities market and the performance of the CDOs therein.

We all know what a "side bet" is. For example, if you go to a Vegas sportsbook, you will find you can not only make bets on a particular game – say, the Giants beating the Patriots in the Super Bowl – but also on happenings ancillary to the game. For example, one can bet on how many first downs the Giants will make in the first quarter, how many sacks the NE defenders will record in the game, or how many rushing first downs a particular player – say Sammy Morris of the Pats – will make. Any and all side bets are feasible.

In the case of the CDS realm, side bets were allowed on all sorts of things, such as whether particular CDOs would lose money, or the interest rate (average) on a segment of them would drop one half percent, or whether there would be at least 100,000 foreclosures in the third quarter of the financial year.

In the case of the credit default swap, all that was needed to make the bet formal was a counterparty. The "party" proposes the bet and the amount wagered, and the counterparty takes the bet. The actual exchange was often done on cell phones, with no formal records available other than what the cell phone statement showed.

Now, the investment banks’ quants realized that the bets as such might not grab the interest of the mainstream banks they needed to buy into them. After all, the banks could LOSE on many of these bets and it would be to their unending detriment. Thus, the quants took the CDSs and repackaged them along with regular mortgage securities – with CDOs, into what they called “structured investment vehicles”. Or SIVs.

These were then sliced and diced and sold to the mainstream, Main Street banks as safe securities. To make this "kosher", so to speak, credit rating agencies (such as Moody's and Standard & Poor's) were asked to give a bond rating – preferably the safest (AAA) – to signal to the mainstream banks that these were A-OK purchases.

Despite the fact that the rating agencies had not the faintest or foggiest clue what the SIVs contained, the ratings were duly granted, the things were sold to the banks, and the banks happily bought them up, unaware of what was actually in them. By 2003, the total of credit default swaps in the financial system was estimated at around $6 trillion. By August of this year, it had reached $55 trillion.

That is, $55 trillion in hidden and subjective financial BETS buried in mortgage securities as SIVs, with no formal tracer available! Compare this now to a bona fide debt, such as a car loan or mortgage from an approved bank or mortgage loan company. There, everything is spelled out in detail, so that even a person of average intelligence can see what he or she is getting into.

In the case of the mortgage, for example, a full amortization schedule – a table – is available showing the monthly payments and the split between principal and interest. There is no guessing, no doubt. The debtor knows his obligations and what he has to do to make good on them.
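As an illustration (not from the articles cited above), here is a minimal sketch of how such a schedule is computed, using the standard fixed-payment formula and hypothetical loan figures:

```python
# Sketch of a fixed-rate mortgage amortization schedule -- the kind
# of table a borrower sees, splitting each payment into interest
# and principal. All loan figures below are hypothetical.

def amortization_schedule(principal, annual_rate, years):
    """Yield (month, payment, interest, principal_paid, balance) rows."""
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # total number of payments
    # Standard annuity formula: fixed payment that retires the loan.
    payment = principal * r / (1 - (1 + r) ** -n)
    balance = principal
    for month in range(1, n + 1):
        interest = balance * r           # interest accrued this month
        principal_paid = payment - interest
        balance -= principal_paid
        yield month, payment, interest, principal_paid, max(balance, 0.0)

# A $200,000 loan at 6% for 30 years works out to about $1,199.10/month:
rows = list(amortization_schedule(200_000, 0.06, 30))
month, payment, interest, principal_paid, balance = rows[0]
print(f"Monthly payment: ${payment:,.2f}")
print(f"First month: ${interest:,.2f} interest, ${principal_paid:,.2f} principal")
```

Note how, early on, almost the whole payment is interest; the schedule makes that transparent, which is precisely what the CDS world lacks.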

By contrast, with the CDS (credit default swaps) nothing is known other than that the instrument has some subjective worth at one time. But HOW MUCH? TO WHOM? We have no clue since none of the esteemed quants who invented them knows where the “bodies are buried” so to speak! I am not even sure, if they were compelled to complete a typical ISO-9000 process form, they could replicate exactly HOW their esoteric instruments were created!

What we see here, and what should be abundantly evident even to the most hard-core libertarian ideologue, is that credit default swaps, and the instruments into which they have been buried and disseminated, are indeed "toxic waste" by any rational financial measure. I am not even sure one can call them "debt" – although to the banks that now have them on their books they represent humongous debt, since each tranche of these things lowers the value of the bank's assets by some factor and increases its liabilities.

To make this more understandable I refer to the graphic at the top of this entry- which compares two banks with roughly the same volume of assets, but different equities – since one bank (A) has fewer CDS.

Now, since banks owning the instruments into which credit default swaps have been buried will not readily disclose their extent, it stands to reason that one bank – say Bank A – cannot know how much "bad debt" or "toxic assets" another has on its books. If this is the case, a bank with relatively higher equity (Bank A in Fig. 1, at top) will be unwilling to lend capital to a bank whose toxic asset volume is unknown. After all, if it lends in good faith and the other bank then fails because of its higher CDS proportion, it will have only itself to blame.
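The asymmetry can be made concrete with a toy balance-sheet sketch (all numbers hypothetical, loosely following the two-bank comparison in the figure):

```python
# Toy balance-sheet comparison of two banks. Equity = assets minus
# liabilities; CDS exposure, once marked down, erodes assets and
# hence equity. All numbers here are hypothetical, in $billions.

def equity(assets, liabilities, cds_exposure, writedown_fraction):
    """Equity after marking CDS exposure down by writedown_fraction."""
    return (assets - cds_exposure * writedown_fraction) - liabilities

# Two banks with the same nominal assets but different CDS loads:
bank_a = equity(assets=100.0, liabilities=85.0,
                cds_exposure=5.0, writedown_fraction=0.8)   # light load
bank_b = equity(assets=100.0, liabilities=85.0,
                cds_exposure=20.0, writedown_fraction=0.8)  # heavy load

print(bank_a)  # Bank A keeps positive equity
print(bank_b)  # Bank B is wiped out

# The catch: neither bank can observe the other's cds_exposure, so
# neither can compute the other's true equity -- hence no lending.
```

The freeze follows directly: with `cds_exposure` hidden, every counterparty must be treated as if it might be Bank B.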

It is this unknown which has directly engendered the current credit freeze. Because no bank knows the volume of CDS any other holds, it cannot know the extent of any other bank's equity position or creditworthiness. Thus the LIBOR rate – the London Interbank Offered Rate, a measure of bank-to-bank lending confidence – has recently exploded. It most certainly will not begin to come down, reflecting higher lending confidence, until some agent steps in and buys up all the CDS now on the banks' books.

WHO is in the position to do this? Well, certainly no private entity has the resources! The only one is the government, and more specifically the quasi-governmental entity known as the Federal Reserve which can, if it must, create enough money by fiat to buy all the CDS and get rid of this toxic sludge once and for all.

Now, some libertarians will no doubt exclaim ‘Why not just do nothing?’ but in asking that they are clearly not cognizant of the degree of financial collapse that would precipitate from such folly. We are talking here of credit seizing up everywhere! No more money for student loans, at any price, no loans for businesses to meet payroll or plant improvement and you can forget about any expansions! No money for home construction, to purchase new cars, to do home renovations, to re-fi a mortgage……NADA! In effect, as Nobel Prize winning Princeton economist Paul Krugman has noted, one would usher in a Second Great Depression – and this one – by virtue of the global banking effects, would make the first look like the proverbial walk in the park.

Hence, bottom line, there is no option. The $55 trillion must be purged, and it must be done before banking collapses proceed like falling dominoes. Now, no one is arguing that full value must be paid for all those toxic assets – even 20 cents on the dollar would be better than nothing, though even that would add $11 trillion to the existing bailout deficit. But doing nothing is not an option, and only the most financially obtuse, with no remote clue of what is transpiring, would even propose it.
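The back-of-envelope arithmetic behind that $11 trillion figure is simple enough to check:

```python
# Buying $55 trillion notional of CDS at 20 cents on the dollar
# (the figures cited in the text above).
notional = 55e12        # $55 trillion total CDS notional
haircut_price = 0.20    # 20 cents on the dollar
cost = notional * haircut_price
print(f"${cost / 1e12:.1f} trillion")  # $11.0 trillion
```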

Finally, in the wake of this catastrophe - which will probably take four to five more years to unfold- it is clear that all the credit default swaps which caused this mess need to be outlawed. Further, all derivatives, irrespective of where or how they are used, need to finally come under SEC regulation.

We cannot afford another event like the credit default swap mess, ever again! For stock investors, the future will be bloody bleak, as Stephen Roach noted in his 'Market & Investing' FT column three days ago. As he notes, the "post-bubble" world will see a very anemic recovery. Don't expect the market to burst back up to where it was within months, or even years... possibly decades.

In the long deleveraging process, with all asset bubbles punctured, no fund or stock will be able to jack up yield by using tricks such as stacking investments with derivatives, IOs or other crap. People simply won't buy them. In the future investment world, the snake-bitten will reach only for what is understandable and transparent.

In a way this is a good thing since the stock market was always designed more as a financial casino for the wealthy, not for ordinary Jacks and Janes to park their precious retirement money.