Friday, May 23, 2008

Has Peak Oil arrived? (I)

As Americans prepare to launch their Memorial Day weekend travels, they are staring gas prices of $4 a gallon squarely in the face. Has Peak Oil finally arrived? That long-predicted point where oil production peaks and it is all downhill from there?

The signs are not sanguine. T. Boone Pickens, quoted in The Financial Times of May 21 ('Oil Futures Near $140 Amid Fears of Shortage', p. 1A), asserts we are now at the point where demand for oil is 87 million barrels a day, while only 85 million can be produced. This is acknowledging Peak Oil by any other name. Meanwhile, The Wall Street Journal of May 22 carried the article 'Energy Watchdog Warns of Oil-Production Crunch' (p. A1).

The piece noted that the world’s “premier energy monitor” is preparing a sharp downward revision of its oil supply forecasts. The full formal report will be ready by November, but already word is afoot that it will point to global oil supplies plateauing even as demand continues.

The article also notes (p. A12):

“A growing number of people in the industry are endorsing a version of the ‘peak oil’ theory: that oil production will plateau in coming years, as suppliers fail to replace depleted fields with enough fresh ones to boost overall output.”

The skinny right now is that the IEA will forecast a shortfall of as much as 12.5 million barrels a day by 2015. That would certainly send oil prices surging, probably to well over $600 a barrel and at least $8 per gallon at the pump (roughly what Europeans are paying now). Many Peak Oil watchers peg its arrival at around $7 a gallon.

What does Peak Oil mean? And is there anything that can be done to mitigate it?

The problem is not that difficult to break down.

The planet was endowed with roughly 3,000 billion barrels of oil. Of that, we have already consumed one third, and another one sixth of relatively cheap oil remains. After that resides another third of "break-even" oil (which costs as much energy to access as it delivers), and finally one sixth of very expensive oil (which costs much more energy to reach than it delivers).
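For readers who like to check the arithmetic, here is that breakdown as a quick Python sketch, using only the fractions quoted above (the round numbers are the text's, not precise geology):

```python
# Back-of-envelope split of the planet's original oil endowment,
# per the fractions given in the text above.
TOTAL = 3000.0  # billion barrels

consumed   = TOTAL / 3  # already burned
cheap      = TOTAL / 6  # relatively cheap oil still in the ground
break_even = TOTAL / 3  # yields about as much energy as extraction costs
expensive  = TOTAL / 6  # yields less energy than extraction costs

# The four categories exhaust the endowment:
assert consumed + cheap + break_even + expensive == TOTAL
print(consumed, cheap, break_even, expensive)  # 1000.0 500.0 1000.0 500.0
```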

At the heart of these considerations is the net energy equation (cf. Weisz, in Physics Today, July 2004, p. 51):


Q(net) = Q(PR) – [Q(op) + E/T]


In effect, for break-even oil one would find Q(net) = 0

Thus, there is no net gain in energy given the quantity that must be used to obtain it.

For the last 700 billion barrels,

Q(net) < 0, i.e. a net energy loss,

since the gross rate of energy production Q(PR) must be debited by the energy consumed in operation, Q(op), and by the energy E invested over its "lifetime" T.

Thus its Q(PR) will be small in relation to the bracketed quantity.
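A quick numerical sketch of Weisz's relation may make this concrete. The figures below are purely hypothetical (arbitrary energy units per year), chosen only to show the three regimes: profitable, break-even, and energy-losing oil:

```python
def q_net(q_pr, q_op, e_invested, lifetime):
    """Weisz's net energy relation: Q(net) = Q(PR) - [Q(op) + E/T]."""
    return q_pr - (q_op + e_invested / lifetime)

# Hypothetical numbers, arbitrary energy units per year:
print(q_net(100, 20, 400, 20))  # 60.0  -> profitable oil, Q(net) > 0
print(q_net(100, 80, 400, 20))  # 0.0   -> "break-even" oil, Q(net) = 0
print(q_net(100, 95, 400, 20))  # -15.0 -> the last, energy-losing barrels
```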

In a similar vein, Richard Heinberg ('The Party's Over') uses the quantity EROEI, or 'energy returned on energy invested', which for oil reached a high of about 30 in the 1970s and is still the highest of all energy sources at around 22.
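The connection between EROEI and net energy is simple arithmetic: the fraction of gross output left over as usable energy is 1 - 1/EROEI. (That identity is standard bookkeeping, not a figure taken from Heinberg's book.)

```python
def net_fraction(eroei):
    """Fraction of gross energy output left after paying the energy cost."""
    return 1 - 1 / eroei

print(round(net_fraction(30), 3))  # 0.967 -- oil at its 1970s best
print(round(net_fraction(22), 3))  # 0.955 -- oil today, per Heinberg
print(round(net_fraction(1), 3))   # 0.0   -- break-even: nothing left over
```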

Thus, the problem in a nutshell is not 'running out of oil' but running out of CHEAP oil.

Bottom line, we need not run out of the stuff before the world economy runs into problems of untold, unspeakable proportions!


“Peak Oil’ is somewhat misleading a term, since it suggests a specific date of peak production. In the real world, the top part of the oil production curve is nearly flat (cf. A. Bartlett, Physics Today, op. cit. p. 54)

In more practical terms, if 2008 is the year of peak oil production, then worldwide oil production in 2028 will be the SAME as in 1988 (counting, of course, only oil for which Q(net) > 0).

Also, it means that production in 2048 will be the same as 1968, and 2068 will be the same as 1948, and 2088 will be the same as 1928! All this while the population is expected to reach 9 billion or more in the SAME PERIOD! (cf. Bartlett, ibid.) In other words, as time goes on the available accessible oil constantly diminishes even as population constantly rises with the same demands.
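Bartlett's symmetry argument can be illustrated with a toy production curve. The bell shape and its width below are illustrative assumptions, not a forecast; the only point is that a curve symmetric about a 2008 peak forces 2028 production back to the 1988 level, and so on down the decades:

```python
import math

def production(year, peak_year=2008, width=40.0, peak_rate=87.0):
    """Toy symmetric (Gaussian) production curve, million barrels/day."""
    return peak_rate * math.exp(-((year - peak_year) / width) ** 2)

# Years equidistant from the peak have identical production rates:
for before, after in [(1988, 2028), (1968, 2048), (1948, 2068)]:
    assert math.isclose(production(before), production(after))
    print(before, "and", after, "->", round(production(before), 1), "mb/d")
```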

Geologist Marion King Hubbert predicted U.S. oil production would peak around 1970, which it did (at 2.2 liters per person per day). He also predicted world production would peak around 1995, which it would have, had the severe OPEC-induced oil crisis of 1973 not created an artificial supply problem, thereby pushing this critical peak back 10 years (to 2005, meaning we'd be three years past Peak Oil by now).

Wildly optimistic forecasts have placed the 'true' date at 2020-2035, but as Matt Savinar points out ('Life After the Oil Crash', p. 7), these are all based on government agencies 'that admit cooking their books', just as they do with the unemployment stats (by dropping from the rolls anyone out of work longer than six months).

Jon Thompson, president of ExxonMobil Exploration Company, in a 2003 paper posted on the company's exploration website (ibid.), noted more realistically:

“By 2015 we will need to find, develop and produce a volume of new oil equal to EIGHT out of every TEN barrels produced today. In addition, the cost of this new oil is expected to be CONSIDERABLY more than is now spent.”

Thus he implicitly acknowledges that the global production peak will have occurred before 2015. In other words, a peak in 2005 or any year since, even now, is perfectly consistent with his projection.

Next: Are other factors operating to mimic Peak Oil?

Thursday, April 10, 2008

Ten Answers for Marty Nemko

In the Mensa Bulletin of March, p. 28, Marty Nemko asked ten questions of "climate change alarmists". Below I answer each in turn, since it appears the piece was intended only facetiously and no real answers were expected (at least not for publication, according to the Bulletin editors!).

1. Does sufficient evidence exist that global warming will increase enough in the next century to justify the enormous financial and human costs?
Sufficient evidence does exist that global warming will increase enough in the coming century to wreak havoc on many scales, from massive crop failures associated with drought, to the spread of exotic diseases (e.g. dengue fever, Rift Valley fever, cholera, malaria, etc.) scarcely seen in temperate climates.

Prognostications from the Global Climate Modeling (GCM) project disclose a projected increase in Arctic surface temperature over the next century of five degrees Celsius. This is a critical threshold, especially as the Arctic is our "icebox" – if it "defrosts" the whole planet is in for it. In addition, as the late Carl Sagan pointed out (see his essay 'Ambush - the Warming of the World', p. 98, in Billions and Billions, Random House, 1997), the threshold for unstoppable warming change is 6 Celsius.

It is also a mistake to assume there must inevitably be massive costs and sacrifice to rectify global warming. I suggest getting hold of The Wall Street Journal of April 8, the Op-Ed page, for the article 'Climate Change Opportunity' by Fred Krupp. Krupp notes that "Solving global warming will be an added cost – but a bargain compared to the economic costs of unchecked climate change. And fixing this problem will create an historic economic opportunity."

Krupp even goes so far as to say that whoever solves the problem of finding suitable sources of clean energy will make a "megafortune". Indeed, Europe has already shown the way to green profits in many respects. The trick is to get on board sooner rather than later, because the longer the delay the greater the inevitable transition costs in the end.

2. If global warming is substantially man-made, why have CO2 concentrations increased in the last decade, yet the average global temperature hasn’t?
In terms of CO2 accumulation and its relation to temperature, there is inevitably a time lag between CO2 buildup and measurable temperature increases. Thus, a buildup of 30-40 ppm of CO2 in a decade would not translate immediately into a temperature signal that can be measured.
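The lag can be sketched with a minimal first-order response model (my own illustration, not anything from the Eos report): the measured temperature relaxes toward the CO2-implied equilibrium with some time constant tau, so only a small slice of the committed warming shows up within the first decade. The 0.3 C and 30-year figures below are hypothetical placeholders:

```python
import math

def realized_warming(delta_t_eq, years, tau=30.0):
    """Warming realized after `years`, given equilibrium warming delta_t_eq
    and a first-order response time constant tau (in years)."""
    return delta_t_eq * (1 - math.exp(-years / tau))

# Suppose a decade's CO2 buildup commits us to 0.3 C of eventual warming:
print(round(realized_warming(0.3, 10), 2))   # 0.09 -- barely measurable so far
print(round(realized_warming(0.3, 100), 2))  # 0.29 -- nearly all of it, in time
```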

Despite that, a report entitled: ‘Warm Oceans Raise Land Temperatures’, appearing in Eos Transactions, Vol. 87., No. 19, 19 May 2006, noted:

"It is recognized that land temperatures in recent years have consistently been above normal with indications that 2005 was the warmest year for globally averaged temperatures within the instrumental record."
3. In assessing whether global warming is occurring, why does Al Gore cherry-pick certain regions, for example focusing more on the Arctic than the Antarctic?

Al Gore focuses more on the Arctic than Antarctic because it is there that the most pronounced early effects of global warming will be manifested due to the vastly smaller ice sheet mass. Antarctica will also break up and melt (indeed, a 160 square mile segment has just been observed breaking up), but it will take a lot longer (though a section of the ice shelf equal to the area of Connecticut did break off several years ago!)

Thus, melting of the Arctic ice sheet will be the first contributor to global sea level increase. In addition, the positive feedback effects arising from changing surface albedo (reflectance of solar radiation) will first be manifest from the Arctic.

The basis has already been described by Sagan and others: Melting of ice caps (already occurring) results in diminished albedo (reflection of solar radiation back into space), a darker Earth surface - with more IR absorbed. As more Arctic ice melts, the positive feedback proceeds faster. The overall (mean) ocean temperature continues to rise, melting ever more of the ice sheet leading to even more absorption of solar incoming radiation, higher ambient temperatures and so on.
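That loop can be caricatured in a few lines of code. The feedback fraction f below is hypothetical; the point is that each round of melting adds a further fraction f of the previous warming increment, so the total converges to forcing/(1 - f) only while f < 1, and runs away as f approaches 1:

```python
def total_warming(forcing, f, rounds=200):
    """Sum the ice-albedo feedback series: forcing * (1 + f + f^2 + ...)."""
    total, increment = 0.0, forcing
    for _ in range(rounds):
        total += increment
        increment *= f  # melting darkens the surface, amplifying the next round
    return total

print(round(total_warming(1.0, 0.3), 3))  # ~1.429, i.e. 1 / (1 - 0.3)
print(round(total_warming(1.0, 0.6), 3))  # ~2.5,   i.e. 1 / (1 - 0.6)
```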

4. Why does the media imply that the IPCC report reflects the consensus of thousands of scientists, when – as reported by CNN – there are dissenting scientists, like Richard Lindzen of MIT?
5. If there's consensus, why on Dec. 20, 2007, did the U.S. Senate Committee on Environment and Public Works issue a report that 400 scientists now believe the evidence doesn't support that "consensus"?
Nemko interprets “consensus” in these questions to mean 100% agreement, but this isn’t the case at all. We have always known a certain minority hard core of scientists (the contrarians – who probably want more attention than being lumped in with others) have existed. People like S. Fred Singer of the University of Virginia and Richard Lindzen of MIT. There will always be such skeptics, and they occur in every field.

What we mean by a consensus is that the majority of the papers published in the past 10-15 years (which number nearly 1500) have supported the thesis of “climate change” and-or global warming. This also means that a majority of climate scientists support that position.

According to Daniel Schrag, professor of geochemistry and director of the Laboratory for Geochemical Oceanography at Harvard (‘Some Don’t Like it Hot’, Harvard Gazette, 3-22-01) the IPCC is by nature a conservative organization. The breadth that gives its findings weight – 3,000 scientists, reviewers, and government officials were involved in drafting the reports – means that consensus had to be reached across broad points of view, including those from countries whose economies are based on oil production.

As Schrag has noted (ibid.)

"This is inevitably a conservative view. This isn't something coming from Greenpeace."

The point of Schrag’s remark is there is indeed a consensus – which the Webster’s Unabridged Dictionary defines as: “A majority of opinion”. Not the totality of opinion!

As for the “Senate Report going largely unnoticed in the media” – well, I would guess that is because the Senate is largely comprised of people who lack any credentials in climate science – and hence are not informed or educated enough to offer a professional scientific opinion – only a political one!

6. If the climate change debate is over, why will 100 scientists argue against the “consensus” at the International Climate Science Coalition conference on March 2-4, 2008?
The "International Climate Science Coalition" does not represent the views or conclusions of mainstream climate science, but rather a tiny subset of contrarians, many of whom are supported by the oil, coal and gas lobbies. Indeed, the "Coalition" itself is a think-tank proxy for the fossil fuel lobbies. Thus, it is neither mysterious nor astounding that global warming deniers will continue to be heard in whatever forum they can garner, even one of their own!

7. Why should we spend many billions and greatly restrict freedoms when experts believe that even if global temperatures rise, efforts to slow it will fail?
8. Why should we spend a fortune on a likely failed attempt to stop what may be a nonexistent or relatively minor problem when many more devastating threats will likely befall us?
First, you are assuming that leaving global warming unchecked will be a minor cost. In fact, all the evidence points to it being a major one. One UK study last year estimated the cost of doing nothing at more than $2 trillion over the next 25 years. On the other hand, the Wall Street Journal piece I referenced earlier showed that money can be made on solving the global warming problem rather than avoiding it. As for "restricting freedoms", I simply don't buy that driving a 3 or 4 ton, gas-guzzling Hummer amounts to a guaranteed "freedom", especially when most of the profits will end up in Saudi pockets.

And we know what 19 Saudis are famous for! With “freedom” comes responsibility, and the latter includes the responsibilities to be astute caretakers of the Earth for the yet unborn, as well as caretakers of our own national security by limiting the use of oil produced by unstable or terror-breeding nations.

Third, it is certainly very probable that efforts to stop global warming will fail. However, this misses the larger picture: if we can at least slow global warming we may still be able to escape its worst manifestation, the runaway greenhouse effect. The late Carl Sagan was interviewed by Ted Turner nearly 20 years ago on CNN and asked about thresholds for catastrophic climate change. Sagan mentioned an increase of six degrees Celsius from where the planet was then. THIS is what we must strive to avoid. Recall it was Sagan whose Ph.D. thesis explained the exceptionally high temperatures on Venus (hot enough to melt lead) as arising from a "runaway greenhouse effect".

In his Turner interview (still have it on tape) Sagan noted the same could happen on Earth if we don’t pay attention.

And he added there is no more “devastating effect” than a runaway greenhouse and all the harm it would incept.

9. Throughout history, humans have solved such panics- through advances in science and technology- without requiring society to move backward. Why is this situation different?
Again, you are assuming society will “move backward”. But as the earlier cited WSJ piece noted, it will more plausibly mark a major ADVANCE. Not only in terms of preserving some quality of life for future generations, but also in terms of economics. In this regard, the true Luddites are those who demand the status quo, keeping oil as the number one fuel, and dismissing conservation, while the converse may hold our collective salvation.

True, humans have solved problems before with technology, and some intriguing proposals have come forward, such as using mirrors to reflect solar radiation back into space. But almost all these solutions create as many problems as they purport to solve. For example, the space-mirror scheme may detrimentally affect crop growth or alter the planet's water cycle in unforeseen ways by changing the insolation. Do we really wish to add these to the ravages of "global climate disruption", the term used by James McCarthy, Professor of Biological Oceanography at Harvard?

10. Why did the Copenhagen Consensus, a group of 36 experts including four Nobel Prize winners, conclude that, among 17 challenges facing the world, efforts to stop global warming should receive the lowest priority?
The Copenhagen Consensus, organized by longtime skeptic Bjorn Lomborg and composed entirely of economists, would naturally have rated global warming lowest in its priorities for challenges facing the world. They are not climate scientists, after all! They'll be vastly more concerned with economic blowback!

Look, what have these illustrious economists wrought with their misplaced priorities? These bozos don’t even factor in the natural resource bounty of the planet!

These glaring and inexcusable omissions, the global monetary values of ecosystems, were assayed for one particular year in the study 'Putting a Price Tag on Nature's Bounty' (Science, Vol. 276, p. 1029):

Ecosystem ---------- Area (10^6 ha) --- Global Value ($ trillions)

Open Ocean --------- 33,200 ----------- 8.4
Coastal ------------ 3,102 ------------ 12.6
Tropical Forest ---- 1,900 ------------ 3.8
Other Forests ------ 2,955 ------------ 0.9
Grasslands --------- 3,898 ------------ 0.9
Wetlands ----------- 330 -------------- 4.9
Lakes and Rivers --- 200 -------------- 1.7
Cropland ----------- 1,400 ------------ 0.1
----------
Total Worth: $33.3 Trillion
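As a sanity check, the table's entries really do sum to the quoted total:

```python
# Global ecosystem values from the table above, in trillions of dollars:
ecosystem_values = {
    "Open Ocean": 8.4, "Coastal": 12.6, "Tropical Forest": 3.8,
    "Other Forests": 0.9, "Grasslands": 0.9, "Wetlands": 4.9,
    "Lakes and Rivers": 1.7, "Cropland": 0.1,
}
total = sum(ecosystem_values.values())
print(f"${total:.1f} trillion")  # $33.3 trillion
```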

According to the study's lead author, Robert Costanza, who directs the Institute for Ecological Economics at the University of Maryland, until economists incorporate such "externalities" in assessing costs, they are fooling themselves that they have any real science. And if they don't incorporate these "externalities" then what they are "tweaking" in their alleged models isn't even real. (Rather, it is some idealization fabricated in an ivory tower and disconnected from blood-and-sweat economics.)

If one isn't aware of the total costs of production, one can't possibly set a genuine price on goods, and that alone demolishes the pet concept of the "law of supply & demand" (which is certainly unlike any Newtonian law of physics!) that so many economists exalt. This exclusion of natural capital, because it is claimed to be 'unquantifiable', means that economics cannot possibly be objective, since it omits the basis of many of the resources consumed or polluted for the use of so-called production capacity.

But the basis for excluding natural capital is the same one that ranks global warming as 17th among priorities to solve. If the economics brigade (which brought us to this current credit crisis) can’t even get natural capital properly placed, why trust them to prioritize global warming properly? I don’t!

Saturday, March 22, 2008

Evolutionary Confusion

About a year ago, in a letter appearing in the local press, the correspondent argued that: a) liberals were forcing the teaching of evolution on honest, upstanding citizens, and b) if evolution were indeed fact – as opposed to theory – then humans should not deign to protect any animal species or classify them as “endangered.”

After all, evolution dictated survival of the fittest, did it not?

In fact, the miffed letter writer erred on both counts, and those errors disclose the price of ignorance that we pay in this country. Not only in terms of issues – from endangered species, to the reality of global warming, and the uselessness of "missile defense" – but also the extent to which the electoral process itself is warped by misinformation.

First, liberals have never “forced evolution” on anyone. Rather, the teaching of evolution has been dictated by countless facts and evidence (including genetic, DNA links) which would merit its teaching for any enlightened population. In truth, the whole realm of biology is dependent on an evolutionary underpinning to thoroughly understand the origin of species, and the process of adaptation.

This is why various scientific organizations (e.g. The American Geophysical Union, American Association for the Advancement of Science, etc.) have from time to time inveighed against efforts – in assorted school districts – to either limit the teaching of evolution or place it in opposition to known religious doctrines (“creationism” and more recently, “intelligent design theory”)

They appreciate the price in ignorance that will be paid by our young people years later, and the learning deficit created, placing them at a disadvantage to students of other nations (as numerous standardized test results already disclose). Such students are unable even to recognize that a theory is the highest standard in science: one in which the predictions of a hypothesis have been formally confirmed, over and over.

Second, 'survival of the fittest' was not coined by Charles Darwin. It was, rather, the coinage of the English sociologist Herbert Spencer, in a misguided attempt to extrapolate Darwinian principles to the social sphere. (E.g. The Study of Sociology, 1873, serialized for an American audience in Popular Science Monthly.)

In his serialized tracts, Spencer absolutely repudiated all state assistance to the poor, needy, physically feeble, or infirm – based on a bastardized “survival of the fittest” concept. He believed, erroneously, that people are like beasts that had to be forced to compete for precious resources. If they didn’t do this, they’d produce degenerate, weakened humans- unfit in the evolutionary scheme. Hence, the name “Social Darwinism”.

This Social Darwinism remains embedded in the current incarnation of rabid individualism disseminated by ideologues, who salivate non-stop at the prospect of using it to dismember social safety nets. Offering pitiful “faith-based” services in return.

In terms of eliminating species that encroach into human habitats, advocates confuse natural selection (a valid Darwinian principle) with human interference in ecologies. Decimating them to expand artificial human environments. Evolution has nothing to say about artificial expansions of an aggressively over-populating species. It does, however, assert the same limits of adaptation and resources will apply to that species as any other.

In this sense, humans – in their immense hubris and species-chauvinism- must realize they cannot detach themselves from the natural world and the laws that inevitably apply there.

Saturday, March 1, 2008

"Evil, Sin and the Devil" - Getting a Grip (Part II)

In Pastor Mike’s parlance, the mentally projected “Satan” is indeed “like a ravenous beast seeking innocents to devour”. Think of the T-Rex and its insatiable appetite for flesh. Think of components and aspects of the T-Rex brain in each of us. Lying in wait for the right trigger to set it off – as for Cho, the killer in the Virginia Tech massacre. Now, project that horror and its instincts to tear, rip and kill anything different or vulnerable outside yourself. Voila! We have the “Devil”. Only really a psychological embolism adorned in reptilian tendencies already within us. So alien and terrifying we have to project it outside to a nameless “devil”. It’s simply too appalling and horrific for any of us to take ownership for it.

Interestingly, some authors turn these concepts back on themselves and arrive at mind-boggling conclusions. The authors of the book 'Mean Genes', for example, make the case that genetic imperatives often drive the most fundamental (epigenetic) morality. The hybrid brain in this sense is merely the facilitator of the genes' imperatives. Perhaps there is a method behind the "madness" of the brain's disjunctive function: to aid and abet a primal, epigenetic morality.

On the local level, the genetic imperative means I protect my family first in the event of disaster. The welfare of others is secondary. It is my family’s genes that must prevail. To the extent they do, epigenetic morality is satisfied. A certain pool of genes has increased its survival value.

In the larger societal sense and deformed to an extreme, the epigenetic imperative leads to horrors such as the Holocaust, where Jews were depicted as inimical genetic “aliens” to “true Germans” and the Fatherland. (In a trip to Germany in 1985, I still found a number of WWII era Germans who accepted this.) And hence could be dispensed with as serious threats, once their own humanity was removed. Likewise, the genetic imperative running amuck explains the Rwandan genocide, where Tutsis could be dispensed with as the “genetic aliens” to the REAL Rwandans, the Hutus. (In this case, Hutu talk radio played a key role in spreading the memes for the epigenetic morality)

Examining these genocides at a detached, objective level, one cannot help but notice the analogies with ant (or bee) species that invade the habitats (e.g. hives) of others, kill them, make off with their queens and seize their resources.

A mindless epigenetic “god” at work.

In this sense, the epigenetic morality and imperative emerges as the real “god” articulated in the Bible, while the perfecto, “goody two shoes” posturer (invented later by the clever, angelic leaning neocortex) is the fake. This was the contention of author Lloyd Graham in the last chapter of his book, ‘Deceptions and Myths of the Bible’, 1979.

For example, as Graham observes (p. 315):

“Satan is matter and its energies and the (Temptation of Jesus in the desert) story is but a mythologist’s way of telling us…that in the inanimate world matter and energy dominate….The only consciousness here is the epigenetic and this is – as yet- wholly incapable of controlling violent forces. This explains why our imaginary God of love and mercy allows these forces to destroy us”.

Graham’s depiction of the material and epigenetic god is one embedded in carnal lusts, revenge and avarice, so how can humanity be any different?

As Graham earlier notes (p. 272):

“Man owes God nothing, not even thanks. Whatever is, exists because of necessity and not divine sufferance. And whatever exists suffers because of nondivine Causation. Our world is full of suffering, tragedy, disease, disaster, pain; we demand a better reason than religion has to offer.”

Perhaps for this reason, Graham insists that it is the de facto “creations” – humankind- who are the genuine authors of workable morality (“dynamic justness” not moral justice) not the claimed “Maker”.

Religious scholar Elaine Pagels makes much the same point in her book, 'The Gnostic Gospels', pointing out that the Gnostics regarded the biblical deity as a degenerate sub-being which they called "demiurgos".

Of course, the Christian reading this will no doubt chime in: “What about free will? Can we not resist the epigenetic imperative?”

Maybe, but it’s by no means clear that any such entity as “free will” exists other than in limited domains. (E.g. I have the “free will” to choose a vegetarian diet over an all meat diet)

Even Einstein, writing in his marvelous book ‘Ideas and Opinions’ was suspicious that humans were genuinely free agents. As he noted:

“The man who is thoroughly convinced of universal causation …..has no use for the religion of fear and equally little for social or moral religion. A God who rewards and punishes is inconceivable to him for the simple reason that a man’s actions are determined by necessity – internal and external- so that he cannot be responsible….any more than an inanimate object is responsible for the motion it undergoes"

The beauty of atheism is that it dispenses with both demiurgos (the petulant, genetic “evil god”) and “Satan”, and atheists emerge as grown up enough to assume responsibility for their own actions, rather than whining that “the devil made me do it”. We know the real “devil” inheres in those untamed genetic imperatives, and we also know that to the extent we are self-aware – we can often defeat the more parochial and self-serving tendencies and sometimes aspire to greatness. Leap-frogging and circumventing our human limits.

Thereby we can avoid blaming every major human tragedy and back step on some imagined supernatural “dark force” permeating existence and just waiting to catch us unawares.

There is a dark “force” in the cosmos and we call it “dark energy”. But it is something that can be discerned by physics and has no supernatural attributes. Intelligent humans would do best to invest their time investigating the nature and mystery of dark energy, rather than squandering time on silly phantasmagorias and fabrications of the mind like “Satan” and “evil”.

Thursday, February 28, 2008

"Evil, Sin and the Devil" - Getting a grip (Part I)

Having seen my brother (Pastor Mike) become ever more unhinged, and his use of "evil", "sin" and "Devil" multiply with his delusions, it is time to get a grip. Is he talking and writing of actual, objective realities, or blowing gaseous emissions out of his mouth (and mind) that bear no semblance to the real world?

In truth, the use of the word "evil" is freighted with superstitious baggage, of little use in a rational, technological age. It presumes origination from "a negative supernatural force" or "Satan". It overly complicates the issue while unnecessarily multiplying theoretical existences. We already know that brain structures (e.g. the amygdala, reticular formation, etc.) can account for all atavistic behaviors, from misdirected lust, to baby killing, to mass murder or genocide. One need not invent a supernatural special being or super Devil to account for them!

About a year ago, I also took Wendy Kaminer to task in an issue of Free Inquiry ('Religious vs. Secular Concepts', April/May 2007, p. 65) for an earlier piece in which she (a claimed secularist) used the loaded terms "sin" and "evil". Kaminer replied in a short note to my letter, basically averring she had no intention of altering her "attachment to moral categories of good and evil." This is her prerogative and right, of course, but it doesn't alter my own point one iota: that if Kaminer (or anyone else) embraces such attachment then she is more rightly a religionist and not a secularist. Secular people refrain from using religious words, labels, concepts or language in the sense of positively incorporating them into a secular Zeitgeist.


But let us return to my pastor brother's contentions, and in particular his question: ‘Can an atheist deny the existence of evil?’

I maintain that denial of "evil" is not the issue, but primitive language use is. "Evil" is an antiquated and redundant term, since what people refer to as "evil" is easily explainable in terms of brain evolution. Homo sapiens is fundamentally an animal species with a host of animal/primitive instincts residing in its ancient brain or paleocortex.

Meanwhile, the paleocortex sits evolutionarily beneath the more evolved mesocortex and neocortex, the latter of which crafts concepts and language. One clever person has compared this tri-partite structure to a car design welding a Lamborghini to a Model T Ford chassis, with a 1957 Chevy engine to power the Lamborghini.

There is much evidence that the aggregate of human behavior will get progressively worse as the complexity inherent in technological and globalized societies increases, but brain evolution is unable to keep pace with it. Basically, we are a species with the capability of making nuclear weapons and intercontinental missiles – but with Cro-Magnon brains – and a swatch of reptilian tendencies.

Indeed, the mixed brain design, in terms of adaptability to modern society, is already theorized as one major cause of depression and mental illness in modern society (e.g. Andrew Solomon, The Noonday Demon, Chapter 11, 'Evolution', p. 401).

The behavior resulting from this hybrid brain is bound to be mixed, reflecting the fact that we literally have three “brains” contending for emergence in one cranium. Behavior will therefore range from the most selfless acts (not to mention creative masterpieces) to savagery, carnal lust run amuck and addictions that paralyze purpose.

The mistake of the religionist is to associate only the first mode of behavior with being "human" and not the latter. In effect, disowning most of the possible behaviors of which humans are capable, and hence nine-tenths of what makes us what we are. Worse, not only disowning these behaviors but ascribing them to some antagonistic dark or negative supernatural force ("Satan"), thereby making them into a religious abstraction.

The neocortex then goes into over-drive, propelled by its ability to craft words for which no correspondents may exist in reality. Suddenly, our “souls” are at risk of being “lost to Satan” who will then fry us in “Hell”. In effect, the religionist’s higher brain centers divide reality into forces of darkness and light, just like the ancient Manicheans.

As the divide grows and persists, certain behaviorally idealistic expectations come to the fore, and a mass of negative or primitive actions is relegated to “evil”. Humans tuned in to this Zeitgeist, which is soon circulated everywhere, begin to suppress all behaviors that they regard as defective or “sinful”. They don’t realize or appreciate that humans are risen apes, not “fallen angels”.

Are we all “sinners” as Pastor Mike claims? No, we’re an animal species saddled with a tri-partite brain whose higher centers often become self aware of the gulf between the base, atavistic and primitive behaviors (emanating from the reptilian brain) and the ideal, non-atavistic behavior conceived by the neocortex. The neocortical language centers then craft the term “sin” to depict the gulf between one and the other.

In this context, the concept of “sin” makes eminent sense. Sin emerges as the label placed on specific brands and forms of “evil”. In reality, “sin” itself is predicated on an exaggerated importance of humans in the universe. Thus, it elevates (albeit in a perverse way) the importance of humans in an otherwise meaningless cosmos. With “sin” the human has at least the potential of offending his deity - thereby getting its attention - as opposed to being relegated to the status of a cosmic “roach” (which is how any advanced alien sentience would regard us).

“Sin” then is localized and reactive behavior at the personal, individual level. “Sin” impinges on and affects the deity that so many believe in. Take away the deity, and sin loses its allure and quickly becomes redundant. How can there be “sin” if there is no deity to offend, or to notice “sin” and tote up all the little “black marks” in its “book of future judgment”?

“The Devil” or “Shaitan” is simply the projection of the most primitive brain imperatives onto the external world. And yes, this imperative (which I will soon get to in more detail in Part II) is capable of mass murder as well as genocides. A supernatural Satan need not be invoked here, only the ancient brain of reptiles – acting collectively – aided and abetted by a newly perverted neocortex, which now does the reptile brain’s bidding, as opposed to attempting to halt it.

The more real and present danger inheres in zealots and extremist religionists projecting their Satanic delusions on fellow humans, and thereby demonizing them to convert them into the most debased and vile outcasts. Thus does Pastor Mike refer to atheists as "agents of Satan" and "Satan's disciples" - as if the ability to merely question rigid or uncritical adherence to a faith qualifies as a demonic attribute.

Wednesday, February 13, 2008

Getting to Know the Sun (II)

Sunspots, like weather systems (which, incidentally, arise from solar heating effects), are not subject to high-order accuracy of prediction. Unlike billiard balls and planets, whose positions can be predicted to near perfection, sunspots are large thermodynamic and magnetic systems. To be truthful, solar researchers at this stage are not even sure of the interplay between sunspot magnetism and thermodynamics, much less able to predict how they combine to yield flares or other effects (sprays, surges, prominences). The precise relationship of these phenomena to one another is also an ongoing source of contention among solar researchers.[1]

Consider the sunspot itself. It appears to be just a simple dark blemish on the solar surface, or photosphere. In fact, the darkness is purely an illusion resulting from its lower temperature (about 6,000 F) compared to the surrounding photosphere (11,000 F). The sunspot's "single" state is also somewhat illusory - since nearly all sunspots occur in pairs of differing polarity, like the north and south poles of a magnet. (Solar astronomers call them "plus" and "minus" poles).

A question of long standing seems to be how a cooler gas[2] can be immersed in a much hotter gas without itself reaching a higher temperature. This seems to defy one of the laws of thermodynamics. Investigations have only recently disclosed the underlying reason: the powerful magnetic fields - up to 5,000 times stronger than the Earth's. These incredibly powerful magnetic fields trap the cooler gases inside a confined region (tube) which actually "floats" on the hotter, less dense medium of the photosphere.
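The confinement described above can be sketched with the standard simple pressure-balance picture: the external gas pressure is matched by the spot's internal gas pressure plus its magnetic pressure, B^2/(2*mu_0). The field strength and photospheric gas pressure below are rough, assumed textbook-scale values, used only to show that the two pressures are comparable.

```python
import math

# Simple sunspot pressure-balance sketch:
#   P_gas(outside) = P_gas(inside) + B^2 / (2 * mu_0)
# Field strength and gas pressure are rough assumed values.
MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def magnetic_pressure(b_tesla):
    """Magnetic pressure P = B^2 / (2 * mu_0), in pascals."""
    return b_tesla**2 / (2 * MU_0)

B_SPOT = 0.3          # ~3000 gauss, a strong sunspot field (assumed)
P_PHOTOSPHERE = 1e4   # rough photospheric gas pressure, Pa (assumed)

p_mag = magnetic_pressure(B_SPOT)
print(f"Magnetic pressure: {p_mag:.2e} Pa")                   # ~3.6e4 Pa
print(f"Ratio to gas pressure: {p_mag / P_PHOTOSPHERE:.1f}")
```

Because the magnetic pressure comes out the same order as the surrounding gas pressure, the spot interior can sit at lower gas pressure (and hence lower temperature) and still be in equilibrium with its hotter surroundings.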

In the photo (via link) below is a typical, large sunspot pair, as captured in a catadioptric telescope with solar filter. The image illustrates how spots are (currently) theorized to be at opposite ends of a "magnetic tube", spanning opposite magnetic polarities. In this case the leader spot (lower right) is at one end of the tube, and the (partly bifurcated) follower (upper left) is at the other.


http://groups.msn.com/PlanetStahl/mathphysics.msnw?action=ShowPhoto&PhotoID=41



From the photo, the leader spot is at the positive polarity end of the magnetic loop containing the cooler gases, while the follower is at the negative polarity end. The key aspect to note in the above is that sunspots are pictured as sections of magnetic loops seen in projection. The darkest regions occur where the trapped gases are coolest and the magnetic fields strongest. Sunspots themselves display two distinct regions: a dark, central umbra and an outer, lighter penumbra - with the latter at the (somewhat) higher temperature.

Why are sunspots so important? For one thing, they may signify that the Sun is a variable star - or at least not as well-behaved as humans surmise. For example, if the so-called "Maunder Minimum" is a genuine effect, then the reduced numbers of sunspots may well have ushered in the "little ice age" from 1645-1715. And if a "little ice age" can be ushered in by sunspots (or rather their absence) then perhaps a major ice age could be. In any case, the answer won't be found unless sunspots are studied closely, to reveal any long term periodicity in their behavior. (Currently, a new consensus is emerging that no new ice ages will occur once the carbon dioxide concentration exceeds 450 parts per million - which may happen in the next fifty years)

As it is, astronomers know that the Sun exhibits an 11-year average sunspot cycle. This means that every eleven years - on average - sunspot numbers reach a peak: what is called the "sunspot maximum". Superimposed on this is a 22-year cycle, over which the magnetic polarity of the "leading" spot reverses and then reverts. To fix ideas, consider again the photo above: the "leader" spot and the "follower" will exchange polarities in the next cycle, assuming that their orientation is in the direction of the Sun's rotation (east to west). The 22-year cycle means that 22 years (again, on average) must elapse before leader spots again have positive polarities at sunspot maximum. (At the intervening 11-year maximum, the leaders will all have negative polarity).
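The 11- and 22-year rhythms can be turned into a toy timetable. This is strictly an idealization: real cycles run anywhere from roughly 9 to 14 years, the reference maximum year of 1958 is an assumption for illustration, and the "+"/"-" labels are arbitrary.

```python
# Idealized sunspot-cycle timetable, assuming a strict 11-year cycle and
# a reference maximum in 1958 (both assumptions; real cycles vary).
REFERENCE_MAX = 1958   # assumed reference maximum year
CYCLE = 11             # average cycle length, years

def nearest_maximum(year):
    """Return the idealized maximum year closest to the given year."""
    n = round((year - REFERENCE_MAX) / CYCLE)
    return REFERENCE_MAX + n * CYCLE

def leader_polarity(year):
    """Leader-spot polarity flips every 11-year cycle (a 22-year period)."""
    n = round((year - REFERENCE_MAX) / CYCLE)
    return "+" if n % 2 == 0 else "-"

print(nearest_maximum(1972))                      # -> 1969
print(leader_polarity(1969), leader_polarity(1980))
```

Note how the polarity function only repeats its value every other maximum - the 22-year magnetic cycle riding on the 11-year sunspot-number cycle.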

For some reason, very large sunspots with "complex" magnetic structures (that is, multiple +/- polarities in the same sunspot) can become unstable and give rise to extremely violent flares. This happened in August 1972 - with a giant flare hurling lethal protons at the Earth that would have killed any astronauts then in space. It also occurred in March 1989, when a giant sunspot group went unstable and precipitated the flare which downed the Quebec power grid.

A 1992 movie with Charlton Heston, entitled Solar Crisis, depicted a future time in which the Earth is threatened by an impending "mega" flare that is forecast to tear away the atmosphere and shower deadly radiation on everything. Could such a fantastic flare actually occur - and could it be predicted as accurately as in the movie? This is doubtful on both counts.

For one thing, the energy of flares is limited by how much "free" (extractable) energy can be stored in magnetic tubes. A good analogy is a rubber band. If a rubber band is wound up over and over it gains free potential energy (available for future motion). Release it now and it will rapidly twist apart to its original state. Magnetic tubes on the Sun do something very similar. They are twisted up by the motions of the Sun's turbulent surface[3] - and store free magnetic energy as they twist. The more twisted they are, the more free magnetic energy they acquire - to power flares, and the particle bursts that accompany flares.

Flare energy is limited because: 1- there is a limit to the size a magnetic tube can attain, 2- there is a limit to how much twist a tube can acquire. Each of these limits the amount of total magnetic energy available for release. On that basis, it is unlikely that humans will ever see flares greater than those which occurred in August, 1972 and March, 1989. Obviously, if the Sun is a variable star - its physical conditions could change. One of these conditions is its rate of "differential" rotation[4] , which is believed to be responsible for the origin of sunspot magnetic fields. On that assumption, a higher rate of differential rotation could portend much more violent flares, precisely because magnetic tubes can acquire stronger magnetic fields (greater twist).
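The size-plus-twist limit can be made concrete with a back-of-envelope estimate of the magnetic energy stored in a flux tube, E ~ (B^2/2*mu_0) x volume. Every number below (field strength, tube radius, tube length) is an assumed, illustrative value, not a measurement - the point is only that plausible tube dimensions cap the energy at roughly large-flare scale.

```python
import math

# Order-of-magnitude energy stored in a uniform-field cylindrical flux
# tube: E = (B^2 / (2 * mu_0)) * (pi * r^2 * L). All inputs assumed.
MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def tube_energy(b_tesla, radius_m, length_m):
    """Magnetic energy (J) of a uniform-field cylindrical tube."""
    volume = math.pi * radius_m**2 * length_m
    return (b_tesla**2 / (2 * MU_0)) * volume

# Assumed tube: 100 gauss (0.01 T), 10,000 km radius, 100,000 km long
E = tube_energy(0.01, 1e7, 1e8)
print(f"Stored energy ~ {E:.1e} J")   # of order 1e24 J, a large-flare scale
```

Doubling the field (i.e. more twist) quadruples the energy, which is why a faster differential rotation, as speculated above, could portend more violent flares.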

A much more credible risk from the Sun would be an inherent variability leading to temperature differences, and weather changes on Earth. At the present time, all the evidence indicates that whatever major solar variability exists is on a very long time scale, say ten thousand years. However, as seen earlier, there is nothing to rule out shorter term variability being superimposed on the longer term. In effect, the Sun could very well undergo smaller percentage changes in its energy output over smaller time scales - say 100 years or less[5]. The only way to determine if short term variability exists is to have solar satellites in place for detailed observations over extended times. At the present time, such satellites are not in any budget. Indeed, the last major solar satellite effort was the "Solar Max", which is now defunct. The SOT, or Solar Optical Telescope, was to be launched in 1985-86, but that was killed in a single budget cut - much to the consternation of solar researchers such as myself.

If advanced extraterrestrials were to pay us a visit, they would probably be astounded at a so-called intelligent species that sends all sorts of satellites and vehicles into space - none of which is dedicated to continuously observing the home planet's nearest star! I would surmise that a truly intelligent species would place solar observations (e.g. of its own star) as having the highest priority of any space-based research - certainly over space shuttles, planetary probes and spy satellites. There wouldn't be any mystery to a solar priority, since after all the Sun is the 'hub of life' for all living organisms anywhere in the solar system. If the Sun's nuclear reactions 'turned off' for just a day[6] - all life on Earth would be frozen out of existence before the Sun's nuclear furnace switched on again. In other words, for all their technological prowess, humans would not survive even a day without the benefit of the Sun's life giving warmth and light.

For a physical perspective, it is useful to regard the Earth as situated inside the Sun. Technically, this is accurate - since Earth is literally bathed in the "solar wind" - a rapidly moving, tenuous gas continuously cast off from the Sun's corona, or outermost atmosphere. Our total understanding has already been significantly enhanced by the Ulysses spacecraft, whose mission included passes over the solar polar regions.[7]

We now understand that the Earth - and all the other planets - condensed from the disk of gas and dust surrounding the infant Sun. As the planets took shape and settled into their orbits, they carried away most of the system's rotational motion (angular momentum). The legacy of this transfer is that the Sun now rotates much more slowly (about once in 27 days) than it did in that early epoch.

The ironic end of the story is that one day in the distant future, an aged and inflated 'red giant' Sun will reclaim most of its original pieces. No life of any form will survive since the Earth will effectively be engulfed. All records of human habitation will be completely obliterated in raging firestorms long before the planet itself is reduced to a cinder. If any single, cogent reason can be advanced to induce humans to colonize other worlds, this is it.

[1] This was also a source of controversy at the 75th AGU Conference mentioned in the earlier footnote. At the May 26th, 1994 Thursday morning session, for example, solar physicist Hal Zirin and space physicist A.J. Hundhausen totally disagreed on whether solar flares led to coronal mass ejections or the other way around. Zirin steadfastly maintained that flares caused the mass ejections, while Hundhausen just as strongly maintained the reverse was the more accurate hypothesis. Such stark disagreements actually show the glaring need for a higher spatial resolution for solar-observing instruments. At least this was the general consensus that emerged from the Space Weather sessions I attended.
[2] I am using the term "gas" here, but actually the Sun's material is plasma. A gas - like oxygen at room temperature, has all its electrons. A plasma, on the contrary, is missing one or more electrons because of the high temperature it is subjected to. We say it is "once ionized" (one electron lost) "twice ionized" and so on. The significance of electrons being lost is that the gas becomes electrically conducting. Thus, a hydrogen plasma in the Sun will conduct electrical currents.
[3] In fact, the Sun has no definite surface, because it is a gaseous body. The 'surface' usually referred to, is that from which visible light emanates: the photosphere ('light-sphere'). It is very dense relative to higher layers, and is where most of the turbulent motions occur that can twist magnetic tubes.
[4] Differential rotation applies to a rotating fluid or gas. It means that the rate of rotation depends on the position of a point on the surface. In the case of the Sun, the fastest rotation is at the lowest latitudes, near the equator.
[5] There are some who claim variability arises from external changes in the Earth's orbit, rather than from internal changes in the Sun. Most often, the "Milankovitch Effect" is mentioned - whereby slow, cyclic changes in the Earth's orbit and axial tilt alter the distribution of sunlight received, leading to higher or lower mean global temperatures. It should be said that paleoclimate researchers now credit Milankovitch cycles as a pacemaker of the ice ages; the open question is how such modest insolation changes are amplified into full glacial-interglacial swings.
[6] Of course, there would be a long delay between the 'turn off' and when it was noticed on Earth! The average time needed for photons in the Sun's core (where nuclear reactions occur) to filter to the solar surface and escape - is about 1 million years. Thus, if all nuclear reactions shut down today - then the Sun would only 'black out' in one million more years (on average).
[7] See the papers on Ulysses findings: Science: Vol.268, May 19, 1995.

Wednesday, February 6, 2008

Getting to know the Sun (I)

I became enthralled with astronomy at the age of 12, after sending in a cereal boxtop plus fifty cents for a toy telescope. After the telescope arrived, I recall eagerly using it for nightly excursions around the South Florida sky. It may have been a toy - but it had enough magnifying power to allow me to resolve star clusters, and see the larger craters on the Moon. Some years later, I graduated to a larger telescope I constructed myself, and then to an amateur-astronomer sized scope: a Tasco 2.4 inch refractor.

With each succeeding telescope I more or less looked at the exact same objects: the brighter planets (Jupiter, Mars, Saturn, and Venus), star clusters (like the Pleiades), and the Moon. The idea was always to see how the increased size of the telescope allowed me to see more detailed features. However, by the time I reached the Tasco I was beginning to get somewhat jaded. What I needed was to find some purpose in my astronomical pursuits - apart from simple star-gazing.

It was around that time that I discovered - in my telescope carrying case, a tiny, darkened piece of flint glass embedded in a thick metal frame. Curious, I took it out and examined it up close. Then I rummaged through the box to find a small pamphlet describing the use of "your solar filter". Evidently, the filter was screwed on to the front of the eyepiece assembly - just ahead of the eyepieces. Once in place, the Sun could be viewed safely and sunspots became visible. I was absolutely amazed that so many ominous looking dark spots could nearly fill up the surface and not make any difference in the brightness.

For the next several weeks I became completely captivated by my solar observations, specifically the transit of large sunspots across the Sun's disk. What particularly fuelled my interest was a book I had borrowed from the local library entitled Our Star The Sun, by Donald Menzel of Harvard Observatory[1]. In this fascinating book I learned that the Sun - the "daytime star" - was the source of all life on the Earth, and actually "the only practical reason for the study of astronomy". Change the physical nature of the Sun by as little as one percent, and the survival of the human race, and all life on planet Earth, would be threatened. In the words of Sir Fred Hoyle:[2]

...if the Sun were to vary a little, only a very little, we should soon be faced by a situation beside which the political crises that fill our lives would fall into entire insignificance.

To many laymen, the Sun appears so hot and bright, that no conscious connection is made to the pinpoints of light seen at night. The Sun tends to be segregated from the other stars entirely. Why the extreme difference in appearance if the Sun is a star like the others? To illustrate, the nearest star to Earth other than the Sun is very similar in physical characteristics: size, brightness, mass and surface temperature. It is called Alpha Centauri, and is 4.3 light years away. This works out to 270,000 times further than the Sun's 93 million miles. It looks like a fairly bright pinpoint because of its distance. However, the Sun would look exactly like it if its distance could be magically increased 270,000-fold.
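The 270,000 figure quoted above is easy to verify with the rough conversion of one light year to miles:

```python
# Check the distance ratio quoted above: 4.3 light years vs. 93 million miles.
MILES_PER_LIGHT_YEAR = 5.88e12   # approximate
SUN_DISTANCE_MILES = 93e6

alpha_cen_miles = 4.3 * MILES_PER_LIGHT_YEAR
ratio = alpha_cen_miles / SUN_DISTANCE_MILES
print(f"Alpha Centauri is ~{ratio:,.0f} times farther than the Sun")
```

The ratio comes out near 272,000 - the "270,000-fold" of the text, to rounding.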

A physical principle used in astronomy states that the apparent brightness of a light source - like a star - decreases as the square of its distance. In concrete terms, if I look at a 100 watt bulb and a 40 watt bulb from ten feet away, I will judge the 100 watt bulb to be significantly brighter (two-and-a-half times, to be exact). However, if I were to move the 100 watt bulb to a distance of 100 feet - keeping the 40 watt bulb at ten feet - what will I see? The 100 watt bulb is now ten times further than it was for the original comparison, so its apparent brightness is now (1/10)^2 = 1/100 of what it was, or equivalent to a 1 watt bulb. Thus, the 40 watt bulb will now appear 40 times brighter even though intrinsically it isn't.
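The bulb comparison can be worked numerically. The helper below simply restates the inverse-square law in arbitrary brightness units:

```python
# The bulb comparison above, worked with the inverse-square law.
def apparent_brightness(power_watts, distance_ft):
    """Relative apparent brightness ~ power / distance^2 (arbitrary units)."""
    return power_watts / distance_ft**2

b100_near = apparent_brightness(100, 10)   # 100 W bulb at 10 ft
b100_far  = apparent_brightness(100, 100)  # same bulb moved to 100 ft
b40       = apparent_brightness(40, 10)    # 40 W bulb kept at 10 ft

print(round(b100_near / b100_far))  # -> 100: moving 10x farther dims it 100-fold
print(round(b40 / b100_far))        # -> 40: the 40 W bulb now looks 40x brighter
```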

The same principle applies to the more distant stars. There are stars thousands of times brighter than the Sun, but they appear as dim pinpoints because they are so much farther away. One would have to "move" them to the same distance as the Sun (93 million miles) to get an appreciation for their actual properties in relation to the Sun's.[3] (Though, if that feat could be achieved, all life on Earth would be incinerated in a microsecond!) What is the point of all this? Simply this: without an awareness of the inverse-square law for light, humans would be deluded into the false perception that their particular star (the Sun) was the biggest and brightest in the universe. This is assuming they included the Sun in the same category as the distant stars. It is certainly not intuitive.

Application of the physical principle removes the Sun's specialness - placing it in the category of a very ordinary, garden variety star known as a "yellow dwarf". There are a multitude of stars that are thousands of times hotter and brighter, and hundreds of times bigger in size. Be that as it may, these same physically imposing attributes place those stars in an improbable position to support life-bearing planets. This will apply to the Sun in another 4 billion or so years: it will be approximately three hundred times its present diameter as a "red giant". All the inner planets, including Earth, will be reduced to burnt out cinders as the Sun expands to devour them one by one. If giant, Earth-smashing asteroids don't succeed in impressing upon humans the need to spread themselves around the cosmos, this certainly should.

A catastrophe like the one above can be well-predicted from nuclear physics. Astronomers know the Sun will one day consume its hydrogen, forcing it to burn less-efficient helium (into which the original hydrogen would have been converted).[4] When the helium is used up, an even less efficient fuel in the form of carbon remains. To compensate for the considerable loss in fuel efficiency (lower temperatures) the solar core must contract under the force of gravity. This generates a good deal of heat, causing the surrounding layers of the Sun's atmosphere to inflate, since a heated gas expands. The only major uncertainty is whether this inflation will amount to 100 times the present size, or 500. In terms of human life surviving, it won't make any difference: planet Earth will be just as well roasted.

Of course, solar changes need not be as dramatic as these for life's grip to become very tentative. A change in solar energy output by as little as two percent could threaten most species on Earth with extinction - since the planet would either be transformed into a vast, arid wasteland with daytime temperatures approaching 125 degrees or, alternatively, a frigid glacier with daytime temperatures averaging 50 below zero. Possibly, some hardy bacteria and viruses might survive - but not much else. It is difficult to see how humans could sustain themselves in a hostile environment nearly devoid of water.

In the early and mid-1980's, measurements made by an instrument called ACRIM (Active Cavity Radiometer Irradiance Monitor), aboard the SolarMax satellite, detected an increase of one-half percent in the Sun's brightness on several occasions - due to the presence of many large sunspots. The instrument was capable of detecting changes in energy as small as one-thousandth of one percent. These small order differences would amount to an increase in the Sun's surface temperature on the order of 100 degrees Fahrenheit.

Given that an increase in energy output correlates with the appearance of many spots, it is reasonable to suppose the opposite is true as well: a dearth of sunspots correlates with cooler temperatures. While the ACRIM did find a few such cases, the most notable study is that of John Eddy on the so-called "Maunder Minimum" - over the 17th - 18th centuries, when relatively few sunspots were visible from historical records.[5] Coincidentally, much lower than average temperatures were the norm, earning the period the nickname "little ice age".

In Menzel's book I discovered that the sunspots I observed could "grow" to be as large as 100 thousand miles in diameter - more than twelve times the diameter of the Earth! The Sun's surface was also the site of titanic explosions called solar flares which could engulf as many as one thousand Earths, and release an energy equivalent to two-thousand million megaton H-bombs exploded simultaneously! (The Sun itself has a diameter equal to nearly 4 Earth-Moon distances, and if it could be placed on an immense balance - 330,000 Earths would be needed to equalize the scales.)

For the remainder of my high school senior year, until I left for college, I used my Tasco with the solar filter to observe the passage of sunspots. Following Menzel's guidelines I was actually able to track the same group of spots across the solar surface and deduce the Sun's rotation period, of about 27 days. Thus began what was to be a lifelong fascination with the nearest star, and the basis for future research that has continued to this day. (Though the emphasis has changed from simple sunspot transits to their relationship to solar flares).
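The Menzel-style rotation estimate works by tracking a spot's apparent longitude on two dates and extrapolating the drift rate to a full 360 degrees. The longitudes and interval below are made-up illustrative measurements, not real observations:

```python
# Estimating the Sun's rotation period from two sunspot-longitude
# "measurements" (the numbers here are invented for illustration).
def rotation_period(lon1_deg, lon2_deg, days_elapsed):
    """Days for a full rotation, from the spot's drift rate in longitude."""
    drift_per_day = (lon2_deg - lon1_deg) / days_elapsed
    return 360.0 / drift_per_day

# Suppose a spot drifted from longitude -40 deg to +27 deg in 5 days:
period = rotation_period(-40.0, 27.0, 5.0)
print(f"Estimated rotation period: {period:.1f} days")  # ~26.9 days
```

A real estimate would also have to account for the observer's own motion and for differential rotation (the period depends on the spot's latitude, as footnote [4] of the previous post notes).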

Is the Sun really as important as Menzel (and others) have portrayed it? Consider this: in 1973, the Skylab orbiting platform, with solar observing equipment aboard, detected a solar flare that wiped out twenty percent of the (then) ozone layer over North America. In March, 1989, a mammoth solar flare erupted in a region of very large spots, knocking out Quebec's power grid. Some six million people were deprived of electricity for about nine hours.[6] A massive magnetized cloud, from a solar eruption on January 6, 1997, is believed to have knocked out a Telstar 401 communications satellite on January 11.

Granted, "monster" flares such as these are somewhat rare, but they disclose how precarious human existence is in relation to the Sun's behavior. Fortunately for humans, the Sun is so steady in behavior that we scarcely notice it - unless we go out and get sunburned. If, by contrast, the Sun were a variable star that changed only a few percent each year in temperature and brightness, we would all be in jeopardy. A minor increase in temperature and brightness of a few percent would blind most inhabitants of Earth, and make the temperatures extremely uncomfortable - reaching the hundred-plus degree mark even at the poles, in winter.

As it is, the issue of whether the Sun is variable or not has still not been settled. There is some circumstantial evidence, including variable tree-ring growth, that the Sun is a "long-term" variable star, and more recent evidence it is short term as well.[7] The former case implies changing its temperature and brightness a few percent every 10,000 years or so. There has been some speculation that just such a change occurred around 10,000 years ago and brought the last ice age to a close. More recently, from the mid-1600s to the early 1700s, an extended period of cold weather gripped much of the planet, prompting the term "little ice age" to be used. Interestingly, this corresponded to a period of below-average sunspot frequency now referred to as "the Maunder Minimum", after the astronomer who made the original association.

The possible variability of the Sun, as well as its potential for violent eruptions (in solar flares), makes it a pre-eminent subject of astronomical, and human, importance. Indeed, attention to the Sun's behavior extends beyond the domain of esoteric research. Defense agencies and the military, for example, as well as power companies and telecommunications systems, are regular consumers of solar data - specifically on flares, but also on the particle bursts that result from flares. The data is provided through the 24-hour monitoring of the Space Environment Services Center of the National Oceanic and Atmospheric Administration. This is critical because, if conditions are right, energetic particles can saturate the delicate electronic detectors on board a spy or communications satellite. (As occurred with the Telstar 401 in January, 1997).

None of this is mysterious, of course. For years, short-wave fadeouts (known as Dellinger fadeouts) have been associated with the passage of large sunspot groups, with their powerful magnetic fields, near the center of the Sun's disk. Solar flares magnify the effects, especially on electronic detectors aboard aircraft and satellites. In some cases, large flares (with high x-ray output) have been known to cause malfunctions in navigation systems aboard commercial aircraft.

All of these provide compelling reasons to study the Sun - so I've never had any problems explaining why I am "into" solar research - specifically the prediction of flares from sunspots.
Actually, prediction is probably too strong a word. The term generally favored is "forecasting", and that is just about what many solar observers and researchers do: use their data to make as reliable a forecast (say, on flare occurrence) as they can. More often than not, we do not fare any better than weather forecasters.

[1] Menzel, D.: 1958, Our Star the Sun, Harvard University Press, Cambridge, Mass.
[2] Hoyle, F: The Frontiers of Astronomy, Signet Science Books, New York, p. 19.
[3] In fact, there is an easier way. Astronomers use the level playing field called 'absolute magnitude' to compare stars at the same distance. In this scheme, the inverse square law is used to adjust the distance/brightness of all stars to a standard distance of 10 parsecs or 32.6 light years (1 parsec = 3.26 light years). The magnitude scale is really a logarithmic brightness scale, with every 5 magnitudes corresponding to a 100 times brightness difference, and every one magnitude difference corresponding to 2.512 times brightness difference (2.512 being the fifth root of 100).

In this scheme, the Sun's 'absolute magnitude' is rated at +4.8, and that of the dog star Sirius at about +1.4 (not to be confused with its apparent magnitude of -1.46). Since the more positive value refers to a dimmer star, this implies that Sirius is really some 23 times brighter than the Sun (e.g. 2.512 raised to the power (4.8 - 1.4) = 2.512 to the 3.4). Note that 'absolute magnitude' is only meaningful for light sources, e.g. stars - not planets - which are only visible by virtue of reflecting sunlight.
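The magnitude arithmetic in this footnote can be checked with a short sketch that converts any magnitude difference into a brightness ratio:

```python
# Converting a magnitude difference into a brightness ratio, using the
# definition above: 5 magnitudes = a factor of 100 (so 1 magnitude = 2.512x).
def brightness_ratio(m_dimmer, m_brighter):
    """How many times brighter the second source is than the first."""
    return 100 ** ((m_dimmer - m_brighter) / 5.0)

print(round(brightness_ratio(5.0, 0.0)))      # -> 100
print(f"{brightness_ratio(1.0, 0.0):.3f}")    # -> 2.512
```

Writing the base as 100^(1/5) rather than the rounded 2.512 keeps the "5 magnitudes = exactly 100x" convention exact.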
[4] The nuclear burning within stars is nicely discussed in numerous texts I will cite at the conclusion of the series.
[5] See, e.g. Eddy, J.A. 1976, in Science, Vol. 192, p. 1189.
[6] This was discussed in a presentation by solar physicist Hal Zirin at the joint American Geophysical Union - Solar Physics Division of the American Astronomical Society Conference held in Baltimore on Thursday, May 26, 1994. (The Conference, marking the 75th Anniversary of the AGU, lasted from the 24th through the 27th of May). Zirin included a slide showing a transformer power cable - such as used in the Quebec system - with its copper wires melted. (Each copper wire had the thickness of a man's thumb). What happened is that electrically charged particles from the huge flare caused large currents to be induced inside the power lines and transformer wires. The resulting electrical load was simply too much for the circuit conductors to accommodate - something like a fuse blowing in a household circuit.
[7] See: 'A Fickle Sun Could Be Altering Earth's Climate After All', in Science, Vol. 269, (Aug. 4, 1995), p. 633.