Wednesday, December 9, 2015

New Climate Paper Skewers The Trope of "Natural Causes" For Global Warming


Fig. (a) shows global temperature anomalies (1880-2013) as a function of radiative forcing, using CO2 forcing as a linear surrogate. The line has a slope of 2.33 C per CO2 doubling. Fig. (b) shows the residuals from the straight line in (a), which are estimates of the natural variability.


In an audacious new paper appearing in Eos Transactions (1st December, pp. 8-9) author S. Lovejoy of the Dept. of Physics, McGill University, provides a sound basis for "climate closure" in terms of the primary cause of global warming. The paper is timely, given the ubiquitous narrative of the denialists of anthropogenic warming that the Earth has indeed been warming - but not from humans - rather via a "giant fluctuation of solar, nonlinear dynamics that is internal to the atmosphere" or some other natural origin. Of course, this is poppycock: the straws grasped at by desperate people - mainly driven by economics - who have no other out. After all, most people in the world with more than air between the ears already concur that some type of climate havoc is occurring. The only issue is to pin the cause on the right source.

This is where Lovejoy's paper enters. While acknowledging that the effects of climate forcings are "difficult to quantify", he points out that ever since 1880 the forcings have been directly linked to economics. In other words:

"To a good approximation, if you double the world economy (e.g. by increase in GDP %), double the carbon dioxide (CO2), double the methane and aerosol outputs, double the land use changes (i.e. from rural to urban) you get double the warming." 

As he notes, this justifies using the global CO2 forcing since 1880 "as a linear surrogate for all the anthropogenic forcings". With reference to the graphs shown, Figure (a) (he calls it 1(a)) shows the global annual temperature plotted not as a function of the date but as a function of the CO2 forcing. As he points out:

"Even without fancy statistics or special knowledge, it is easy to see that the temperature (plotted in green) increases very nearly linearly with some additional variations" - which represent the "natural variability"  Then the gradient (black) is found to be 2.33 C per CO2 doubling which then is the actual historical increase in temperature arising from the observed increase in CO2. This is the "effective climate sensitivity".

Lovejoy has further put a check on his assumptions, noting that the figure "sits comfortably" within the IPCC range of 1.5 C to 4.5 C per doubling for the "equilibrium climate sensitivity".

This is important because by avoiding many statistical details, Lovejoy has succeeded in presenting a paper that ought to be accessible to anyone who has been exposed to some basic algebra and has only a minimum of physics background.

Then in Figure (b) we see the differences (residuals) between the actual temperature and the specifically anthropogenic part. These residuals are the natural variability. That this is reasonable is confirmed, as Lovejoy notes, because the average amplitude of the residuals (±0.109 C) is "virtually the same" as the errors in 1-year global climate model hindcasts (±0.105 C and ±0.106 C, from Smith et al., Science, 317, 796, 2007, and Laepple et al., Geophys. Res. Lett., 35, L10710, 2008, respectively).

In effect, as Lovejoy observes:

"Knowing only the slope of Figure 1(a) and the global annual CO2, we could predict the global annual temperature for next year to this accuracy. Clearly this residue must be close to the true natural variability."

The range of the straight line in Fig. (a) is thus an estimate of the total anthropogenic warming since 1880: about 1 C. Lovejoy next asks:

"What is the probability the denialists are right and that this is simply a giant natural fluctuation?"

To answer this, Lovejoy compared industrial-era variations to preindustrial ones. Then, applying a Gaussian distribution to the data, he arrives at the result that "the chance of a 1 C fluctuation over 125 years being natural is in the range of 1 in 100,000 to 1 in 3,000,000."

He further notes that "for long periods the standard deviation of temperature differences is twice the 0.1C value. Hence, a 1 C fluctuation is about five standard deviations or a 1 in 3 million chance".

This appears whoppingly improbable, but he reminds us that nonlinear geophysics "tells us the extremes should be stronger than the usual Gaussian (bell) curve allows". In other words, such global fluctuations would be about "100 times more likely than the bell curve would predict".

Factoring this in, we still arrive at a probability of at most 1 in 1,000 that natural causes predominate over anthropogenic ones. Most of us would take those odds - say in a gambling venue - any time we got them.
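For the curious, those odds are easy to reproduce - a rough check, using the 0.2 C standard deviation quoted above and a one-sided Gaussian tail:

```python
# Rough reproduction of the quoted odds: a 1 C fluctuation at sigma = 0.2 C
# is a 5-sigma event. The 100x factor is Lovejoy's fat-tail allowance.
from scipy.stats import norm

sigma = 0.2                      # C ("twice the 0.1 C value")
p_gauss = norm.sf(1.0 / sigma)   # one-sided tail beyond 5 sigma

print(f"Gaussian: about 1 in {1 / p_gauss:,.0f}")                   # ~1 in 3.5 million
print(f"fat-tailed (100x): about 1 in {1 / (100 * p_gauss):,.0f}")  # still rarer than 1 in 1,000
```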

What impressed me about the work and its conclusions is how well they conform with the earlier research into C14:C12 isotope deviations compiled by P.E. Damon ('The Solar Output and Its Variation', The University of Colorado Press, Boulder, 1977). (When the Sun is more active, the heliosphere is stronger, shielding the Earth from more intense cosmic rays. The effect is to reduce the C14 produced in the Earth's upper atmosphere. Conversely, when the Sun is less active - as it was from 2000 to 2008 - the shield is weaker, more intense cosmic rays penetrate to our upper atmosphere, and more C14 is produced.)


Damon's results are shown in the accompanying graph below. To conform with solar activity, the plot is oriented so that increasing radiocarbon (C14) runs downward and is indicated with (+). The deviations, in parts per thousand, are shown relative to an arbitrary 19th century reference level (1890).

[Graph: Damon's C14 deviation record, plotted relative to the 1890 reference level]

As John Eddy observes concerning this record (Eddy, The New Solar Physics, p. 17):

"The gradual fall from left to right (increasing C14/C12 ratio) is…probably not a solar effect but the result of the known, slow decrease in the strength of the Earth's magnetic moment, exposing the Earth to ever-increased cosmic ray fluxes and increased radiocarbon production.

The sharp upward spike at the modern end of the curve, representing a marked drop in relative radiocarbon, is generally attributed to anthropogenic causes - the mark of increased population and the Industrial Age."


Assuming the validity of the arbitrary norm (zero line, or abscissa) for 1890, it is clear that the magnitude of the Middle Ages warming period (relative C14 strength of -18), for example, is less than about half the relative effect attributed mainly to anthropogenic sources in the modern era (-40). Even if one fourth of the latter magnitude is assigned to solar activity (based on a solar variability component detected over 1861-1990 amounting to 0.1-0.5 W/m^2, vs. 2.0 to 2.8 W/m^2 for the heating component arising from greenhouse gas emissions, cf. Martin I. Hoffert et al., Nature, Vol. 401, p. 764), the anthropogenic effect is at least 3/2 times that for the last (exclusively solar) warming period.
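The arithmetic behind that "3/2" figure is straightforward - a back-of-envelope check, with the signs dropped:

```python
# Back-of-envelope check of the C14 ratio argument (parts per thousand).
medieval = 18.0        # Middle Ages warm period deviation
modern = 40.0          # modern-era deviation
solar_share = 0.25     # generous solar fraction per the Hoffert et al. forcings

anthro = modern * (1 - solar_share)                           # 30
print(f"anthropogenic / medieval = {anthro / medieval:.2f}")  # ~1.67, i.e. > 3/2
```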

Is there still a role for natural variability? Of course, and as Lovejoy notes, "without it the warming would have become unrealistically strong".

The problem is that now there are few or no known agents of "natural variability" that can put a cap on warming. The only man-made one we know would work for sure is a "nuclear winter" effect induced by a massive exchange of thermonuclear warheads. But I am sure nobody wants that!

5 comments:

Publius said...

In an audacious new paper appearing in Eos Transactions (1st December, pp. 8-9) author S. Lovejoy

Looks like a rehash of his earlier paper, “Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming”, S. Lovejoy, Climate Dynamics, published online April 6, 2014.

Did you ever consider:
1) The signal you're trying to explain is 0.8 C and the range of variance is 0.4 C?
2) "Average temperature" is . . . of questionable usefulness; some researchers are actually using the "midrange" (the difference between high and low daily temps) and some researchers will calculate it separately for the Northern and Southern hemispheres, then average those two numbers together.
3) The claimed precision of their measurements is 0.01 C; this with only 6,000 land-based measurement stations and extremely poor coverage over the oceans and poles.

This is important because by avoiding many statistical details, Lovejoy has succeeded in presenting a paper that ought to be accessible . . ..

His simple model is
Tglobe(t) = Tanth(t) + Tnat(t) + e(t)

where Tanth(t) is fixed and Tnat(t) is stochastic. I suppose he's trying to fit a mixed model, although he appears to just estimate the mean of Tnat(t) and not the variance. I would expect it to be specified more like Tnat(u(t), v(t)). And he ignores e(t).

Ah, ahem . . . how about some modeling of autoregression and covariance structure. Wait, now the statistics aren't simple.

The problem is that now there are few or no known agents of "natural variability" that can put a cap on warming.

Really, a natural system with unbounded positive feedback?

Copernicus said...

"Really, a natural system with unbounded" positive feedback? "

As Carl Sagan noted, the onset of the Venus tipping point meant that the planet overheated from excess CO2 (added after its carbonate rocks began to expel the gas from overheating), and as it did so the water on the surface evaporated. The vapor would then have filled the atmosphere, making it even more efficient at trapping heat, which would have caused more evaporation (of any oceans), and so on. This is what we mean by a "positive feedback loop". In Sagan's context, "unbounded" means the forcing leads to conditions that can't be reversed.

This concept was well explicated by Sagan in one of his essays, 'Ambush: The Warming of the World', in his book 'Billions and Billions: Thoughts on Life and Death at the Beginning of the Millennium':

"Melting of ice caps (already occurring) results in diminished albedo (reflection of solar radiation back into space), and a darker Earth surface - with more infrared radiation absorbed - reinforcing the tendency while enhancing the melting effect, leading to further darkening of the surface, reduced albedo and more melting."


"Unbounded positive feedback" in a natural system, i.e. for the runaway greenhouse effect, is not that mysterious as we already have the prime example in Venus. The EU Ensembles project also bears this out.

ENSEMBLES is primarily concerned with quantifying the aggressive carbon "mitigation" scenario: what happens, by what time, if we cut CO2 emissions by so much?

Their working scenario thus far (given existing assumptions and variables) leads to a peak CO2-equivalent concentration in the atmosphere of nearly 535 parts per million in 2045, before an eventual proposed 'stabilization'. Even so, the concentration peak is precariously close to what many (e.g. the late Carl Sagan) have claimed is the cusp of the runaway greenhouse effect.

A warning given in the piece, and a cautionary note for all over-simplistic takes, is that while simpler models often give useful results, they almost uniformly show only a modest warming, of say 2 C in the period up to 2100 (or 0.8 C up to now). Once one factors in complexities, for example removing the global dimming factor (which had masked two thirds of the warming), things change, and fast. You now see warming levels ramped up to the 5-6 C range - again, close to what would be expected in the runaway greenhouse scenario.

Another complexity is that although planetary albedo depends primarily on cloud cover, it is the least well studied climatic parameter. Clouds are very poorly parameterized in climate models as a whole. This has led to an ongoing debate over the past five years as to whether the sign of the albedo change is positive or negative. (See e.g. 'Can Earth's Albedo and Surface Temperature Increase Together', EOS, Vol. 87, No. 4, Jan. 24, 2006, p. 37.)

As the authors note, though there is some evidence that Earth's albedo increased from 2000 to 2004, this has NOT led to a reversal in global warming. The authors cite the most up-to-date cloud data, released in August 2005 by the International Satellite Cloud Climatology Project (ISCCP). The data - from a range of meteorological satellites covering the entire Earth - disclose that the most likely reason for the anomaly lies primarily in the redistribution of the clouds.

Thus (ibid.):

“whereas low clouds have decreased during the most recent years, high clouds have increased to a larger extent leading to both an increase in cloud amount AND an increased trapping of infrared radiation.”

In addition, once methane is released en masse from the Arctic permafrost, we will be well on the way to that runaway, "unbounded" effect.

Copernicus said...

I ought to have clearly stated at the outset that the paper - summarized from Eos Transactions - is actually a distillation of THREE previous papers published by Lovejoy, so it is best not to make assumptions or render conclusions before reading ALL the papers. When I referred to "avoiding many statistical details" I meant in the Eos Transactions synopsis.

So before making further comments or criticisms (for me to publish), you will want to check out these two additional papers (in addition to the one you already cited) on which the summary paper is based. (As this is a blog and not an academic forum, I didn't want to inundate general readers with too many details that would only make their eyes glaze over.)


1) Lovejoy, S. (2014b), 'Return Periods of Global Climate Fluctuations and the Pause', Geophysical Research Letters, 41, 4704-4710.

2) Lovejoy, S. (2015), 'Using Scaling for Macroweather Forecasting Including the Pause', Geophys. Res. Lett. (in press).

Hopefully, these additional papers will shed the light needed to make more insightful critical comments.

Copernicus said...


"although he appears to just estimate the mean of Tnat(t) and not the variance. I would expect it to be specified more like Tnat(u(t),v(t)). And he ignores e(t). "


Lovejoy notes that nothing need be assumed in advance about the type or amplitude of the natural variability, so that a simple model may suffice:

Tglobe(t) = Tanth(t) + Tnat(t) + ε(t)

As he writes:

Tglobe is the measured mean global temperature anomaly, Tanth is the deterministic anthropogenic contribution, Tnat is the (stochastic) natural variability (including the responses to the natural forcings) and ε is the measurement error.

He pointedly states: "The latter can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it."

I see no problem with this, given the minuscule magnitude. And he shows in his more recent papers (1,2 previously cited) that ignoring it is a valid approach. To fret about ignoring it amounts to quibbling.
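A quick sanity check on the magnitudes, treating both quoted figures as rough sigma values:

```python
# Why ignoring the measurement error is defensible: its variance is a small
# fraction of the natural-variability residual variance. Figures as quoted.
eps_sigma = 0.03    # K, measurement error (Lovejoy et al., 2013a)
nat_sigma = 0.109   # K, typical residual amplitude (Fig. 1b)

share = (eps_sigma / nat_sigma) ** 2
print(f"error variance is ~{100 * share:.0f}% of residual variance")  # ~8%
```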

As for Tnat, he did make clear that it includes the responses to volcanic, solar, and any other natural forcings, so that Tnat(t) does not represent pure "internal" variability.

The reason for this is that the volcanic, solar, and other natural sources have significantly different forcing levels.

For the volcanic input, for example (cf. 'Volcanic Versus Anthropogenic Carbon Dioxide', Eos Transactions of the American Geophysical Union, Vol. 92, No. 24, June 14, 2011, p. 201), we see:

"anthropogenic CO2 emission rate of 35 gigatons per year is 135 times greater than the 0.26 gigatons per year emission rate for volcanoes, plumes etc. This ratio of 135:1 (anthropogenic to volcanic CO2) is what defines the anthropogenic multiplier, an index of anthropogenic CO2's dominance over volcanic inputs"

Interestingly, the only volcanic event which even came close to human emissions was the eruption of Mt. Pinatubo in the Philippines in 1991. It generated CO2 emission rates of roughly 0.001 to 0.006 gigaton per hour, closely approximating the anthropogenic rate of about 0.004 gigaton per hour (based on 35 gigatons per year). Thus, as the Eos article observes:

"For a few hours individual volcanoes may emit as much or more CO2 than human activities. But volcanic emissions are ephemeral while anthropogenic CO2 is emitted relentlessly from from ubiquitous sources."

In terms of solar inputs, the solar irradiance is most significant.

On average, with such violent inputs (e.g. solar flares, CMEs) smoothed out, the Earth's temperature changes by about 0.07 K (kelvin) over a solar cycle. Compare this to the 0.6 K increase in global temperatures over the past 100 years arising from the human-caused greenhouse effect. Thus, the human component is over 8.5 times greater.

Even if the solar forcing on climate is enhanced by positive feedbacks, the amplification is usually no more than a factor of 2, so that 0.07 K increases become 0.14 K increases. The human component is still more important by a factor of about 4.3.
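Again, simple division confirms the factors:

```python
# Checking the solar-vs-human factors quoted above.
solar_dT = 0.07    # K change per solar cycle
human_dT = 0.6     # K rise over the past century

print(f"human / solar: {human_dT / solar_dT:.1f}")                # ~8.6
print(f"with 2x amplification: {human_dT / (2 * solar_dT):.1f}")  # ~4.3
```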

Bottom line: the residuals that emerge are so small compared to the anthropogenic CO2 forcings that we don't need Lovejoy to compute variances, especially when Tnat(t) "does not represent pure 'internal' variability."

Copernicus said...

"Did you ever consider:
1) The signal you're trying to explain is 0.8 C and the range of variance is 0.4 C? "

That's totally nonsensical! Only the standard deviation (sigma) has *the same dimension as the data* - in this case C (or K). It is also the square root of the variance, so the latter can't possibly have the same units.

Lovejoy clearly notes that a 1 C fluctuation is "about five standard deviations", hence for his data (referencing the original paper) one sigma is about 1 C / 5 = 0.2 C.

The *variance* would then be the square, or 0.04 C^2.
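Or, spelled out:

```python
# Sigma vs. variance, made concrete (values as stated above).
fluctuation = 1.0          # C
sigma = fluctuation / 5    # "about five standard deviations" -> 0.2 C
variance = sigma ** 2      # 0.04 C^2 - note the squared units
print(f"sigma = {sigma} C, variance = {variance} C^2")
```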