Friday, October 31, 2025

A Deep Dive Into the Brier P-Score In Solar Physics

 The forecasting of solar flares is critical given the range of their terrestrial impacts, from power outages (such as occurred in 1989 in Quebec) to disturbance of aircraft navigation. The Brier P-Score was one of the first methods applied to statistical flare forecast evaluation. It was developed in 1950 as a "proper" assessment technique for probability forecasting. By way of comparison, an "improper" method would be illustrated if a forecaster were to issue a 'no flare' forecast (say for major flares) every day of the year and only 5 events occurred. Then, by improperly counting the 'no flare' days as successes, a 99 percent success rate could be arrived at.

The standard Brier P-Score is defined (as I showed in my first statistical flare forecasting paper, published in The Journal of the Royal Astronomical Society of Canada):

P  =  (1/M) Σ_i Σ_k (f_ik − O_ik)²

where P is the verification score, M is the number of forecasts made, k indexes the categories for each forecast occasion, and f_ik is the forecast probability, with range 0 to 1, in each category.

The observation is denoted by O_ik and may be zero (0) (event i in category k does not occur) or 1 (event does occur). Mathematically, the smaller the forecaster's score the greater his skill, since there is less difference between what is forecast and what is observed (the squared term above).
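As a quick illustration (this is my sketch, not code from the paper, and the probabilities and outcomes below are made-up values), the definition above can be computed directly:

```python
def brier_p_score(forecasts, outcomes):
    """Brier (1950) P-score: P = (1/M) * sum_i sum_k (f_ik - O_ik)^2.
    forecasts[i][k] is the forecast probability for category k on
    occasion i; outcomes[i][k] is 1 if that category occurred, else 0.
    Smaller scores indicate greater skill (0 = perfect forecast)."""
    M = len(forecasts)
    return sum(
        sum((f_ik - o_ik) ** 2 for f_ik, o_ik in zip(f_row, o_row))
        for f_row, o_row in zip(forecasts, outcomes)
    ) / M

# Hypothetical two-category (flare / no-flare) forecasts over 3 days:
f = [[0.8, 0.2], [0.1, 0.9], [0.5, 0.5]]
o = [[1, 0], [0, 1], [0, 1]]
print(round(brier_p_score(f, o), 3))   # 0.2 for these made-up values
```

Note that a forecaster who always hedges at 0.5/0.5 accumulates 0.5 per occasion here, while a perfect forecast scores 0, which is exactly why the score is "proper": hedging cannot beat honest probabilities.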

A more refined modification due to Saunders (1963) takes more factors into account than the simplified version above, but we will focus on the simpler version.

 Now, as to a specific application. Consider the interval April 5-11, 1980, when I made ex post facto predictions that were later checked using the P-Score. The results are tabulated below, and these are for "major SID flares", i.e. flares that produced an SID event, or sudden ionospheric disturbance, of at least importance '2' on a 0-2 scale.

 

Date:   4/5   4/6   4/7   4/8   4/9   4/10  4/11

Obs.:   2     1     2     0     0     2     0

Pred.:  0     1     2     1     0     1     0

f_ik:   0     0.2   0.5   0.2   0     0.1   0     (Σ = 1.0)

 

A P-score of 0.48 resulted from this example. Again, this is raw and just to show how the basic score works; as I noted, there are ways to refine it. More recently, Simon and Smith (Solar-Terrestrial Predictions Proceedings, 1979, Vol. II, p. 311) have noted that forecast accuracy can be fundamentally limited by Poisson statistics, i.e. the type that yield the Poisson distribution:

P_N  =  e^(−λ) λ^N / N!

where P_N is the probability corresponding to N flare days of the observed magnetic class (N = 0, 1, 2, etc.) and λ is the mean number of flares per day per magnetic class. Then the expected frequency of N flare days is found from:

E(d_N)  =  P_N Σ d_N

where E(d_N) is the expected number of N flare days, and the summation refers to the total number of recorded flare days for the particular magnetic class. For any given magnetic class the extent of agreement between observed and expected flare days is calculated from:

χ²  =  Σ [O(d_N) − E(d_N)]² / E(d_N)
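The two formulas above can be sketched in Python (the tally of flare days used here is a hypothetical example of mine, purely for illustration):

```python
import math

def poisson_p(N, lam):
    """P_N = e^(-lam) * lam^N / N!"""
    return math.exp(-lam) * lam ** N / math.factorial(N)

def expected_counts(observed):
    """observed[N] = recorded number of N-flare days for one magnetic
    class.  Returns E(d_N) = P_N * sum(d_N), the Poisson-expected
    counts, with lam estimated as the mean number of flares per day."""
    total = sum(observed)
    lam = sum(N * d for N, d in enumerate(observed)) / total
    return [poisson_p(N, lam) * total for N in range(len(observed))]

def chi_square(observed, expected):
    """chi^2 = sum over N of [O(d_N) - E(d_N)]^2 / E(d_N)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical tally: 20 days with 0 flares, 10 with 1, 4 with 2, 1 with 3
obs = [20, 10, 4, 1]
exp = expected_counts(obs)
print(round(chi_square(obs, exp), 2))   # ~0.46: a close Poisson fit
```

A small χ² relative to the number of categories indicates the observed flare-day distribution is consistent with a Poisson process.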

It is possible that, had such considerations been applied to the example above, the P-score would have been significantly improved, since fewer predicted flares would have been assigned on those days when fewer occurred.

My second paper (published in Solar Physics, 1984) examined the specific statistics pertaining to frequency of occurrence and associated intensity. This began with using the Poisson equation for probability:

P_N  =  e^(−λ) λ^N / N!

In this paper I applied a further index of goodness of fit, obtained by comparing the statistical moments μ_n with the values predicted for the theoretical Poisson distribution. The moments about the mean (λ) are then given by:

μ_n  =  Σ_j f_j (N_j − λ)^n / Σ_j f_j

where n = 2, 3, 4, etc. and f_j (j = 1, 2, ... k) = f(N_o) denotes the observed distribution of N flare days for the observed magnetic class. For n = 2, for example, we obtain μ₂ = σ², the mean squared deviation from the mean (variance), which is a measure of the spread of f(N_o); for n = 3 we obtain μ₃, the mean cubed deviation from the mean, i.e. a measure of the skewness of f(N_o).

For a theoretical Poisson distribution of the form:

P_N  =  e^(−λ) λ^N / N!

we expect μ₂ = λ and skewness α = μ₃ / μ₂^(3/2) = 1/√λ.

But if these are appreciably different from the observed values a modified form of the theoretical Poisson distribution must be used, with adjusted parameters ξ/η and (λ + λ/η). As with the theoretical Poisson form, the goodness of fit may be assessed by using the χ² distribution.
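The moment-based goodness-of-fit check can likewise be sketched (again with a made-up tally of N-flare days; μ₂ and μ₃ denote the second and third moments about the mean):

```python
def moments_about_mean(freq):
    """freq[N] = observed number of N-flare days, i.e. f(N_o).
    Returns (lam, mu2, mu3): the mean, variance, and third moment
    about the mean of the observed distribution."""
    total = sum(freq)
    lam = sum(N * f for N, f in enumerate(freq)) / total
    mu2 = sum(f * (N - lam) ** 2 for N, f in enumerate(freq)) / total
    mu3 = sum(f * (N - lam) ** 3 for N, f in enumerate(freq)) / total
    return lam, mu2, mu3

# For a true Poisson distribution, mu2 = lam and mu3 = lam, so the
# skewness alpha = mu3 / mu2**1.5 should equal 1/sqrt(lam).
freq = [20, 10, 4, 1]          # hypothetical tally of N-flare days
lam, mu2, mu3 = moments_about_mean(freq)
alpha = mu3 / mu2 ** 1.5
print(round(lam, 2), round(mu2, 2), round(alpha, 2))
```

Here μ₂ = 0.64 against the Poisson expectation μ₂ = λ = 0.6, so for this made-up tally the unmodified Poisson form would already be an acceptable fit.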

Suffice it to say, the preceding statistical aspects were critical in disclosing the need to incorporate a flare trigger to account for the different SID effects. Among the major findings on analysis were that: (i) subflares, with typical energy 10^29 erg, were the major producers of SID flares, and (ii) 35% of the major SID flares (greatest geo-effective impacts) were optical subflares.

These results in turn disclosed the basis for a Poisson-based "delay time" and magnetic free energy (MFE) buildup preceding geo-effective solar flares, paving the way for a flare trigger.  Thereby it was shown how the flare distribution actually corresponds to a time-dependent Poisson process of the form:

P(t)  =  e^(−λ) λ^t / t!

where theoretically the Poisson mean rate of occurrence is λ_m = λΔt, with Δt = t, assuming the time interval Δt = 1 day. Since magnetogram measurements of solar active regions (sunspot groups) will not generally be made at exactly the same time each day, in practice Δt ≠ 1 day, so Δt ≠ t, thereby introducing a selection-effect variability. It is this inherent variability which opens the door, as it were, to the need for the modified Poisson distribution.

If the MFE buildup was large but the energy release (triggering) was 'premature' (t << t', the time of prediction), a subflare could then occur, but still with terrestrial effects (e.g. short-wave fadeouts or SWFs). If the MFE buildup was large and triggering was delayed enough to discharge most or all of it, then major impacts occurred, such as powerful magnetic (auroral) substorms.

These consequences were first postulated by me (Proceedings of the Second Caribbean Physics Conference, Ed. L.L. Moseley, pp. 1-11.) to account for the intermittent release of magnetic free energy in large area sunspots,  using:

∂/∂t [ ∫_V (B²/2μ) dV ]  =  (1/μ) ∫_V div[(v × B) × B] dV  −  ∫_V η_an J_ms² dV

where η_an is the anomalous resistivity given by Chen (1974) [i]:

η_an  =  4π ν_eff / ω_e²

where ν_eff is the effective collision frequency, ω_e is the electron plasma frequency, and J_ms is the current density at marginal stability of the magnetically unstable region. Bear in mind that the (v × B) × B terms reference relative footpoint motion within the large active region.

The plasma response to the rotary motion is accounted for by a (−J·E) term (equivalently, an E·J term). The change in total energy over a defined volume V may then be written (using the appropriate identities of curl and div):

∫_V [∂ε/∂t] dV  =  ∫_V [E·curl H − H·curl E] dV  −  ∫_V [J·E] dV

This work led directly to one of the first semi-successful uses of the Brier P-score to predict flare occurrence [ii], followed by publication of the key statistical results in the Meudon Solar-Terrestrial Predictions Proceedings [iii].

 

See Also:

Why Space Weather Is Still "Something of a Black Box"

And:

New Solar Research Confirms Why Delta Sunspots Are More Flare Worthy Than Other Magnetic Classes

And:

Analysis of Helicity Variation Via Collision of 2 Solar Loops In Relative Proximity (Pt. 1)


And:

https://www.ams.org/journals/notices/202510/noti3267/noti3267.html?adat=November%202025&trk=3267&pdfissue=202510&pdffile=rnoti-p1137.pdf&cat=none&type=.html&utm_source=Informz&utm_medium=email&utm_campaign=Informz%20Mailing&_zs=Lq5BH1&_zl=r2kt7


Spherical Trigonometry from Vector Dot Products Applied To The Celestial Sphere



 The approach to the celestial sphere in many standard college courses (for astronomy majors) often begins with the analysis of spherical triangles such as the one depicted below:

Fig. 1: Spherical triangle with vector directions to reference circle

The fundamental approach also requires examining the relationship between angles derived as functions of vector dot products and cross products - which we will examine in this post.

Examples:

î × ĵ  =  (sin c) m̂,  where m̂ is the unit vector perpendicular to both î and ĵ

î × k̂  =  (sin b) n̂,  where n̂ is the unit vector perpendicular to both î and k̂

Applying the vectors shown:

(î × ĵ) · (î × k̂)  =  sin c sin b cos A

î · [ĵ × (î × k̂)]  =  î · [(ĵ·k̂)î − (î·ĵ)k̂]  =  cos a − cos b cos c

Such that the fundamental formula of spherical trigonometry is arrived at:

cos a =  cos b cos c +  sin b sin c cos A

Which can be applied to any spherical system, whether for Earth or the celestial sphere.

Further:  sin A  = sin a sin B/ sin b

Two companion formulas are also easy to retrieve:

i) cos b = cos a cos c + sin c sin a cos B

ii)  cos c = cos a cos b + sin a sin b cos C
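The law of cosines can be verified numerically straight from the vector definitions (a sketch using an arbitrary, hypothetical triangle of my choosing):

```python
import math

def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def norm(p):
    return math.sqrt(dot(p, p))

def radial(lat, lon):
    """Unit vector from the sphere's centre to (lat, lon) in radians."""
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# Three arbitrary vertices on the unit sphere (hypothetical triangle):
A = radial(0.9, 0.1)
B = radial(0.2, 1.1)
C = radial(-0.3, 0.4)

a = math.acos(dot(B, C))   # side a is opposite vertex A, and so on
b = math.acos(dot(A, C))
c = math.acos(dot(A, B))

# Vertex angle A from the cross products, as in the derivation above:
cosA = dot(cross(A, B), cross(A, C)) / (norm(cross(A, B)) * norm(cross(A, C)))

lhs = math.cos(a)
rhs = math.cos(b) * math.cos(c) + math.sin(b) * math.sin(c) * cosA
print(abs(lhs - rhs) < 1e-12)   # True: the identity holds to rounding error
```

The check works because (î × ĵ)·(î × k̂) equals both sin c sin b cos A and cos a − cos b cos c, exactly as derived above.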

These in tandem set the stage for application to a spherical reference frame:

Fig. 2: Celestial sphere showing basic planes, orientations.

The line EC follows the z-axis from the Earth's center directed outwards as shown. The plane DBA is called the "fundamental plane", to which EC is normal (i.e. at 90°). Similarly, EA and EB mark the x and y axes, respectively. We now apply this to the celestial sphere:

This will allow use of our fundamental formula, which we call the "law of cosines" for spherical triangles. Now we turn to Fig. 3, a celestial sphere application, in which the spherical trig relations yield an astronomical measurement.


Using the angles shown in Fig. 3, each of the quantities in the law of cosines (given above) can be found. They are as follows:

cos a = cos (90° − δ), where δ = declination

cos b = cos (90° − Lat), where 'Lat' denotes the latitude. (Recall from Fig. 1 that if φ is the polar distance, which can also be the zenith distance, then φ = 90° − Lat.)

cos c = cos z, where z here is the zenith distance.

sin b = sin (90° − Lat)

sin c = sin z

and finally cos A, where A is the azimuth (it enters the formula directly).


Example Problem:

Let's say we want to find the declination of a star if the observer's latitude is 45° N, the azimuth of the star is measured to be 60°, and its zenith distance is z = 30°. Then one would solve for cos a:

cos a = cos (90° − δ) = cos (90° − Lat) cos z + sin (90° − Lat) sin z cos A

cos (90° − δ) = cos (90° − 45°) cos 30° + sin (90° − 45°) sin 30° cos 60°

And:

cos (90° − δ) = cos 45° cos 30° + sin 45° sin 30° cos 60°

We know, or can use tables or a calculator to find:

cos 45° = √2/2

cos 30° = √3/2

sin 45° = √2/2

sin 30° = 1/2

cos 60° = 1/2

Then:

cos (90° − δ) = (√2/2)(√3/2) + (√2/2)(1/2)(1/2)

cos (90° − δ) = √6/4 + √2/8 = (2√6 + √2)/8

cos (90° − δ) = 0.789

so that 90° − δ = arccos(0.789) ≈ 37.9°, giving δ ≈ 52.1°.
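The worked example can be checked with a few lines of Python (a sketch; the variable names are mine):

```python
import math

lat, A, z = 45.0, 60.0, 30.0     # degrees: latitude, azimuth, zenith distance
r = math.radians

# Law of cosines for the astronomical triangle:
# cos(90 - dec) = cos(90 - lat) cos z + sin(90 - lat) sin z cos A
cos_term = (math.cos(r(90 - lat)) * math.cos(r(z))
            + math.sin(r(90 - lat)) * math.sin(r(z)) * math.cos(r(A)))
dec = 90 - math.degrees(math.acos(cos_term))
print(round(cos_term, 3), round(dec, 1))   # 0.789 and the declination
```

Running this reproduces cos (90° − δ) = 0.789 and hence the star's declination of about 52.1°.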

Suggested Problems:

1) Show that: 

a) î × ĵ = (sin c) m̂, with m̂ perpendicular to both î and ĵ

b) î × k̂ = (sin b) n̂, with n̂ perpendicular to both î and k̂


2) In a spherical triangle ABC, C = 90°, a = 119° 30', and B = 52.5°. Calculate the values of b, c and A.

3) The altitude of a star as it transits your meridian is found to be 45° along the vertical circle at azimuth 180° (the south point). Find the declination of the star.


Wednesday, October 29, 2025

Basic Mensa Algebra Problem

 

This is a fairly basic (and general) algebra problem involving powers and multiplication of like factors.

Problem: Which digits n (0-9) have a power x (greater than one) that is a sequence of the digit n? That is:

n^x  =  nnn . . . n   (x > 1)
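One way to explore the problem before reasoning it out is a brute-force search (a sketch of mine, with an arbitrary exponent cutoff, not part of the original puzzle):

```python
def repdigit_powers(max_exp=20):
    """Brute-force search: digits n whose power n**x (x > 1) is written
    as a run of two or more copies of the digit n itself."""
    hits = []
    for n in range(10):
        for x in range(2, max_exp + 1):
            s = str(n ** x)
            if len(s) > 1 and set(s) == {str(n)}:
                hits.append((n, x))
    return hits

print(repdigit_powers())
```

Running the search up to a modest exponent bound is a quick way to form a conjecture about the answer before attempting a proof.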

 

Climate Study Addition Of "57 Superhot" Days A Year Bears Further Analysis (Also Chat GPT's Input)

 

                      Arctic zonal map from 9 years ago showed beginning of superhot days a year


A recent (Oct. 18) Denver Post article proclaimed 'The World is on a path to add 57 superhot days a year'. This scenario, based on a cited climate study, assumes countries fulfill their promises (from the 2015 Paris Agreement) to curb CO2 emissions so that by the year 2100 the planet warms by only 2.6 C (4.7 F) above pre-industrial times. In that case "57 superhot days would be added to what the Earth gets now".

This is according to computer simulations released by climate scientists belonging to World Weather Attribution (WWA) and the U.S.-based Climate Central. (The AP report notes the study is yet to be peer-reviewed "but uses established techniques for climate attribution".) As the Post account from the AP also noted (quoting the study authors):

Superhot days are defined for each location as days that are warmer than comparable dates between 1991 and 2020

This makes sense given the past ten years, from 2015 to 2024, have been the hottest on record, with 2024 being the warmest year overall, according to scientific and weather organizations.   Indeed, "since 2015 the world has added 11 superhot days on average."

The report also doesn't say exactly how many people will be adversely affected, but Climate Central VP for Science Kristina Dahl (a report co-author) warns:

"There will be pain and suffering because of climate change. But if you look at this difference - between 4C of warming and 2.6C of warming - that is encouraging"

She was referring to the two scenarios: 2.6 C (4.7 F) of warming and 4 C (7.2 F) of warming, the latter taken to be the path the world was on before the Paris Agreement. In other words, the study's authors have assumed our planet is and will remain on the less disastrous climate path. However, others disagree, and we note, from the article:

"The 2015 Paris Agreement has made strides against climate change but its too little and too slow".

A take with which I concur.  Further, lamebrained voters in the US of A put Trump back in power again even knowing he would set us back with his yen for fossil fuels.  See e.g.

Brane Space: Sad To Say: Trump EPA & Radical Supreme Court Are Determined To Turn Earth Into Another Hellish Venus

Given this, I firmly believe the worst IPCC scenario, not even referenced in the study, needs consideration. 

 This is the IPCC SSP5-8.5 scenario, projecting an increase of 5.7 C (10.3 F) by the end of the century.

To find an estimate of the superhot days that would be added per year, say for the above SSP5-8.5 scenario, I tasked Chat GPT with the computations while using the basic data from the climate report.  Below are the results found:

If global warming follows the high-emissions SSP5-8.5 pathway (about 5.7 °C by 2100 relative to pre-industrial), a simple extrapolation of the Climate Central / WWA results implies roughly ~180–195 extra “superhot” days per year on average worldwide by the end of the century — a best central estimate of about ≈188 extra superhot days per year (added to what we get today).

 How I got that (step-by-step, with the report’s numbers)

The joint report gives two key points (all relative to today’s climate):

  • At 2.6 °C global warming: +57 extra hot days / year (on average).
  • At 4.0 °C global warming: +114 extra hot days / year (on average).

You asked about SSP5-8.5 ≈ 5.7 °C (IPCC AR6 gives a 5–95% SSP5-8.5 range up to ~5.7 °C for 2081–2100).

Two reasonable ways to extrapolate from the report’s numbers:

Method A — linear extrapolation using the (2.6 °C, +57) and (4.0 °C, +114) points

1.      Difference in extra hot days: 114 − 57 = 57 days.

2.     Temperature difference between those points: 4.0 − 2.6 = 1.4 °C.

3.     Slope ≈ 57 ÷ 1.4 = 40.7142857… extra hot days per °C.

4.     From 4.0 °C up to 5.7 °C is +1.7 °C, so add 1.7 × 40.7142857 ≈ 69.21 days to 114, giving ≈ 183 extra days.

Method B — use the report’s baseline “today” (present ≈ 1.3 °C) and the 2.6 point

1.      Present ≈ 1.3 °C (report's stated present warming).

2.     Increase from present to 2.6 °C = 2.6 − 1.3 = 1.3 °C, which corresponds to +57 days, so the slope is 57 ÷ 1.3 = 43.8461538 days/°C.

3.     Increase from present to 5.7 °C = 5.7 − 1.3 = 4.4 °C, so 4.4 × 43.8461538 ≈ 192.92 extra days.

The two extrapolations give ≈183 and ≈193 extra hot days; averaging (and allowing a sensible uncertainty band for nonlinearity and model spread) yields a central estimate ≈188 extra hot days with a plausible range of ~180–195 (I round conservatively to ~180–200 to reflect additional uncertainties).
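The two extrapolations can be reproduced in a few lines (a sketch of the arithmetic above, not the report's or ChatGPT's own code):

```python
def extra_days_linear(t1, d1, t2, d2, target):
    """Linear extrapolation of extra 'superhot' days per year from two
    (warming in C, extra days) anchor points to a target warming level."""
    slope = (d2 - d1) / (t2 - t1)          # extra days per degree C
    return d2 + slope * (target - t2)

# Method A: anchors (2.6 C, +57 d) and (4.0 C, +114 d), target 5.7 C
a = extra_days_linear(2.6, 57, 4.0, 114, 5.7)

# Method B: anchors (1.3 C present, +0 d) and (2.6 C, +57 d)
b = extra_days_linear(1.3, 0, 2.6, 57, 5.7)

print(round(a), round(b), round((a + b) / 2))   # 183 193 188
```

The two methods bracket the same ballpark, which is why the central estimate of roughly 188 extra superhot days is quoted.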

Important caveats & context

  • The Climate Central / WWA report provides robust, multi-model averaged results for 2.6 °C and 4.0 °C scenarios but does not report a 5.7 °C case explicitly — my answer is an extrapolation. The underlying relationship is roughly linear between the two reported points, which is why linear extrapolation is a reasonable first approximation, but it may not capture non-linear physical or regional effects at very high warming.
  • The numbers are global averages across countries. Individual countries (especially small island or tropical nations) can see much larger increases (the report shows many places would receive well over 100 extra hot days under 2.6 °C and far more at 4 °C). So local outcomes can be very different from the global mean.
  • “Superhot days” here = days hotter than the 90th percentile of comparable dates in 1991–2020 (the report’s definition). The human impacts (health, labor, mortality) scale with intensity, duration, vulnerability and adaptation — so the raw added-days number understates the complexity of real harms.
  • SSP5-8.5 itself is a high-end emissions pathway; IPCC AR6 reports a wide model range for that pathway (multi-model range up to about 5.7 °C by 2081–2100). Use of SSP5-8.5 implies very large additional warming and thus very large increases in extreme heat days.

Bottom line

If the world follows a very high-emissions SSP5-8.5 path (≈5.7 °C by 2100), a conservative extrapolation of the Climate Central / WWA analysis points to on the order of ~180–200 extra superhot days per year on average globally (central estimate ≈ 188 extra days relative to today). That would turn large parts of the calendar into “superhot” conditions for many populations — a dramatic and dangerous shift.

The results here from the A.I. are stunning, but pretty well in line with what I already expected from analyses I've done based on the net heating rate (due to CO2 concentration) of 2.7 W/m² per year.


See Also:

by Karl Grossman | July 31, 2025 - 5:00am | permalink

“Trump’s EPA to repeal core of greenhouse gas rules,” was the Reuters headline this week as Lee Zeldin, chosen by Donald Trump to be administrator of the U.S. Environmental Protection Agency, announced what Reuters said “will rescind the long-standing finding that greenhouse gas emissions endanger human health, as well as tailpipe emission standards for vehicles, removing the legal foundation of greenhouse gas regulations across industries.”

“Zeldin announced the agency’s plan to rescind the ‘endangerment finding’ at a truck factory in Indiana, alongside Energy Secretary Chris Wright, and called it the largest deregulatory action in U.S. history,” reported Reuters.

The move was anticipated.

» article continues...

And:

As CO2 Concentration Hits A New Record The Threat Of Tipping To A More Hostile Global Climate Has Increased

And:

Kids Born Since 2020 Are Justified In Being Terrified By The Coming Climate Catastrophe (New Nature Research)

And:

Smoke-filled Air, Ochre Skies Provide Preview Of Life At Cusp Of Runaway Greenhouse Effect

And:

New UN Report Issues "Code Red" For Humanity On Climate - Is It Hyperbole?

And:

Bjorn Lomberg - Climate Change Clown Has No Clue Concerning Adaptation To A Rapidly Warming World 

And: