Wednesday, May 2, 2012

The Case for Plasma Numerical Simulations

It is one of the ironies of solar physics that although the object of our enquiry is so overwhelming in its manifest effects on our terrestrial environment, we can only study it from afar. In the best-case scenario, we can launch satellites that deliver excellent solar imagery, or even place a satellite in orbit about the Sun, but in the end we still can't directly weigh and handle solar plasma.

This means that apart from observational data, the only serious way to do solar research is via numerical simulations on a computer. The technique - which can fairly be described as "down and dirty" - is without much finesse: it is more a case of brute force, churning out many different results using discrete volumes appended to the region of interest. Contrary to the beliefs of many outside the space- and solar-physics research community, you can't just grab the "right partial differential equation" and get a nice analytical solution. (It is the very nonlinearity of most of the equations of interest that prohibits this!)

Nope. You have to subject that beautiful equation to a "meat grinder" and arrive at thousands of finite difference equations, obtaining incremental values which are then integrated and evaluated. There is also a wide difference in the complexity of the evaluations and the number of simulations required, say in comparing the space physics environment to the solar physics environment.

Take the case of a simulation of the aurora.

The density of charged particles in space is approximately 1 per cm^3, and for the aurora this may be 10 to 100 per cm^3. Typical dimensions for dynamical space plasma processes (in the magnetosphere, say for the aurora) are a few Earth radii. Considering a box with a base length of 20,000 km, this box would contain about 10^28 charged particles. A setup that reckoned in the behavior of all of these particles would require an improvement in computer memory by a factor of roughly 10^17 to carry out the simulation with the real particle density. Frankly, this would be insuperable.
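The arithmetic behind that estimate can be reproduced in a couple of lines (a back-of-envelope sketch; the box size and density are the values quoted above):

```python
# Reproduce the particle-count estimate from the text: a cubical box of
# edge 20,000 km filled with auroral plasma at ~10 charged particles per cm^3.
edge_cm = 20_000 * 1e5            # 20,000 km expressed in cm
volume_cm3 = edge_cm ** 3         # box volume: 8 x 10^27 cm^3
density_per_cm3 = 10              # auroral charged-particle density
n_particles = density_per_cm3 * volume_cm3
print(f"{n_particles:.0e}")       # of order 10^28, as quoted
```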

While this sounds like an exercise in futility, the fact is that things aren't quite as bad as they seem. What is the basic "trick"? It entails exchanging the individual particle picture for a fluid dynamical one. For a large number of problems the discreteness of the particle dynamics does not matter. In those cases fluid approximations of these systems may yield excellent results for the relevant parameter range. In cases where the particle dynamics is important, one can often re-scale charge and mass of the particles (in other words simulate super-massive particles) and yield the desired effects.
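The rescaling trick can be checked directly: the Lorentz acceleration depends on charge and mass only through the ratio q/m, so scaling both by the same factor (e.g. lumping 10^17 real ions into one super-particle) leaves the single-particle dynamics unchanged. A minimal sketch, with illustrative field and velocity values:

```python
import numpy as np

def lorentz_accel(q, m, E, v, B):
    """Acceleration a = (q/m) * (E + v x B) on a single charged particle."""
    return (q / m) * (E + np.cross(v, B))

# Illustrative (arbitrary-unit) field and velocity values:
E = np.array([0.0, 1.0e-3, 0.0])
B = np.array([0.0, 0.0, 5.0e-5])
v = np.array([1.0e5, 0.0, 0.0])

q, m = 1.602e-19, 1.673e-27        # a real proton
scale = 1e17                        # lump 10^17 real particles into one super-particle

a_real = lorentz_accel(q, m, E, v, B)
a_macro = lorentz_accel(scale * q, scale * m, E, v, B)
print(np.allclose(a_real, a_macro))  # True: q/m, hence the orbit, is unchanged
```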

Ideally suited for the task of fluid treatments is the Boltzmann equation:

@f/@t + v*grad f + F/m * @f/@v = (@f/@t)_C

where each @ denotes a partial derivative, @f/@v is the gradient of f in velocity space, and (@f/@t)_C is the time rate of change in f due to collisions.

Substituting the Lorentz force, F = q(E + v X B), and assuming a sufficiently hot plasma that it is effectively collisionless, the term on the RHS vanishes: (@f/@t)_C -> 0. (Taking velocity moments of the resulting equation, for electrons and ions separately, is what yields the 'two-fluid' description of the medium.)

This is the Vlasov equation:

@f/@t + v*grad f + q/m (E + v X B)*@f/@v = 0

Now, in many cases we're mainly interested in the effects of small-scale turbulence, which manifests as enhanced (turbulent) viscosity and diffusion. Consider this simple diffusion equation, with diffusion coefficient φ, applicable to small scales in the aurora (say for the diffusion of H+ ions):

@f/@t - φ [@^2f/@x^2 + @^2f/@y^2 + @^2f/@z^2] = 0

One numerical simulation method that might be used is called "FTCS" or the forward time centered space scheme. Reducing the diffusion to one dimension (x) and applying finite differences, the preceding partial differential equation is converted to:

f^(n+1)_j = f^n_j + s (delta x)^2 L_xx f^n_j

where L_xx f_j = [f_(j+1) - 2 f_j + f_(j-1)]/(delta x)^2 is the centered second-difference operator, and:

s = φ(delta t) /(delta x)^2

The "truncation error" is then given by:

E^n_j = φ (delta x)^2/2 (s - 1/6) @^4f/@x^4 + O[(delta x)^4]

Thus, at each step the magnitude of the error resulting from truncating the finite difference approximation can be estimated.
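The 1-D FTCS scheme above can be sketched in a few lines of code. This is a minimal sketch: the grid, diffusion coefficient, and initial H+ density profile are illustrative choices, and the well-known stability requirement s <= 1/2 is enforced:

```python
import numpy as np

def ftcs_diffusion_1d(f, phi, dx, dt, nsteps):
    """FTCS update f^(n+1)_j = f^n_j + s*(f_(j+1) - 2 f_j + f_(j-1)), s = phi*dt/dx^2.

    Endpoints are held fixed (Dirichlet boundaries); stable only for s <= 1/2.
    """
    s = phi * dt / dx**2
    assert s <= 0.5, "FTCS is unstable for s > 1/2"
    f = f.copy()
    for _ in range(nsteps):
        f[1:-1] = f[1:-1] + s * (f[2:] - 2.0 * f[1:-1] + f[:-2])
    return f

# Diffuse an initial Gaussian spike of ion density (illustrative units and grid).
x = np.linspace(0.0, 1.0, 51)
dx = x[1] - x[0]
phi = 1.0                      # diffusion coefficient (illustrative)
dt = 0.4 * dx**2 / phi         # gives s = 0.4, inside the stability limit
f0 = np.exp(-((x - 0.5) / 0.05) ** 2)
f1 = ftcs_diffusion_1d(f0, phi, dx, dt, nsteps=200)
print(f1.max() < f0.max())     # True: diffusion flattens the peak
```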

Generalizing the same approach to two dimensions yields:

f^(n+1)_jk = f^n_jk + s_x (delta x)^2 L_xx f^n_jk + s_y (delta y)^2 L_yy f^n_jk


s_x = φ_x (delta t)/(delta x)^2
s_y = φ_y (delta t)/(delta y)^2
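The 2-D version differs only in applying the centered second difference along each axis, with the (standard) stability condition s_x + s_y <= 1/2. A minimal sketch, with an illustrative grid and a point disturbance:

```python
import numpy as np

def ftcs_diffusion_2d(f, phi_x, phi_y, dx, dy, dt, nsteps):
    """2-D FTCS: f^(n+1)_jk = f^n_jk + s_x*(L_xx f) + s_y*(L_yy f), boundaries held at zero."""
    sx = phi_x * dt / dx**2
    sy = phi_y * dt / dy**2
    assert sx + sy <= 0.5, "2-D FTCS stability requires s_x + s_y <= 1/2"
    f = f.copy()
    for _ in range(nsteps):
        dxx = f[2:, 1:-1] - 2.0 * f[1:-1, 1:-1] + f[:-2, 1:-1]
        dyy = f[1:-1, 2:] - 2.0 * f[1:-1, 1:-1] + f[1:-1, :-2]
        f[1:-1, 1:-1] = f[1:-1, 1:-1] + sx * dxx + sy * dyy
    return f

# Spread a point disturbance on a 41 x 41 grid (all values illustrative).
f0 = np.zeros((41, 41))
f0[20, 20] = 1.0
f1 = ftcs_diffusion_2d(f0, phi_x=1.0, phi_y=1.0, dx=1.0, dy=1.0, dt=0.2, nsteps=50)
print(f1[20, 20] < 1.0)        # True: the spike has spread to its neighbors
```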

and, of course, this can also be extended to three dimensions, though the simulation equations increase in complexity. Discretization of parameters via numerical simulation methods always introduces errors, and there are various types of error sources. Typical errors for a discretization are due to:

1- Truncation and rounding.

2- The discrete representation and the limited computer resources (memory and speed).

3- Accumulation of errors in a systematic manner for certain numerical methods.

The way to improve on the results is to reduce the respective error source. This can be done by:

1) Demanding better spatial and/or temporal resolution (smaller grid space, more finite elements, larger number of base functions, smaller time step)

2) A change to a better and more appropriate simulation method (there are many!)

Of course, doing solar simulations is even more difficult because: a) there are many more particles to deal with per cm^3 (more like 10,000,000 vs. 1 - 10), and b) stronger forces act to drive magnetic diffusion and nonlinearity, and they act over much larger regions.

A key point of attack will hinge on what manner of differential equation one is dealing with. One starts with the general template for a second-order partial differential equation in two variables:

A @^2u/@x^2 + B @^2u/@x@y + C @^2u/@y^2 + D @u/@x + E @u/@y + F u + G = 0

B^2 - 4AC < 0 (elliptic eqn)

B^2 - 4AC = 0 (parabolic eqn)

B^2 - 4AC > 0 (hyperbolic eqn)
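The classification is mechanical enough to express directly. A small sketch (the function name `classify_pde` is my own, for illustration):

```python
def classify_pde(A, B, C):
    """Classify A u_xx + B u_xy + C u_yy + (lower-order terms) = 0 via B^2 - 4AC."""
    d = B * B - 4.0 * A * C
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"

print(classify_pde(1, 0, 1))   # Laplace eqn u_xx + u_yy = 0: elliptic
print(classify_pde(1, 0, 0))   # diffusion eqn (u_t = phi*u_xx): parabolic
print(classify_pde(1, 0, -1))  # wave eqn u_xx - u_yy = 0: hyperbolic
```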

More on this in the solar context, next.
