Chapter 2
General aspects

1.  Integration methods

To integrate our equations of motion using discrete timesteps - whichever of the three discussed simulation methods we use - we always need an algorithm that approximates these equations in our computer program.
Today there are many different numerical integration methods, each with its own advantages and disadvantages, but many of their features can be described in terms of the errors they introduce into the approximation when we compare them with each other, using one fixed time interval and one fixed number of integration steps.

With numerical integration techniques we always have to deal with two major sources of error: discretisation errors and round-off errors.

First I shall describe some numerical integration methods for the equations of motion of our particles, and then I'll discuss the two major sources of error in more detail.

1.1  Brief discussion of several methods

As said earlier, in my first (somewhat clumsy) multibody simulations I only used Euler and Runge-Kutta 4 for numerical integration. However, they both have some disadvantages: the first has a large discretisation error, so it requires a very small timestep to be accurate enough; the second has smaller discretisation errors, but takes a lot of computations per timestep, which is why the round-off error grows much more rapidly than with Euler.

A few very important remarks first :

It is important to notice that the acceleration depends (only) on the relative particle positions (which is always the case for a conservative force), so if we calculate the acceleration after an update of the positions, we actually use the approximated acceleration at time t+Δt and not at t!

Also, if we update the velocities before updating the positions, we use the velocity at time t+Δt and not at t to find the new positions! However, updating the positions before updating the velocities is no problem, as the velocities do not directly depend on the positions.

So, in a straightforward scheme the most logical order is:

  1. Calculate the new accelerations using the positions

  2. Update the positions using the velocities

  3. Update the velocities using the accelerations

Note however that this doesn't apply to the leapfrog and wobble schemes! In fact these methods make clever use of these dependencies to reduce the error, by deliberately altering their order.

Euler
This method is the simplest one: it only uses the first term of the Taylor expansion (see the Taylor series in section 1.3), so there are only linear terms and only first derivatives in this method. For our type of problems it looks as follows:

s(t+\Delta t) = s(t) + v(t) \cdot \Delta t
v(t+\Delta t) = v(t) + a(t) \cdot \Delta t
(1)
Of course the order may be swapped, but then we have to realize that we actually use the speed at t+Δt for the position calculation.
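
As a concrete illustration, one Euler step for a 2D N-body system could look roughly as follows in C. The Particle struct and the compute_accelerations() routine are hypothetical placeholders for the actual data structures and force calculation; only the update order (accelerations, then positions, then velocities) follows the scheme above.

/* A minimal sketch of one Euler step for a 2D N-body system.
 * Particle and compute_accelerations() are hypothetical stand-ins
 * for the real data structures and force calculation. */
typedef struct {
    double x, y;     /* position  s(t) */
    double vx, vy;   /* velocity  v(t) */
    double ax, ay;   /* acceleration a(t), filled in by the force routine */
} Particle;

void compute_accelerations(Particle *p, int n);   /* assumed to exist elsewhere */

void euler_step(Particle *p, int n, double dt)
{
    compute_accelerations(p, n);           /* a(t) from the current positions */
    for (int i = 0; i < n; i++) {
        p[i].x  += p[i].vx * dt;           /* s(t+dt) = s(t) + v(t)*dt */
        p[i].y  += p[i].vy * dt;
        p[i].vx += p[i].ax * dt;           /* v(t+dt) = v(t) + a(t)*dt */
        p[i].vy += p[i].ay * dt;
    }
}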

Heun
This method - often called the improved Euler method - simply takes the average of the values at t and t+Δt for both the speed and the position update. It is a so-called second order method.

s(t+\Delta t) = s(t) + \Delta t \cdot \frac{v(t) + v(t+\Delta t)}{2}
v(t+\Delta t) = v(t) + \Delta t \cdot \frac{a(t) + a(t+\Delta t)}{2}
(2)
Note that we need to store 2 values for both speed and acceleration for each timestep instead of only one as in Euler, so it requires extra memory to keep the old values as well as the new ones.
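
A sketch of one Heun step in C. The scheme above needs v(t+Δt) and a(t+Δt) before they are known; the usual choice, assumed here, is to predict them with an Euler step first and then apply the averaged correction of equation (2). Particle and compute_accelerations() are hypothetical placeholders, not the report's actual code.

/* A sketch of one Heun step: predict with Euler, then average the
 * slopes at t and t+dt (equation 2). */
#include <stdlib.h>
#include <string.h>

typedef struct { double x, y, vx, vy, ax, ay; } Particle;

void compute_accelerations(Particle *p, int n);   /* assumed force routine */

void heun_step(Particle *p, int n, double dt)
{
    Particle *pred = malloc(n * sizeof *pred);    /* predicted state at t+dt */
    memcpy(pred, p, n * sizeof *pred);

    compute_accelerations(p, n);                  /* a(t) */
    for (int i = 0; i < n; i++) {                 /* Euler predictor */
        pred[i].x  = p[i].x  + p[i].vx * dt;
        pred[i].y  = p[i].y  + p[i].vy * dt;
        pred[i].vx = p[i].vx + p[i].ax * dt;
        pred[i].vy = p[i].vy + p[i].ay * dt;
    }
    compute_accelerations(pred, n);               /* a(t+dt) at predicted positions */

    for (int i = 0; i < n; i++) {                 /* corrector: average both slopes */
        p[i].x  += 0.5 * dt * (p[i].vx + pred[i].vx);
        p[i].y  += 0.5 * dt * (p[i].vy + pred[i].vy);
        p[i].vx += 0.5 * dt * (p[i].ax + pred[i].ax);
        p[i].vy += 0.5 * dt * (p[i].ay + pred[i].ay);
    }
    free(pred);
}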

Runge-Kutta 4
Here we use even more values between t and t+Δt to approximate the next step. It is formally a fourth order method. In the formula below we only use 3 points, but in this case the evaluation at t+½Δt is taken twice, which is just because all functions involved (a(t) and v(t)) only take one argument. In other cases this may not be true and we have to use 4 different points.

s(t+\Delta t) = s(t) + \Delta t \cdot \frac{v(t) + 4\,v(t+\frac{1}{2}\Delta t) + v(t+\Delta t)}{6}
v(t+\Delta t) = v(t) + \Delta t \cdot \frac{a(t) + 4\,a(t+\frac{1}{2}\Delta t) + a(t+\Delta t)}{6}
(3)
Note that this not only requires storage for 3 values of both speed and acceleration, but also the extra calculation of the speed and acceleration at half the timestep - and especially the acceleration calculation is a very time-consuming step in particle simulations.

Leapfrog
This method, which is by far the most popular one for particle simulations, looks like a first order method and also takes the same number of calculations as a first order method, but it can be proven that it is in fact a genuine second order method or even better, at least as good as Heun. Also, under certain conditions the total roundoff error will always be bounded between an upper and a lower limit, so no matter how many iterations we carry out, the roundoff error will never exceed a certain extreme, in contrast with e.g. Euler (see the error propagation discussion in section 1.4).
The clever thing is that the evaluation of speed and position is done at times that are ½Δt apart from each other, so that each can be viewed as an average between its values at t and t+Δt! This trick cancels all the odd terms in the Taylor approximation, leaving only the even terms. For this reason the leapfrog scheme is also said to be ``time-reversible''. The scheme looks as follows:

v(t+\frac{1}{2}\Delta t) = v(t-\frac{1}{2}\Delta t) + a(t) \cdot \Delta t
s(t+\Delta t) = s(t) + v(t+\frac{1}{2}\Delta t) \cdot \Delta t
(4)
In practice this is almost always done by setting the initial velocity to the value w(0) = v(0) − ½Δt·a(0) and then using this new speed instead of the original, so that we can use a scheme very similar to the Euler scheme above:

s(t+\Delta t) = s(t) + w(t) \cdot \Delta t
w(t+\Delta t) = w(t) + a(t+\Delta t) \cdot \Delta t
(5)
Note that the acceleration is now updated between calculating the new positions and the new speeds !!
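
A sketch of the offset-velocity form of equation (5) in C, including the one-time initialisation w(0) = v(0) − ½Δt·a(0). Particle and compute_accelerations() are hypothetical placeholders; note that the acceleration is indeed recomputed between the position update and the speed update.

/* A sketch of the leapfrog scheme in the offset-velocity form of
 * equation (5); vx,vy here hold the offset speed w, not v itself. */
typedef struct { double x, y, vx, vy, ax, ay; } Particle;

void compute_accelerations(Particle *p, int n);   /* assumed force routine */

/* Done once, before the first step: w(0) = v(0) - (dt/2)*a(0). */
void leapfrog_init(Particle *p, int n, double dt)
{
    compute_accelerations(p, n);
    for (int i = 0; i < n; i++) {
        p[i].vx -= 0.5 * dt * p[i].ax;
        p[i].vy -= 0.5 * dt * p[i].ay;
    }
}

void leapfrog_step(Particle *p, int n, double dt)
{
    for (int i = 0; i < n; i++) {          /* s(t+dt) = s(t) + w(t)*dt */
        p[i].x += p[i].vx * dt;
        p[i].y += p[i].vy * dt;
    }
    compute_accelerations(p, n);           /* a(t+dt): note the order! */
    for (int i = 0; i < n; i++) {          /* w(t+dt) = w(t) + a(t+dt)*dt */
        p[i].vx += p[i].ax * dt;
        p[i].vy += p[i].ay * dt;
    }
}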

Wobble
This is a method I ``developed'' myself before I knew about the Leapfrog scheme. However, it isn't too bad: few calculations and fairly accurate, so I'd like to mention it in this report as well, although unfortunately the Leapfrog scheme seems to be better and also a little faster.
The Wobble scheme doesn't use an offset in speed such as in leapfrog, which can make evaluating e.g. energies at a certain time a trifle easier (unless you simply neglect the offset, as I did in most cases ...).
The basic trick is to split the timesteps into two different types: the even ones and the odd ones. For the even timesteps we use a scheme similar to the Euler scheme above, but for the odd ones we reverse the order of the updates for speed and position.
So, for t = (2n+1)·Δt (with n an integer) it looks like:

s(t+\Delta t) = s(t) + v(t) \cdot \Delta t
v(t+\Delta t) = v(t) + a(t+\Delta t) \cdot \Delta t
(6)
And for t = 2n·Δt (n again an integer):

v(t+\Delta t) = v(t) + a(t) \cdot \Delta t
s(t+\Delta t) = s(t) + v(t+\Delta t) \cdot \Delta t
(7)
The accelerations are updated before the scheme itself, so in the odd steps before the position updates and in the even steps before the speed updates.
Note that the Wobble scheme has a better balance and a smaller roundoff error than e.g. the Euler scheme, but it still has larger roundoff errors than the Leapfrog scheme.
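
A minimal sketch of the alternating update order in C. Following the remark above, the accelerations are recomputed from the current positions at the start of every step, and only the order of the position and speed updates alternates between odd and even steps; Particle and compute_accelerations() are hypothetical placeholders, not the report's actual code.

/* A sketch of the alternating Wobble update order: odd steps update
 * positions first, even steps update speeds first. */
typedef struct { double x, y, vx, vy, ax, ay; } Particle;

void compute_accelerations(Particle *p, int n);   /* assumed force routine */

void wobble_step(Particle *p, int n, double dt, long step)
{
    compute_accelerations(p, n);           /* always done before the scheme itself */
    if (step % 2 == 1) {                   /* odd step: positions, then speeds */
        for (int i = 0; i < n; i++) {
            p[i].x  += p[i].vx * dt;
            p[i].y  += p[i].vy * dt;
            p[i].vx += p[i].ax * dt;
            p[i].vy += p[i].ay * dt;
        }
    } else {                               /* even step: speeds, then positions */
        for (int i = 0; i < n; i++) {
            p[i].vx += p[i].ax * dt;
            p[i].vy += p[i].ay * dt;
            p[i].x  += p[i].vx * dt;
            p[i].y  += p[i].vy * dt;
        }
    }
}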

RKWobble
This one uses the same trick as the wobble scheme(s) above, but now with a Runge-Kutta 4 style scheme, instead of a simple first order scheme.

1.2  Using only one equation of motion

It is perfectly possible to use only positions and accelerations and forget about the speeds completely, by eliminating the speed with the help of the position equation. In that case, however, we have to remember the previous positions (instead of the speed) to be able to integrate the second-order equation we then obtain.

For the Euler scheme in equation (1) this second-order form looks like:

\frac{s(t+2\Delta t) - s(t+\Delta t)}{\Delta t} = \frac{s(t+\Delta t) - s(t)}{\Delta t} + a(t) \cdot \Delta t
(8)
which can (by collecting terms and shifting the time by Δt) be rewritten as:

s(t+\Delta t) = 2\,s(t) - s(t-\Delta t) + a(t-\Delta t) \cdot \Delta t^2
(9)

And for the Leapfrog scheme in equation (5) this yields in the same way:

s(t+\Delta t) = 2\,s(t) - s(t-\Delta t) + a(t) \cdot \Delta t^2
(10)

The term s(t) − s(t−Δt) (which is the difference in position!) in fact represents the velocity term (now multiplied by Δt) that we saw in the original schemes.
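
A sketch of the position-only form of equation (10) in C: the speed is eliminated and the previous positions are remembered instead. The extra oldx/oldy fields and the compute_accelerations() routine are hypothetical placeholders.

/* A sketch of the position-only leapfrog form (equation 10).
 * oldx/oldy must be initialised with the positions at t-dt before
 * the first call. */
typedef struct { double x, y, oldx, oldy, ax, ay; } Particle;

void compute_accelerations(Particle *p, int n);   /* assumed force routine */

void position_only_step(Particle *p, int n, double dt)
{
    compute_accelerations(p, n);                  /* a(t) from current positions */
    for (int i = 0; i < n; i++) {
        double newx = 2.0 * p[i].x - p[i].oldx + p[i].ax * dt * dt;
        double newy = 2.0 * p[i].y - p[i].oldy + p[i].ay * dt * dt;
        p[i].oldx = p[i].x;  p[i].oldy = p[i].y;  /* remember s(t) as s(t-dt) */
        p[i].x = newx;       p[i].y = newy;       /* s(t+dt) */
    }
}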

1.3  Discretisation errors

Discretisation errors are caused by the fact that, by using an approximation function to predict the new situation after one integration step, we will always lose information compared with the real situation. The size of the error of course depends on the stepsize: the larger the step, the bigger the error.

But it also depends on the exact form of this approximation function : this is simply a straight line in the case of the Euler method (which is why we call it a first order method), but it is a parabola in the case of the Heun method (called a second-order method) and a 4th-order polynomial in the case of Runge-Kutta 4 (obviously called a fourth order method). In general we can say that the higher the order, the smaller the (discretisation) error.

What makes things worse is that the discretisation error is being propagated: the error made in step 1 will be amplified in step 2 and so on.

The local discretisation error (thus the error for one timestep) is given by the following general formula :

e_{n+1} = \frac{1}{(r+1)!}\,F^{(r+1)}(x_n) \cdot (\Delta t)^{r+1}
(11)

where r is the order of the integration method, F() is the function that gives the analytical solution of the differential equation (so F^(r)(x_n) is the value of the r-th derivative of F at step n), and e is the difference between the value we find at step n+1 using the numerical method and the exact function value at step n+1.

This looks rather horrible at first glance, but it is easier to see if we realize that basically every numerical approximation uses only the first few terms of the infinite Taylor series of F() to achieve the approximation:

F(x_n+\Delta t) = F(x_n) + F^{(1)}(x_n) \cdot \Delta t + \frac{1}{2!}F^{(2)}(x_n) \cdot (\Delta t)^2 + \frac{1}{3!}F^{(3)}(x_n) \cdot (\Delta t)^3 + \ldots
(12)
A first order method only uses the zeroth and first terms of the Taylor series, a second order method the zeroth, first and second terms, and so on. So it must be obvious that the difference between the approximation and the exact value is dominated by the first term of the neglected part of the full Taylor series, i.e. the second term for a first order method, the third for a second order method and so on!

Well, now we know the error made in one step, but of course we are more interested in the total discretisation error (this is the accumulated error after a number of steps) for a certain number of timesteps. Fortunately that looks a little easier:

For p = (t_{n+p} − t_n)/Δt timesteps the total discretisation error is roughly given by the formula:

e_{n+p} \leq p \cdot e_{n+1} = \frac{t_{n+p} - t_n}{\Delta t} \cdot e_{n+1} = c \cdot (\Delta t)^r
(13)

with c being some unimportant constant and r the order of the integration method. The latter part of the equation was found by substituting expression (11) for e_{n+1}.

Note that the order of the global discretisation error is one lower than the order of the local discretisation error. Also, in particle simulation we may say in general that the global discretisation error can never be bounded for an unbounded number of timesteps, due to the nature of the problem.

More information about this subject can be found in many books about differential equations and numerical methods, such as Boyce & DiPrima (``Numerical methods'') and Strauss (``Computation of Solutions'').

1.4  Round-off errors

Apart from discretisation errors, we also see a very different source of errors, introduced by the computer itself. Computers can almost never calculate exact answers: they are only accurate up to a limited number of digits, so every calculation introduces a round-off error. It is obvious that the more calculations we have to make in one timestep, the larger the maximum roundoff error for one timestep will be.

In some cases it is possible that the total round-off error will never exceed a certain maximum: in that case we will call the integration method stable.

In the case of our particle simulations several methods can be stable, provided the timestep is small enough. Stability often depends on the size of the timestep: if the timestep gets too large, otherwise stable schemes become unstable as well.

Let's look at the error propagation for Euler and Leapfrog a little more closely:

1.4.1 Euler error-propagation

We can rearrange the Euler scheme to obtain one second-order difference equation instead of two first-order ones, simply by combining the two first-order equations of the scheme into one, as already shown in equation (9). That equation can be rewritten as:

s(t+\Delta t) - 2\,s(t) + s(t-\Delta t) = a(t-\Delta t) \cdot \Delta t^2
(14)

The round-off error at step n (called e_n) is the difference between the approximation s'_n that we find on the computer (using a limited number of digits) and the exact value s_n (using an infinite number of digits):

e_n = s'_n - s_n

If we substitute this into equation (14) and bear in mind that the acceleration only depends on the positions, we get:

e_{n+1} - 2\,e_n + e_{n-1} = \bigl(a(s_{n-1}+e_{n-1}) - a(s_{n-1})\bigr) \cdot \Delta t^2

which is approximately equal to (approximating the right-hand side using the definition of the derivative):

e_{n+1} - 2\,e_n + e_{n-1} = \frac{d\,a(s_{n-1})}{d\,s_{n-1}} \cdot e_{n-1} \cdot \Delta t^2

Well, this is a standard second-order difference equation. Assuming solutions of the form e_n = a·λ_1^n + b·λ_2^n and replacing the term da(s_{n−1})/ds_{n−1} by its maximum amplitude Ω_E², this yields:

\lambda\,e_n - 2\,e_n + \frac{1}{\lambda}\,e_n = \frac{1}{\lambda}\,e_n \cdot (\Omega_E\,\Delta t)^2

or (dividing by e_n and multiplying by λ):

\lambda^2 - 2\,\lambda + 1 - (\Omega_E\,\Delta t)^2 = 0

This quadratic equation only has complex roots for 4 − 4·(1 − (Ω_E·Δt)²) < 0, i.e. (Ω_E·Δt)² < 0, which of course has no solutions. This implies that the error in the Euler scheme will always be non-periodic, so unless both roots have a magnitude smaller than 1 (in which case the error is damped out) the Euler scheme will always be unstable.

Well, the roots are 1 ± Ω_E·Δt, and they only both lie between −1 and +1 if |1 ± Ω_E·Δt| < 1; as this obviously cannot hold for both roots at the same time, we must conclude that Euler is in our case always unstable.

1.4.2 Leapfrog error-propagation

If we take the Leapfrog scheme from equation (10) we find in the same way as above:

s(t+\Delta t) - 2\,s(t) + s(t-\Delta t) = a(t) \cdot \Delta t^2
(15)
and thus :

e_{n+1} - 2\,e_n + e_{n-1} = \bigl(a(s_n+e_n) - a(s_n)\bigr) \cdot \Delta t^2

which is approximately equal to (approximating the right-hand side using the definition of the derivative):

e_{n+1} - 2\,e_n + e_{n-1} = \frac{d\,a(s_n)}{d\,s_n} \cdot e_n \cdot \Delta t^2

This is again a standard second-order difference equation. Let's assume again solutions of the form e_n = a·λ_1^n + b·λ_2^n and replace the term da(s_n)/ds_n by its maximum amplitude −Ω_L², as we are only interested in this extreme. In the same way as with Euler this yields:

\lambda\,e_n - 2\,e_n + \frac{1}{\lambda}\,e_n = -e_n \cdot (\Omega_L\,\Delta t)^2

or (dividing by e_n and multiplying by λ):

\lambda^2 + \bigl((\Omega_L\,\Delta t)^2 - 2\bigr)\,\lambda + 1 = 0

This quadratic equation has complex roots for ((Ω_L·Δt)² − 2)² − 4 < 0, i.e. (Ω_L·Δt)⁴ − 4·(Ω_L·Δt)² < 0, which equals (Ω_L·Δt)²·((Ω_L·Δt)² − 4) < 0. The first factor is trivial, the second yields |Ω_L·Δt| < 2. As long as this holds the Leapfrog scheme will be stable, because the roots then always lie on the unit circle. Note that Ω_L·Δt can't be negative, so there is no reason to investigate this any further.

For |Ω_L·Δt| > 2 both roots are real; one of them moves from −1 to −∞, while the other moves from −1 to 0, so in this case we always have one root with a magnitude larger than 1. Thus the Leapfrog scheme will be unstable for |Ω_L·Δt| > 2.

Of course this leaves us with the problem of finding Ω_L, but for the moment it is enough to know that under certain conditions this scheme is always stable with respect to the round-off error. So, however much it may look like Euler above, it is definitely different on this point!


Figure 1: Comparison between a stable method with a large roundoff error and an unstable method with a small roundoff error. The tradeoff is at about 24 iterations in this example.

It is important to realize that short simulations may be better carried out with an unstable method that has a small roundoff error over the given interval, while for long simulations stability may be a far more important criterion than a small roundoff error over a certain interval (see figure 1).

2.  How accurate is our simulation ?

It is of course very important to be able to say something about how closely a certain simulation run resembles the true situation. The best thing of course would be to have a system that can be solved analytically, so that we could compare every value at every timestep with the correct value if we wish, but of course it would be pretty useless to carry out a simulation if we already knew the answer!

However, such problems can be used as a kind of testcase for a program, before trying it on less trivial problems. What I often used was a system in which a very light particle runs in a circular orbit around a very heavy particle at rest. Every deviation from the circle is then an indication of certain errors in the program.

What can be very useful too is to apply the basic physical conservation laws to the simulation: at every time both the energy and the momentum in all directions should be conserved, so calculating these at a certain point in the simulation and comparing them with their initial values may give a lot of information about the accuracy.

2.1  Momentum conservation

At every moment the total momentum of the system will be given by:

p_{tot} = \sum_{i=1}^{N} p_i
(16)
Simple, linear or first order integration schemes, combined with an exact algorithm such as Particle-Particle, will in practice always conserve momentum: if particles interact, the force from particle a on particle b will - according to Newton's third law - always be equal but opposite to the force from particle b on particle a. This implies that both also have the same magnitude of error. As these errors are always opposite to each other, they cancel each other when we sum all momenta to find the total momentum!

It is possible to only compare the magnitude of the total momentum, as I did initially, but it is generally better to compare the momentum in every direction separately, as this doesn't take any more time or memory. Due to a bug in the code I swapped the x and y components of the velocity in one case but wasn't able to notice it, as it had no influence on the magnitude of the total momentum! It was only after this that I decided to compare the momentum in every direction separately ...
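
A sketch of such a per-direction momentum check in C; the Particle struct, the tolerance and the warning message are illustrative assumptions.

/* A sketch of a per-direction total momentum check. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, vx, vy, m; } Particle;

void total_momentum(const Particle *p, int n, double *px, double *py)
{
    *px = *py = 0.0;
    for (int i = 0; i < n; i++) {
        *px += p[i].m * p[i].vx;
        *py += p[i].m * p[i].vy;
    }
}

/* Compare against the initial values px0, py0 in each direction separately;
 * the tolerance is an arbitrary illustrative value. */
void check_momentum(const Particle *p, int n, double px0, double py0)
{
    double px, py;
    total_momentum(p, n, &px, &py);
    if (fabs(px - px0) > 1e-9 || fabs(py - py0) > 1e-9)
        printf("warning: momentum drift dpx=%g dpy=%g\n", px - px0, py - py0);
}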

Higher order schemes have more problems in conserving momentum, but the nice thing about leapfrog and also wobble is that - although they are mathematically second order - they have the momentum conservation behaviour of a first order method !

But be careful: it is important to realize that - even if the total momentum is conserved - the individual momenta may still contain a large error! Fortunately, such errors will often show up as a large error in the total energy, so they can still be detected.

2.2  Energy conservation

At every moment the total energy of the system is given by the sum of the kinetic and potential energies. The potential energy is assumed to be zero at infinity and is thus always negative, which yields for a normal 3D system:

E_{tot} = \sum_{i=1}^{N} \frac{1}{2}\,m_i\,v_i^2 \;-\; \sum_{i=1}^{N} \sum_{j \neq i} g \cdot \frac{m_i\,m_j}{|r_{i \to j}|}
(17)

For a 2D system with genuine 2D forces, such as we find for example in a 2D Particle-Mesh simulation (where the force drops as 1/r instead of 1/r²), the pairwise potential energy is proportional to log|r_{i→j}|, resulting in a slightly different formula:

E_{tot} = \sum_{i=1}^{N} \frac{1}{2}\,m_i\,v_i^2 \;-\; \sum_{i=1}^{N} \sum_{j \neq i} g' \cdot m_i\,m_j \cdot \log|r_{i \to j}|
(18)

In contrast with the momentum above, the energy is much less likely to be conserved:

Problems arise with the kinetic energy (the first part in both formulas) as it squares the speed, thus dropping the sign in the error, so that the cancellation that worked so nicely with the momentum doesn't work here anymore. Also the relative error is doubled by taking the square of the speed, which makes things even worse.

For the potential energy term (the second part in the formulas) there is also a problem: as it uses the absolute value of the distance, the sign of the error is dropped here too, causing the same effect as with the kinetic energy above.

Note that the computation of the potential energy is an O(N²) operation. This implies that for large problems (N > 10⁴) it may take a relatively large amount of time, which is why I used an approximation for this value in the case of the analyser program described in chapter .
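
A sketch of the total energy computation of equation (17) in C, for a 2D simulation with the 1/r pair potential. Particle, the constant G and the pair loop (which counts each pair once) are illustrative assumptions; the double loop makes the O(N²) cost mentioned above explicit.

/* A sketch of the total energy computation: kinetic part plus the
 * pairwise 1/r potential (zero at infinity, hence negative). */
#include <math.h>

typedef struct { double x, y, vx, vy, m; } Particle;

double total_energy(const Particle *p, int n, double G)
{
    double e = 0.0;
    for (int i = 0; i < n; i++)                       /* kinetic part */
        e += 0.5 * p[i].m * (p[i].vx * p[i].vx + p[i].vy * p[i].vy);

    for (int i = 0; i < n; i++)                       /* potential part, O(N^2) */
        for (int j = i + 1; j < n; j++) {             /* each pair counted once */
            double dx = p[j].x - p[i].x, dy = p[j].y - p[i].y;
            double r  = sqrt(dx * dx + dy * dy);
            e -= G * p[i].m * p[j].m / r;
        }
    return e;
}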

Furthermore the discretisation error often causes the system to slowly heat up, for reasons not shown in this report. This also contributes to an increasing deviation from the initial energy. Some algorithms solve this heating problem by decreasing the per-particle energy by a small factor after a certain number of timesteps, but as I consider this a very inelegant solution I did not implement it.

So, in general we may say that energy is not conserved, whatever scheme we use. On the other hand, if the deviation is not too large, the simulation may still be quite accurate. Deviations as large as 1-5 % are no problem in many cases.

3.  Long range versus short range forces

It is obvious that a force that drops with the seventh power of the distance will be far less important for far-away particles than a force that drops only with the square of the distance, such as the gravitational force. In fact the influence of the first one may be completely ignored for all particles except the ``nearest neighbours'', while the second may still be felt at a considerably longer distance.

So, we see that the distance over which a force can influence other particles can differ very widely. A force that is only felt at close range makes it possible to use a much faster algorithm by totally neglecting the force at distances larger than a certain threshold. Such a force will be called a short range force from now on. On the other hand a force could work over a very long distance, and such a force is obviously called a long range force.

Special attention should be paid to the density: even if a force drops sharply with the distance it cannot be neglected if the number of far-away particles is much higher than the number of close particles, although in some cases it is possible to take into account only the total influence of many far-away particles, as we do in Tree-code simulations.

Special problems concerning a (more or less) uniform density in 2D and 3D space will be discussed hereafter :

3.1  2D and 3D aspects of uniform density

Consider a system of particles with a uniform density for the moment. This means that the concentration of particles is more or less the same over the whole working area.

Note: in a purely uniform distribution all contributions cancel each other out, as for every force contribution from a certain distance there is another contribution from exactly the opposite side, effectively cancelling the first one. So only at places where the distribution is not purely uniform (i.e. near the edges) does the discussion hereafter hold completely, although what it says about each individual force contribution is always true, though not very useful.

Now let's see what happens to the total force from all particles at a given distance on a certain test particle, if we assume that the force drops with the square of the distance (as is the case with the 3D gravitational force) both for a 2D system and a 3D system.


Figure 2: Influence of particles at a certain distance r → r+dr on the total force.

2D system
Consider the density to be given by ρ = N/A, with N the total number of particles and A the total area of our 2D space. Then we may write for the number of particles in a small band from circle r to circle r+dr:

N_{r \to r+dr} = \rho \cdot dA = \rho \cdot d(\pi r^2) = \rho \cdot 2\pi r \cdot dr
(19)
And for the (3D) force of one particle in such a band on the particle in the centre, assuming all other masses are equal :

F_{r \to r+dr} = -g\,\frac{m_0 \cdot m}{r^2}
(20)
Which gives us for the total force from all particles in a small ring r → r+dr:

F_{tot\,dr} = N \cdot F_{r \to r+dr} = -\frac{g \cdot \rho \cdot 2\pi \cdot m \cdot m_0}{r} \cdot dr
(21)
This gives for the bands r_1 and r_2 in figure 2 respectively:

N_{r_1} = \rho \cdot 2\pi \cdot r_1 \cdot dr
N_{r_2} = \rho \cdot 2\pi \cdot r_2 \cdot dr
So the total forces become respectively (assuming m = m_0):

F_{tot\,dr_1} = N_{r_1} \cdot F_{r_1} = -g \cdot \rho \cdot 2\pi \cdot m^2 \cdot \frac{1}{r_1} \cdot dr
F_{tot\,dr_2} = N_{r_2} \cdot F_{r_2} = -g \cdot \rho \cdot 2\pi \cdot m^2 \cdot \frac{1}{r_2} \cdot dr
If we divide the first equation by the second one, we find an expression for the change of the total band force as a function of the distance of the band:

\frac{F_{tot\,dr_1}}{F_{tot\,dr_2}} = \frac{r_2}{r_1}
(22)
or, in words: for a 2D system the influence of a band at distance r is proportional to the inverse of the distance r to the centre of the band. This implies that the force from far-away particles is less important than that from nearby particles.

3D system
Now consider the density to be given by ρ = N/V, with N the total number of particles and V the total volume of our 3D space. Then we may write for the number of particles in a small volume between the sphere at r and the sphere at r+dr:

N_{r \to r+dr} = \rho \cdot dV = \rho \cdot d\!\left(\frac{4}{3}\pi r^3\right) = \rho \cdot 4\pi r^2 \cdot dr
(23)
Which gives us for the total force of all particles in a small volume r → r+dr (assuming again a gravitational force):

F_{tot\,dr} = N \cdot F_{r \to r+dr} = -g \cdot \rho \cdot 4\pi \cdot m \cdot m_0 \cdot dr
(24)
And this leads of course to :

\frac{F_{tot\,dr_1}}{F_{tot\,dr_2}} = 1
(25)
or, in words: for a 3D system the influence of a band at distance r does not depend on the distance in the gravitational/Coulomb case! This implies that for a uniform density the force contribution of far-away particles is exactly as large as the contribution from nearby particles!

So, a short-range force in a 2D simulation could turn into a long-range force in a 3D system without changing anything else ! This is important to bear in mind if - as I did in this report - only 2D simulations are being considered: in 3D systems with gravitational and electrical forces and a more or less uniform density, the influence of far away particles can never be neglected !

It is obvious that for forces that drop faster than 1/r², the far-away particles can again be safely neglected, and it is equally obvious that in simulations with more than 3 dimensions (whatever that may stand for!) even these faster-dropping forces sometimes can't be ignored anymore.

4.  How to choose the timestep

The exact size of the timestep determines the amount of ``linearisation'' introduced by the numerical integration if compared with the real problem. Of course this shouldn't be too far off from reality. On the other hand: the larger the timestep, the less calculations we need and thus the faster our simulation will be. What we should ask ourselves is how large can we make it without causing serious problems ?

It is obvious that for a particle with a constant speed and no forces working on it we can make our timestep as large as we like. So, the amount of change in speed within one timestep determines how accurate our approach will be: e.g. if the speed doubles within one timestep the results will be very unreliable.

From this it follows that we shouldn't allow the speed to change by more than about 5-10 % in one timestep. But this is impossible for a particle initially at rest! To avoid this problem we'd better use an absolute maximum rather than a relative maximum. We could for example look at the average initial speeds and choose the maximum allowable change according to this average.

Also important is the change in the angle or direction: if the magnitude of the speed hasn't changed it is still possible that the direction has changed considerably. Of course we have the same problem here as above: for a particle initially at rest it is impossible to say something about the relative change in direction.

The easiest thing is to look at the absolute change of speed in each separate direction: in that case we will cover both the magnitude and the direction criteria.

Another aspect in choosing the timestep is the closest distance between 2 particles that we allow (the COLLISIONDETECT macro in the code). If the speed is so large that in one timestep a particle moves more than the collision detect distance, there could have been a collision between 2 particles without any detection, making the new positions meaningless! This is an even bigger problem in the case of our gravitational or Coulomb forces: the interparticle force goes to infinity if the particle distance goes to 0!

So we should never allow the position to change by more than about 10-25 % of the collision detect distance per timestep.
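
A sketch of such a per-step check in C, combining the absolute speed-change criterion with the position-change criterion relative to the collision detect distance. The Particle struct, the limit values and the 25 % factor are illustrative assumptions, not the report's actual constants.

/* A sketch of a per-step timestep sanity check. */
#include <math.h>
#include <stdbool.h>

typedef struct { double x, y, vx, vy, ax, ay; } Particle;

/* Returns false if the chosen timestep is too large for this step. */
bool timestep_ok(const Particle *p, int n, double dt,
                 double max_dv,          /* absolute limit on speed change per direction */
                 double collision_dist)  /* COLLISIONDETECT-style distance               */
{
    for (int i = 0; i < n; i++) {
        double dvx = fabs(p[i].ax) * dt, dvy = fabs(p[i].ay) * dt;
        double dx  = fabs(p[i].vx) * dt, dy  = fabs(p[i].vy) * dt;
        if (dvx > max_dv || dvy > max_dv)
            return false;                       /* speed changes too much        */
        if (dx > 0.25 * collision_dist || dy > 0.25 * collision_dist)
            return false;                       /* particle could skip a collision */
    }
    return true;
}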

It may be a good idea to relate the collision detect distance to the maximum allowable acceleration - e.g. 2-5 times as large - rather than making it a fixed value that can only be changed at compile time.

Another important restriction for the choice of the timestep was already given earlier in this chapter: it should be small enough to prevent the round-off error from becoming unbounded.

4.1  Particle-Mesh considerations

If we use an algorithm where the interparticle distance doesn't play a direct role in the computation of the acceleration - as is the case with the Particle-Mesh method - a minimum collision distance is meaningless.

However, the Particle-Mesh method has another limit instead: a particle should never move more than 10-25 % of the mesh distance in one timestep. This immediately brings us to another problem: if we halve the mesh distance, we should also halve the timestep!

4.2  Variable timesteps

In the discussion above we simply stop the simulation if one of the limits is exceeded. Another possibility would be to make the timestep larger or smaller according to the force at a certain moment and thus keep the change in speed per timestep more or less constant. We could do this for the whole system of particles, but this could easily result in an unacceptable increase of the total running time. However, it is also possible to assign each particle its own individual timestep, which is of course much more efficient, although this results in many practical problems, especially with the Tree-Code and Particle-Mesh codes.

For these reasons I decided to refrain from using a variable timestep. After all one of the goals was to build source code that could be easily understood and adapted by newbies and the implementation of this - although not impossible - would make it far less readable.

5.  Collision management

If particles come very close to each other, their interaction may show a more complex behaviour than the simple gravitational or Coulomb interaction. For example extra, more progressive repulsive components may occur, as is the case with interacting atomic particles.

It is also possible that they collide using the simple mechanical collision laws for either central elastic or central non-elastic collisions as is the case with particles that interact through the gravitational interaction. Partly non-elastic collisions and non-central collisions are very complicated to deal with and are considered to be beyond the scope of this report.

Finally we could ignore all collisional effects if they don't play a significant role, as we do in Particle-Mesh.

Some of these techniques will be discussed briefly hereafter:

5.1  Collisionless systems: Particle-Mesh

As the Particle-Mesh method ignores pairwise interaction (it can only be applied to systems of more or less uniformly distributed particles), it is not possible to deal with collisions here.

In fact we should stop the particle-mesh simulation if too many particles come too close to each other: it is supposed to be used only for (almost) collisionless systems.

Unfortunately, checking the relative positions to determine whether we have to stop can only be done by checking the pairwise interparticle distances, and that was exactly what we wanted to avoid, as it introduces the O(N²) scaling of the Particle-Particle method. This can be improved by only checking particles in the current cell and its neighbours, but even that causes a lot of extra calculations.

This is done in an enhanced version of Particle-Mesh that uses the Particle-Particle approach for particles close to each other: Particle-Particle-Particle-Mesh or P3M. There it would be possible to add collision management, but the P3M method is also beyond the scope of this report.

5.2  Elastic collisions

Elastic collisions occur if the colliding particles do not change their individual mass and shape during the collision: only kinetic energy and momentum are exchanged and nothing else. Note that elastic collisions can only be calculated for central collisions: for non-central collisions we need more information about the change in direction of at least one of the particles.

However, this is in general no problem for 2 different reasons :

Larger problems arise through the necessity of small collision distances: this requires the timestep to be small as well. But this has already been discussed in section 4.

How should we calculate the new speeds after a central elastic collision occurred ?

Although this is plain classical mechanics I'll discuss it briefly hereafter, to save other people the time of either searching for it elsewhere or deriving it themselves:

From the laws of energy and momentum conservation it follows that at every time the total momentum in all directions and the total energy should all be constant. So, for the momentum and energy before and after the time of impact we may write:

m_1 w_{x1} + m_2 w_{x2} = m_1 v_{x1} + m_2 v_{x2}
m_1 w_{y1} + m_2 w_{y2} = m_1 v_{y1} + m_2 v_{y2}
m_1 w_1^2 + m_2 w_2^2 = m_1 v_1^2 + m_2 v_2^2

Here v denotes the speed before the collision and w the speed after the collision.

Also, if we transfer this system to the centre-of-mass system (with speed v_c), the total momentum within this centre-of-mass system must necessarily be zero. This makes the computation quite easy, as from energy conservation it follows that - although the direction of the momentum may change - the magnitude of the momentum stays the same for each individual particle:

w_{x1} - v_{cx} = -(v_{x1} - v_{cx})
w_{y1} - v_{cy} = -(v_{y1} - v_{cy})
w_{x2} - v_{cx} = -(v_{x2} - v_{cx})
w_{y2} - v_{cy} = -(v_{y2} - v_{cy})

Note that in the centre-of-mass system the collision is always a frontal collision ! The centre-of-mass speed vc used in the equations above is given by :

v_{cx} = \frac{m_1 v_{x1} + m_2 v_{x2}}{m_1 + m_2}
v_{cy} = \frac{m_1 v_{y1} + m_2 v_{y2}}{m_1 + m_2}
(26)

All this together yields the equations that enable us to calculate the new speed components after a central elastic collision between two particles:

w_{x1} = \frac{(m_1 - m_2)\,v_{x1} + 2\,m_2\,v_{x2}}{m_1 + m_2}
w_{x2} = \frac{(m_2 - m_1)\,v_{x2} + 2\,m_1\,v_{x1}}{m_1 + m_2}
w_{y1} = \frac{(m_1 - m_2)\,v_{y1} + 2\,m_2\,v_{y2}}{m_1 + m_2}
w_{y2} = \frac{(m_2 - m_1)\,v_{y2} + 2\,m_1\,v_{y1}}{m_1 + m_2}
(27)
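
A sketch in C that applies equation (27) to both velocity components of two colliding particles; the Particle struct is an assumed placeholder.

/* A sketch of a central elastic collision between two particles,
 * directly applying equation (27) to both velocity components. */
typedef struct { double x, y, vx, vy, m; } Particle;

void elastic_collision(Particle *a, Particle *b)
{
    double msum = a->m + b->m;
    double wax = ((a->m - b->m) * a->vx + 2.0 * b->m * b->vx) / msum;
    double way = ((a->m - b->m) * a->vy + 2.0 * b->m * b->vy) / msum;
    double wbx = ((b->m - a->m) * b->vx + 2.0 * a->m * a->vx) / msum;
    double wby = ((b->m - a->m) * b->vy + 2.0 * a->m * a->vy) / msum;
    a->vx = wax;  a->vy = way;    /* new speeds after the collision */
    b->vx = wbx;  b->vy = wby;
}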

5.3  Non-Elastic collisions

Another possible interaction could be that the two colliding particles become ``clogged'' together and continue from then on as if they were one new particle with the total mass, momentum and energy of the old ones. There are 2 ways to achieve this without reducing the number of particles (which is sometimes difficult from a software point of view, especially on a distributed memory machine):

  1. We can give the new particle information to one of the old particles and set the mass and speed of the other one to zero, effectively flagging the last one as being unused.

  2. We can divide the mass equally between the two old particles and give them both the same new speed and position. This ensures they will always stay together as if they were one new particle.

However, I found that from a practical point of view the second method is less attractive, because the relative distance between the two particles is zero, which introduces a singularity in their pairwise potential energy that can easily confuse our computer simulation. For this reason I only used method 1 in practice.

Also note that non-elastic collisions have another impact on the energy bookkeeping: non-elastic collisions do not conserve energy! This implies that checking the total energy before and after the simulation can be pretty useless, unless the internal energy stored in the joined particles is also taken into account.

As for the equations for the non-elastic collision: they are even simpler than those for the elastic one.

The new mass for both particles becomes :

m_1 = m_1 + m_2
m_2 = 0

The new speed can be found by assigning the total momentum to one particle and dividing it by the total mass, while giving the other particle zero speed:

w_{x1} = \frac{m_1 v_{x1} + m_2 v_{x2}}{m_1 + m_2}
w_{x2} = 0
w_{y1} = \frac{m_1 v_{y1} + m_2 v_{y2}}{m_1 + m_2}
w_{y2} = 0
(28)

And the new position is simply that of the centre-of-mass for one and an arbitrary value (0 !) for the other :

u_{x1} = \frac{m_1 s_{x1} + m_2 s_{x2}}{m_1 + m_2}
u_{x2} = 0
u_{y1} = \frac{m_1 s_{y1} + m_2 s_{y2}}{m_1 + m_2}
u_{y2} = 0
(29)
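
A sketch in C of method 1 above, applying equations (28) and (29): one particle receives the combined mass, the centre-of-mass speed and the centre-of-mass position, and the other is flagged as unused; the Particle struct is an assumed placeholder.

/* A sketch of a non-elastic ("clogging") collision using method 1:
 * particle a gets the combined mass, centre-of-mass speed and position,
 * particle b is flagged as unused by zeroing its mass and speed. */
typedef struct { double x, y, vx, vy, m; } Particle;

void inelastic_collision(Particle *a, Particle *b)
{
    double msum = a->m + b->m;
    a->vx = (a->m * a->vx + b->m * b->vx) / msum;   /* total momentum / total mass */
    a->vy = (a->m * a->vy + b->m * b->vy) / msum;
    a->x  = (a->m * a->x  + b->m * b->x ) / msum;   /* centre-of-mass position */
    a->y  = (a->m * a->y  + b->m * b->y ) / msum;
    a->m  = msum;                                   /* combined mass */

    b->m  = 0.0;                                    /* flag b as unused */
    b->vx = b->vy = 0.0;
    b->x  = b->y  = 0.0;                            /* arbitrary value, as in eq. (29) */
}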

Note that the change of mass needs some special arrangements on a distributed memory system: normally only the particle positions need to be updated from the other slaves, but if we allow non-elastic collisions we should also update the particle masses at every iteration step!

5.4  Extra repulsive components

As said already, it is also possible to add an extra repulsive component to the force. This extra component should only become important at short distances, so it should drop faster with the distance than the 1/r² force. To prevent this component from becoming significant at distances of about 1, it also needs a small constant. The total force then becomes something like:

F_{tot\,1 \to 2} = -g \cdot \frac{m_1\,m_2}{r_{1 \to 2}^2} + \frac{g}{c_1} \cdot \frac{m_1\,m_2}{r_{1 \to 2}^{c_2}}
(30)
With c_1 quite large (e.g. > 1000) and c_2 larger than 2 (e.g. 4).
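
A sketch in C of the pair force of equation (30); the function name and its parameters are illustrative assumptions.

/* A sketch of the radial pair force with the extra short-range
 * repulsive component of equation (30); negative means attractive. */
#include <math.h>

double pair_force(double g, double m1, double m2, double r,
                  double c1, double c2)
{
    double attract = -g * m1 * m2 / (r * r);          /* gravitational part     */
    double repulse = (g / c1) * m1 * m2 / pow(r, c2); /* short-range repulsion  */
    return attract + repulse;
}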

The calculation of the extra force component takes some more time if we do it for all particle interactions, but it is perfectly possible to only calculate this for particles that are close to each other, provided the component is small enough. We have to check for this distance anyway, so it virtually doesn't add any extra calculations to our simulation.

Note that this extra component should also be taken into account in the total energy computation (see equation 17), or else problems may arise with energy conservation, although the contribution at larger distances is often negligible.

In practice this method is probably the most widely used type of collision management, although I chose the elastic collision to be the default for my simulations.

