Nonreflecting boundaries for a wave equation simulation - physics

I am implementing a simulation of the wave equation using an array to discretely model a spatial region in which waves can propagate. Currently, waves reflect off the boundaries of the spatial region. However, I want to eliminate this reflection so that waves appear to propagate off forever.
I am aware there are many academic papers discussing nonreflecting / absorbing boundary conditions (e.g. perfectly matched layers?), but most seem to focus on analytic solutions. I cannot figure out how to implement nonreflecting boundaries numerically in my simulation. This is the code I am writing:
for (var i = 1; i < width - 1; ++i) {
    for (var j = 1; j < height - 1; ++j) {
        // Discrete Laplacian via centered second differences
        var d2f_dx2 = f[i + 1][j] - f[i][j] * 2 + f[i - 1][j];
        var d2f_dy2 = f[i][j + 1] - f[i][j] * 2 + f[i][j - 1];
        var d2f_dt2 = c2[i][j] * (d2f_dx2 + d2f_dy2);
        df_dt[i][j] += d2f_dt2;
    }
}
for (var i = 1; i < width - 1; ++i) {
    for (var j = 1; j < height - 1; ++j) {
        f[i][j] += df_dt[i][j];
    }
}
where f is the field, df_dt is the partial derivative of the field with respect to time, d2f_dt2 is the second partial derivative of the field with respect to time, d2f_dx2 is the second partial derivative of the field in the x direction, and d2f_dy2 is the second partial derivative of the field in the y direction.
Does anyone know how I can adjust this code to have nonreflecting boundaries?

After clearing a few 25 year old cobwebs, the solution to your problem will depend on setting up the equations to satisfy the following initial conditions and condition at infinity. It has been far too long for me to translate the initial and infinite boundary conditions into the partial differential equation and then into code for you, but knowing the correct boundary conditions to apply will give you the numerical model you are trying to create. Hopefully this will help.
For the undamped non-reflecting condition, the boundary value problem you are looking to model is described in the Wikipedia article you cite, in the last paragraph of the Sturm-Liouville formulation section. The Sturm-Liouville formulation itself may not provide the proper model, but the boundary conditions discussed in the last paragraph under that heading are the ones you must satisfy. The derivation is explained in a single dimension, but as noted in the article, the numerical solution for the one-dimensional problem can be extended to any number of dimensions.
Boundary Conditions for Undamped Infinite Propagation
boundary value at t=0 == value at t=infinity after X whole periods, where
y = Asin(Bx - C) + D or y = Acos(Bx - C) + D.
The solutions f(x,t) and f(y,t) will be periodic trigonometric functions, with the waves propagating on to infinity. If you think about it, the conditions are clear. At any point in time, the wave you wish to describe is simply an undamped periodic harmonic that will be modeled by a sine, cosine, etc. The only difference in the description of the wave at any point in time will be amplitude and phase as it cycles through a normal period. Which trigonometric function satisfies your initial condition will depend on the phase angle and shift at time t=0. The boundary condition as time approaches infinity will be that same function after a whole number of periods is complete.
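For a directly implementable alternative: a widely used discrete technique for this problem is the first-order Mur absorbing boundary condition, which advances each edge cell with a one-way wave equation so that outgoing waves leave the grid instead of reflecting. A minimal sketch (Python; it assumes a scheme that keeps the field at two time levels, with f_old the field before the interior update and f_new after it, wave speed c, time step dt, and grid spacing dx; all names are illustrative):

def apply_mur_left(f_new, f_old, c, dt, dx, height):
    # First-order Mur absorbing boundary (left edge shown; the other
    # three edges are handled symmetrically).
    k = (c * dt - dx) / (c * dt + dx)
    for j in range(height):
        # Discrete one-way wave equation at i = 0: outgoing waves pass
        # through the edge instead of reflecting back into the domain.
        f_new[0][j] = f_old[1][j] + k * (f_new[1][j] - f_old[0][j])

The cancellation is exact only for waves hitting the boundary head-on, which is why perfectly matched layers exist for the harder, oblique-incidence cases.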


Big O: What is the name for the complexity O(a * b)?

I am new to studying the Big O notation and have thought of this question. What is the name for the complexity O(a * b)? Is it linear complexity, polynomial, or something else? The code for the implementation is below.
function twoInputsMult(a, b) {
    for (let i = 0; i < a; i++) {
        for (let j = 0; j < b; j++) {
            // do something
        }
    }
}
Edit: According to the course I'm going through, it is not n^2 or quadratic, since the two loops are bounded by two different numbers.
O(ab) is just O(ab). Technically, ab is a multivariate polynomial of 2nd degree. But this is not equivalent to a quadratic polynomial, such as a².
If you know more about a and b, you may be able to deduce more about their relationship. For instance, if a = O(b), then O(ab) = O(b²), which is quadratic. On the other hand, if a is a constant, then we can reduce it to O(b), which is linear.
Notice, by the way, that O(a + b) is just O(max(a, b)).
And if the real world interests you, I might also mention that both of these complexity classes show up a lot e.g. in graph theory, where we have the number of vertices |V| and the number of edges |E|, and typically |E| = O(|V|²) but not necessarily. For instance, Depth-first search has a time complexity of O(|V| + |E|), which just means that it is linear in terms of whichever there is more of: vertices or edges.
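To make the O(a + b) case concrete, here is the sequential counterpart of twoInputsMult (sketched in Python; the loop bodies are placeholders):

def two_inputs_add(a, b):
    for i in range(a):
        pass  # do something: runs a times
    for j in range(b):
        pass  # do something: runs b times
    # Total work is a + b steps, so O(a + b) = O(max(a, b)),
    # unlike the nested version, which does a * b steps.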

Self-Correcting Probability Distribution - Maintain randomness, while gravitating each outcome's frequency towards its probability

This is a common problem when you want to introduce randomness, but at the same time you want your experiment to stick close to the intended probability distribution, and cannot / do not want to rely on the law of large numbers.
Say you have programmed a coin with a 50-50 chance for heads / tails. If you simulate it 100 times, most likely you will get something close to the intended 50-50 (a binomial distribution centered at 50-50).
But what if you wanted similar certainty for any number of repeats of the experiment?
A client of ours asked us this ::
We may also need to add some restrictions on some of the randomizations (e.g. if spatial location of our stimuli is totally random, the program could present too many stimuli in some locations and not very many in others. Locations should be equally sampled, so more of an array that is shuffled instead of randomization with replacement).
So they wanted randomness they could control.
Implementation details aside (arrays, vs other methods), the wanted result for our client's problem was the following ::
Always have as close to 1 / N of the stimuli in each of the N potential locations, yet do so in a randomized (hard-to-predict) way.
This is commonly needed in games (when distributing objects, characters, stats, ..), and I would imagine many other applications.
My preferred method for dealing with this is to dynamically weight the intended probabilities based on how the experiment has gone so far. This effectively moves us away from independently drawn variables.
Let p[i] be the wanted probability of outcome i
Let N[i] be the number of times outcome i has happened up to now
Let N be the sum of N[] for all outcomes i
Let w[i] be the correcting weight for i
Let W_Max be the maximum weight you want to assign (i.e. when an outcome has occurred 0 times)
Let P[i] be the unnormalized probability for i
Then p_c[i] is the corrected probability for i
p[i] is fixed and provided by the design. N[i] is an accumulation - every time i happens, increment N[i] by 1.
w[i] is given by
w[i] = CalculateWeight(p[i], N[i], N, W_Max)
{
    if (N == 0) return 1;
    if (N[i] == 0) return W_Max;
    intended = p[i] * N
    current = N[i]
    return intended / current;
}
And P[i] is given by
P[i] = p[i] * w[i]
Then we calculate p_c[i] as
p_c[i] = P[i] / sum_j(P[j])
And we run the next iteration of our random experiment (sampling) with p_c[i] instead of p[i] for outcome i.
The main drawback is that you trade unpredictability for control: after 4 tails in a row, it's highly likely you will see a head.
Note 1 :: The described method will provide at any step a distribution close to the original if the experiment's results match the intended results, or skewed towards (away from) outcomes that have happened less (more) than intended.
Note 2 :: You can introduce a "control" parameter c and add an extra step.
p_c2[i] = c * p_c[i] + (1-c) * p[i]
For c = 1, this defaults to the described method; for c = 0 it defaults to the original probabilities (independently drawn variables).
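Putting the steps above together, a minimal sketch in Python (the function name corrected_choice and the default W_Max value are mine; c is the control parameter from Note 2):

import random

def corrected_choice(p, counts, w_max=10.0, c=1.0):
    # One draw of the self-correcting distribution described above.
    # p: intended probabilities p[i]; counts: times each outcome happened.
    n = sum(counts)
    weights = []
    for pi, ni in zip(p, counts):
        if n == 0:
            w = 1.0
        elif ni == 0:
            w = w_max
        else:
            w = (pi * n) / ni       # intended count / actual count
        weights.append(pi * w)      # P[i] = p[i] * w[i]
    total = sum(weights)
    p_c = [wi / total for wi in weights]  # normalized p_c[i]
    # Blend with the original probabilities via the control parameter.
    p_c2 = [c * pc + (1 - c) * pi for pc, pi in zip(p_c, p)]
    i = random.choices(range(len(p)), weights=p_c2)[0]
    counts[i] += 1
    return i

For the coin example: with counts = [0, 0], repeated corrected_choice([0.5, 0.5], counts) calls drift back toward 50-50 whenever one side runs ahead.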

naudio SineWaveProvider32 gives clicks when changing Amplitude

I am using naudio with SineWaveProvider32 code directly from http://mark-dot-net.blogspot.com/2009/10/playback-of-sine-wave-in-naudio.html to generate
sine wave tones. The relevant code in the SineWaveProvider32 class:
public override int Read(float[] buffer, int offset, int sampleCount)
{
    int sampleRate = WaveFormat.SampleRate;
    for (int n = 0; n < sampleCount; n++)
    {
        buffer[n + offset] =
            (float)(Amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate));
        sample++;
        if (sample >= sampleRate) sample = 0;
    }
    return sampleCount;
}
I was getting clicks/beats every second, so I changed
if (sample >= sampleRate) sample = 0;
to
if (sample >= (int)(sampleRate / Frequency)) sample = 0;
This fixed the clicks every second (so that "sample" was always relative to a zero-crossing, not the sample rate).
However, whenever I set the Amplitude variable, I get a click. I tried setting it only when buffer[] was at a zero-crossing, thinking that a sudden jump in amplitude might be causing the problem. That did not solve the problem. I am setting the Amplitude to values between 0.0 and 0.25.
I tried adjusting the latency and number of buffers as suggested in NAudio change volume in runtime, but that had no effect either.
My code that changes the Amplitude:
public async void play(int durationMS, float amplitude = .25f)
{
    PitchPlayer pPlayer = new PitchPlayer(this.frequency, amplitude);
    pPlayer.play();
    await Task.Delay(durationMS / 2);
    pPlayer.provider.Amplitude = .15f;
    await Task.Delay(durationMS / 2);
    pPlayer.stop();
}
The clicks are caused by a discontinuity in the waveform. This is hard to fix in a class like this, because ideally you would slowly ramp the volume from one value to the other. This can be done by modifying the code to have a target amplitude; then, if the current amplitude is not equal to the target amplitude, you move towards it by a small delta amount calculated each time through the loop. So over a period of, say, 10 ms, you move from the old to the new amplitude. But you'd need to write this yourself unfortunately.
For a similar concept where the frequency is being changed gradually rather than the amplitude, take a look at my blog post on portamento in NAudio.
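For illustration, here is the ramping idea as a minimal sketch (Python rather than C#, and not NAudio's API; the class and parameter names are mine). The generator keeps a current and a target amplitude and moves the current value a small, bounded step toward the target on every sample:

import math

class RampedSine:
    def __init__(self, frequency, sample_rate=44100, ramp_seconds=0.01):
        self.frequency = frequency
        self.sample_rate = sample_rate
        self.amplitude = 0.0   # current amplitude, updated per sample
        self.target = 0.25     # set this from outside; no click results
        # Per-sample step so a full 0-to-1 swing takes ramp_seconds.
        self.delta = 1.0 / (ramp_seconds * sample_rate)
        self.phase = 0.0

    def read(self, n):
        out = []
        step = 2 * math.pi * self.frequency / self.sample_rate
        for _ in range(n):
            # Move the amplitude a small, bounded step toward the target.
            diff = self.target - self.amplitude
            self.amplitude += max(-self.delta, min(self.delta, diff))
            out.append(self.amplitude * math.sin(self.phase))
            self.phase = (self.phase + step) % (2 * math.pi)
        return out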
Angular speed
Instead of frequency it is easier to think in terms of angular speed. How much to increase the angular argument of a sin() function for each sample.
When using radians for the angle, one period completing a full circle is 2*pi, so the angular velocity of one Hz is (2*pi)/T = (2*pi)/(1/f) = f*2*pi = 1*2*pi [rad/s].
The sample rate is in [samples per second] and the angular velocity is in [radians per second] so to get the [angle per sample] you simply divide angular velocity by sample rate to get [radians/second]/[samples/second] = [radians/sample].
That is the number by which to continuously increase the angle of the sin() function for each sample - no multiplication is needed.
To sweep from one frequency to another you simply move from one angular increment to another in small steps over a number of samples.
By sweeping between frequencies there will be a continuous chain of adjacent samples, and the transient is spread out smoothly over time.
Moving from one amplitude to another can likewise be spread out over multiple samples to avoid sharp transients.
Fading in and out by incrementally adjusting the amplitude at the start and end of a sound is more graceful than stepping the output from one level to another in a single sample.
Sharp steps produce rings on the water that propagate out in the world.
About sin() calculations
For speedy calculations it may be better to rotate a vector with the length of the amplitude, calculating sn = sin(delta), cs = cos(delta) only when the angular speed changes (see the Wikipedia article on rotation matrices for the theory):
where amplitude^2 = x^2 + y^2, each new sample can be calculated as:
px = x * cs - y * sn;
py = x * sn + y * cs;
To increase the amplitude you simply multiply px and py by a factor, say 1.01. To make the next sample you set x = px, y = py and run the px, py calculation again, with cs and sn staying the same the whole time.
Either py or px can be used as the signal output; they are 90 deg out of phase.
On the first sample you can set x=amplitude and y=0.
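As a minimal sketch of this rotating-vector oscillator (Python; names are mine):

import math

def rotating_vector_sine(frequency, amplitude, sample_rate, n):
    # Generate n sine samples by rotating a 2-D vector; sin/cos are
    # computed once and reused for every sample.
    delta = 2 * math.pi * frequency / sample_rate  # angle per sample
    sn, cs = math.sin(delta), math.cos(delta)
    x, y = amplitude, 0.0  # start at phase 0
    out = []
    for _ in range(n):
        x, y = x * cs - y * sn, x * sn + y * cs  # rotate by delta
        out.append(y)  # y is the sine output; x is 90 deg ahead
    return out

Over many samples the vector's length can drift from the exact amplitude due to rounding, so renormalizing it occasionally is worthwhile.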

Find global maximum in the least number of computations

Let's say I have a function f defined on interval [0,1], which is smooth and increases up to some point a after which it starts decreasing. I have a grid x[i] on this interval, e.g. with a constant step size of dx = 0.01, and I would like to find which of those points has the highest value, by doing the smallest number of evaluations of f in the worst-case scenario. I think I can do much better than exhaustive search by applying something inspired with gradient-like methods. Any ideas? I was thinking of something like a binary search perhaps, or parabolic methods.
This is a bisection-like method I coded:
from math import floor

def optimize(f, a, b, fa, fb, dx):
    if b - a <= dx:
        return a if fa > fb else b
    else:
        m1 = 0.5 * (a + b)
        m1 = _round(m1, a, dx)
        fm1 = fa if m1 == a else f(m1)
        m2 = m1 + dx
        fm2 = fb if m2 == b else f(m2)
        if fm2 >= fm1:
            return optimize(f, m2, b, fm2, fb, dx)
        else:
            return optimize(f, a, m1, fa, fm1, dx)

def _round(x, a, dx, right = False):
    return a + dx * (floor((x - a) / dx) + right)
The idea is: find the middle of the interval and compute m1 and m2, adjacent grid points on either side of it. If the function is increasing there, recurse on the right interval, otherwise on the left. Whenever the interval is too small, just compare the values at its ends. However, this algorithm still does not use the strength of the derivatives at the points I computed.
Such a function is called unimodal.
Without computing the derivatives, you can work by:
finding where the deltas f(x[i+1]) - f(x[i]) change sign, by dichotomy (the deltas are positive, then negative after the maximum); this takes log2(n) comparisons, and this approach is very close to what you describe;
adapting the golden-section method to the discrete case; it takes logφ(n) comparisons (φ ≈ 1.618).
Apparently the golden section is more costly, as φ < 2, but actually the dichotomic search takes two function evaluations at a time, hence 2*log2(n) = log√2(n).
One can show that this is optimal, i.e. you can't go faster than O(log(n)) for an arbitrary unimodal function.
If your function is very regular, the deltas will vary smoothly. You can think of interpolation search, which tries to better predict the searched position by linear interpolation rather than simple halving. In favorable conditions, it can reach O(log(log(n))) performance. I don't know of an adaptation of this principle to the golden-section search.
Actually, linear interpolation on the deltas is very close to parabolic interpolation on the function values. The latter approach might be the best for you, but you need to be careful about the corner cases.
If derivatives are allowed, you can use any root-solving method on the first derivative, knowing that there is an isolated zero in the given interval.
If only the first derivative is available, use regula falsi. If the second derivative is available as well, you may consider Newton's method, but prefer a safe bracketing method.
I guess that the benefits of these approaches (superlinear and quadratic convergence) are somewhat negated by the fact that you are working on a grid.
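For reference, a minimal sketch of the delta-sign dichotomy for values on a grid (Python; the helper name is mine, and each step spends the two function evaluations noted above):

def argmax_unimodal(f, lo, hi):
    # Peak index of a unimodal sequence f(lo..hi): bisect on the sign of
    # the delta f(m+1) - f(m), which is positive before the maximum and
    # negative after it.
    while lo < hi:
        m = (lo + hi) // 2
        if f(m + 1) > f(m):  # still on the increasing side
            lo = m + 1
        else:                # at or past the peak
            hi = m
    return lo

# For the grid in the question: argmax_unimodal(lambda i: f(x[i]), 0, len(x) - 1)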
DISCLAIMER: I haven't tested the code. Take this as an "inspiration".
Let's say you have the following 11 points
x, f(x) = (0,3), (1,7), (2,9), (3,11), (4,13), (5,14), (6,16), (7,5), (8,3), (9,1), (10,-1)
You can do something inspired by the bisection method:
a = 0, f(a) = 3 | b = 10, f(b) = -1 | c = (0+10)/2 = 5, f(5) = 14
From here you can see that the function is increasing on [a,c[, so there is no need to look there for the maximum: we know the function is increasing on that whole interval. The maximum has to be in the interval [c,b]. So at the next iteration you change the value of a such that a = c:
a = 5, f(a) = 14 | b = 10, f(b) = -1 | c = (5+10)/2 = 7, f(7) = 5
This time the function is decreasing at c (f(7) = 5 < f(6) = 16), so the maximum lies in [a,c] and b is moved to c instead.
You can iterate the process until a = b = c.
Here is the code that implements this idea:
#include <stdio.h>

int main(){
    #define STEP (0.01)
    #define SIZE (1/STEP)
    double vals[(int)SIZE];
    // Sample a unimodal test function on the grid.
    for (int i = 0; i < SIZE; ++i) {
        double x = i*STEP;
        vals[i] = -(x*x*x*x - (0.6)*(x*x));
    }
    for (int i = 0; i < SIZE; ++i) {
        printf("%f ", vals[i]);
    }
    printf("\n");
    int a = 0, b = (int)SIZE - 1, c;
    double fa = vals[a], fb = vals[b], fc;
    c = (a+b)/2;
    fc = vals[c];
    while (a != b && b != c && a != c) {
        printf("%i %i %i - %f %f %f\n", a, c, b, vals[a], vals[c], vals[b]);
        if (fc - vals[c-1] > 0) { // is the function increasing at c?
            a = c;
        } else {
            b = c;
        }
        c = (a+b)/2;
        fa = vals[a];
        fb = vals[b];
        fc = vals[c];
    }
    printf("The maximum is at %i (x = %f) with value %f\n", c, c*STEP, vals[c]);
}
Find points where the derivative df/dx = 0.
For the derivative you could use a five-point stencil or a similar algorithm.
This should be O(N).
Then fit those points (where the derivative is 0) with polynomial regression / least-squares regression.
This should also be O(N), assuming the points are neighbours.
Then find the top of that curve.
This shouldn't be more than O(M), where M is the resolution of trials for the fit function.
While taking the derivative, you could leap by k-length steps until the derivative changes sign.
When the derivative changes sign, take the square root of k and continue in the reverse direction.
When the derivative changes sign again, take the square root of the new k and change direction again.
Example: leap by 100 elements, find a sign change, set the leap to 10 and reverse direction; at the next change ==> leap = 3... then it can be fixed to 1 element per step to find the exact location, as sketched below.
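A rough sketch of that leap-and-shrink scan (Python; assumes unimodal values on a grid, and the starting leap of 100 is just the example's value):

import math

def leap_search(vals, k=100):
    # Leap k elements at a time while values keep rising; on each
    # overshoot (value drops or the edge is hit), reverse direction and
    # shrink the leap to its square root.
    i, step, direction = 0, k, 1
    while step > 1:
        j = min(max(i + direction * step, 0), len(vals) - 1)
        if j != i and vals[j] >= vals[i]:
            i = j                                  # still climbing
        else:
            direction = -direction                 # overshot the peak
            step = max(1, int(round(math.sqrt(step))))
    # Finish with single steps to land exactly on the maximum.
    while i + 1 < len(vals) and vals[i + 1] > vals[i]:
        i += 1
    while i - 1 >= 0 and vals[i - 1] > vals[i]:
        i -= 1
    return i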
I am assuming that the function evaluation is very costly.
In the special case that your function can be approximately fitted with a polynomial, you can calculate the extremum with very few function evaluations. And since you know that there is only one maximum, a polynomial of degree 2 (quadratic) might be ideal.
For example: if f(x) can be represented by a polynomial of some known degree, say 2, then you can evaluate your function at any 3 points and calculate the polynomial coefficients using Newton's divided differences or Lagrange interpolation.
Then it's simple to solve for the maximum of this polynomial. For degree 2 you can easily get a closed-form expression for the maximum.
To get the final answer you can then search in the vicinity of the solution.
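For the degree-2 case, the closed form fits in a few lines; a sketch in Python (the function name is mine; the coefficients are the standard Lagrange-form ones):

def parabola_peak(x1, y1, x2, y2, x3, y3):
    # Vertex of the parabola through three points: the closed-form
    # extremum for a degree-2 model.
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)  # x-coordinate of the vertex

Evaluating f at three spread-out grid points and then searching the grid near parabola_peak(...) implements the "search in the vicinity" step above.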

Calculating 2D resultant forces for vehicles in games

I am trying to calculate the forces that will act on circular objects in the event of a collision. Unfortunately, my mechanics is slightly rusty, so I'm having a bit of trouble.
I have an agent class with members
vector position // (x,y)
vector velocity // (x,y)
vector forward // (x,y)
float radius // radius of the agent (all circles)
float mass
So if we have A, B : Agent, in the next time step the velocity is going to change the position, and if a collision is going to occur I want to work out the force that will act on the objects.
I know Line1 = (B.position - A.position) is needed to work out the angle of the resultant force, but how to calculate it is baffling me when I have to take into account the current velocity of the vehicle along with the angle of collision.
arctan(L1.y, L1.x) is an angle for the force (direction can be determined)
sin/cos give the height/width of the components
Also I know to calculate the rotated axis I need to use
x = cos(T)*vel.x + sin(T)*vel.y
y = cos(T)*vel.y - sin(T)*vel.x
This is where my brain can't cope anymore.. Any help would be appreciated.
As I say, the aim is to work out the vector force applied to the objects as I have already taken into account basic physics.
Added a little pseudocode to show where I was starting to go with it:
A, B : Agent
Agent {
    vector position, velocity, forward;
    float radius, mass;
}
vector dist = B.position - A.position;
float distMag = dist.magnitude();
if (distMag < A.radius + B.radius) { // collision
    float theta = arctan(dist.y, dist.x);
    float sine = sin(theta);
    float cosine = cos(theta);
    vector newAxis = new vector;
    newAxis.x = cosine * dist.x + sine * dist.y;
    newAxis.y = cosine * dist.y - sine * dist.x;
    // Converted velocities
    vector[] vTemp = { new vector(), new vector() };
    vTemp[0].x = cosine * agent.velocity.x + sine * agent.velocity.y;
    vTemp[0].y = cosine * agent.velocity.y - sine * agent.velocity.x;
    vTemp[1].x = cosine * current.velocity.x + sine * current.velocity.y;
    vTemp[1].y = cosine * current.velocity.y - sine * current.velocity.x;
Here's to hoping there's a curious maths geek on stack..
Let us assume, without loss of generality, that we are in the second object's reference frame before the collision.
Conservation of momentum:
m1*vx1 = m1*vx1' + m2*vx2'
m1*vy1 = m1*vy1' + m2*vy2'
Solving for vx1', vy1':
vx1' = vx1 - (m2/m1)*vx2'
vy1' = vy1 - (m2/m1)*vy2'
Secretly, I will remember the fact that vx1'*vx1' + vy1'*vy1' = v1'*v1'.
Conservation of energy (one of the things elastic collisions give us is that angle of incidence is angle of reflection):
m1*v1*v1 = m1*v1'*v1' + m2*v2'*v2'
Solving for v1' squared:
v1'*v1' = v1*v1 - (m2/m1)*v2'*v2'
Combine to eliminate v1':
(1+m2/m1)*v2'*v2' = 2*(vx2'*vx1+vy2'*vy1)
Now, if you've ever seen a stationary poolball hit, you know that it flies off in the direction of the contact normal (this is the same as your theta).
v2x' = v2'*cos(theta)
v2y' = v2'*sin(theta)
Therefore:
v2' = 2/(1+m2/m1)*(vx1*cos(theta)+vy1*sin(theta))
Now you can solve for v1' (either use v1' = sqrt(v1*v1 - (m2/m1)*v2'*v2') or solve the whole thing in terms of the input variables).
Let's call phi = arctan(vy1/vx1). The angle of incidence relative to the tangent line to the circle at the point of intersection is 90-phi-theta (pi/2-phi-theta if you prefer). Add that again for the reflection, then convert back to an angle relative to the horizontal; call the resulting angle psi = 180-phi-2*theta (pi-phi-2*theta in radians). Or,
psi = (180 or pi) - arctan(vy1/vx1) - 2*arctan(dy/dx)
So:
vx1' = v1'*sin(psi)
vy1' = v1'*cos(psi)
Consider: if these circles are supposed to be solid 3D spheres, then use a mass proportional to radius-cubed for each one (note that the proportionality constant cancels out). If they are supposed to be disklike, use mass proportional to radius-squared. If they are rings, just use radius.
Next point to consider: Since the computer updates at discrete time events, you actually have overlapping objects. You should back out the objects so that they don't overlap before computing the new location of each object. For extra credit, figure out the time that they should have intersected, then move them in the new direction for that amount of time. Note that this time is just the overlap / old velocity. The reason that this is important is that you might imagine a collision that is computed that causes the objects to still overlap (causing them to collide again).
Next point to consider: to translate the original problem into this problem, just subtract object 2's velocity from object 1 (component-wise). After the computation, remember to add it back.
Final point to consider: I probably made an algebra error somewhere along the line. You should seriously consider checking my work.
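In that spirit, here is an independent minimal sketch of the same collision in vector form (Python; the standard impulse-along-the-contact-normal formulation for elastic circles, useful as a cross-check against the angle-based algebra above):

import math

def resolve_elastic(p1, v1, m1, p2, v2, m2):
    # Post-collision velocities of two colliding circles: the impulse
    # acts along the line between centers (the contact normal).
    nx, ny = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(nx, ny)
    nx, ny = nx / dist, ny / dist          # unit contact normal, 1 -> 2
    # Relative velocity along the normal; no impulse if separating.
    rel = (v1[0] - v2[0]) * nx + (v1[1] - v2[1]) * ny
    if rel <= 0:
        return v1, v2
    # Elastic impulse magnitude along the normal (restitution = 1).
    j = 2 * rel / (1 / m1 + 1 / m2)
    v1p = (v1[0] - j * nx / m1, v1[1] - j * ny / m1)
    v2p = (v2[0] + j * nx / m2, v2[1] + j * ny / m2)
    return v1p, v2p

Because it works with the relative velocity directly in world coordinates, the frame-shifting step above is already folded in.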