I am trying to solve a problem using the simplex method. Although this is a mathematical problem, I need to solve it in a programming language. I am stuck at the very first phase: how to deal with the modulus terms while setting up the matrix equation Ax = B used in the general simplex method.
Route   Departure   Runtime   Arrival        Wait time
A-B     x           4         MOD(x+4, 24)   MOD(y - MOD(x+4, 24), 24)
B-C     y           6         MOD(y+6, 24)   MOD(z - MOD(y+6, 24), 24)
C-D     z           8         MOD(z+8, 24)   MOD(8 - MOD(z+8, 24), 24)
The objective is to minimize the total wait time
subject to constraints
0 <= x, y, z <= 24
Simplex is not specifically required, any method may be used.
edit -
This is part of a much bigger problem, so just assuming z = 0 and starting from there won't help; I need to solve the entire thing. I want to know how to deal with the modulus.
The expression
y = mod(x,24)
is not linear, so we cannot use it in a continuous LP (Linear Programming) model. However, it can be modeled in a Mixed Integer Program as
x = k*24 + y
k : integer variable
0 <= y <= 23.999
You'll need a MIP solver for this.
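As a quick sanity check of this linearization (pure Python, no solver; the helper name is just for illustration), enumerating the integer k shows that the constraints pick out exactly y = mod(x, 24):

```python
# The bounds 0 <= y < 24 make exactly one integer k feasible in
# x = 24*k + y, which is what a MIP solver's branching would find.
def mod_via_linearization(x, period=24):
    for k in range(int(x // period) + 2):
        y = x - period * k
        if 0 <= y < period:
            return y
    raise ValueError("no feasible k")

print(mod_via_linearization(50))    # 2, same as 50 % 24
print(mod_via_linearization(30.5))  # 6.5, works for fractional x too
```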
While reading this paper http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1976ApJ...209..214B&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf
I tried to solve eq. (49) numerically. It looks like a Fokker-Planck equation, but I find the finite difference method doesn't work; it's unstable.
Does anyone know how to solve it?
Computational Science Stack Exchange is where you can ask and hope for an answer. Or you could try its physics cousin. The equation you quote is an integro-differential equation, fairly non-linear... a Fokker-Planck-looking equation. Definitely not the typical Fokker-Planck.
What you can try is to discretize the spatial part of the function g(x,t) using finite differences or finite elements. After all, 0 < x < x_max and you have boundary conditions. You also have to discretize the corresponding integration, so maybe finite elements are more appropriate. Finite elements means you can write g(x,t) as a series over a well-chosen basis of compactly supported, simple enough functions Bj(x), j = 1...N, on the interval [0, x_max]:
g(x,t) = sum_j=1:N gj(t)*Bj(x)
That will turn your function into a (large) vector gj(t) = g(x_j, t), for j = 1, 2, ..., N. As a result, you will obtain a non-linear system of ODEs
dgj(t)/dt = Qj(g1(t), g2(t), ..., gN(t))
j = 1 ... N
After that, use something like Runge-Kutta to integrate the ODE system numerically.
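A minimal method-of-lines sketch of this recipe, using plain diffusion as a stand-in right-hand side (the actual eq. (49) would replace rhs() with the discretized integro-differential operator; the grid size and time step are illustrative choices):

```python
import math

N = 50                    # number of interior grid points
x_max = 1.0
dx = x_max / (N + 1)

def rhs(g):
    # finite-difference approximation of d^2 g / dx^2, with g = 0 at both ends
    out = [0.0] * N
    for j in range(N):
        left = g[j - 1] if j > 0 else 0.0
        right = g[j + 1] if j < N - 1 else 0.0
        out[j] = (left - 2.0 * g[j] + right) / dx**2
    return out

def rk4_step(g, dt):
    # classic 4th-order Runge-Kutta step for the ODE system dg/dt = rhs(g)
    k1 = rhs(g)
    k2 = rhs([gj + 0.5 * dt * kj for gj, kj in zip(g, k1)])
    k3 = rhs([gj + 0.5 * dt * kj for gj, kj in zip(g, k2)])
    k4 = rhs([gj + dt * kj for gj, kj in zip(g, k3)])
    return [gj + dt / 6.0 * (a + 2*b + 2*c + d)
            for gj, a, b, c, d in zip(g, k1, k2, k3, k4)]

# initial condition: the lowest diffusion mode on [0, x_max]
g = [math.sin(math.pi * (j + 1) * dx / x_max) for j in range(N)]

dt = 0.2 * dx**2          # explicit schemes need dt ~ dx^2 here for stability
for _ in range(200):
    g = rk4_step(g, dt)

print(max(g))             # the mode decays smoothly, no blow-up
```

Note the dt ~ dx^2 restriction: an explicit scheme with a larger time step is the usual source of the instability you observed, and a stiff/implicit integrator relaxes it.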
I have a problem which often occurs in energy system modelling (modelling of storage), where charging of a storage system is not allowed while it is discharging, and vice versa.
Let x and y be continuous positive decision variables that represent the charging and discharging power of the storage unit. Now I am trying to express the following logic in equations or inequalities:
if x > 0:
    y = 0
if y > 0:
    x = 0
I know this can be realized with binary variables via the following equations if b is a binary variable and x_ub and y_ub represent the respective upper bound of the variables:
x <= b * x_ub
y <= (1-b) * y_ub
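A tiny pure-Python check of these two constraints (the bounds of 10.0 are arbitrary illustrative values): whichever value b takes, at most one of x and y can be strictly positive.

```python
# b = 1 allows charging (x) and forces y = 0; b = 0 does the opposite.
def feasible(x, y, b, x_ub=10.0, y_ub=10.0):
    return 0 <= x <= b * x_ub and 0 <= y <= (1 - b) * y_ub

print(feasible(5.0, 0.0, b=1))                      # True: charging only
print(feasible(0.0, 5.0, b=0))                      # True: discharging only
print(any(feasible(5.0, 5.0, b) for b in (0, 1)))   # False: never both
```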
This results in a mixed-integer linear program (MILP) and of course brings all the properties of a MILP with it: higher runtimes, different interpretations of the solution, and so on.
Another way to get a pure LP formulation is to attach (very small) costs to x and y so that only one of the two variables is set to a positive value. But this also affects my objective and has other disadvantages, as it is somewhat of a hotfix.
So here's my question: Is there any way to express the logic above (if/else) in a purely linear program (LP)?
Thanks in advance!
I have a set of the first 25 Zernike polynomials. A few of them are shown below in the Cartesian co-ordinate system.
z2 = 2*x
z3 = 2*y
z4 = sqrt(3)*(2*x^2+2*y^2-1)
:
:
z24 = sqrt(14)*(15*(x^2+y^2)^2-20*(x^2+y^2)+6)*(x^2-y^2)
I am not using the 1st since it is piston; so I have these 24 two-dimensional ANALYTICAL functions expressed in the X-Y Cartesian co-ordinate system. All are defined over the unit circle, as they are orthogonal over the unit circle. The problem I am describing here is relevant to other 2D surfaces as well, apart from Zernike polynomials.
Suppose that the origin (0,0) of the XY co-ordinate system and the centre of the unit circle are the same.
Next, I take a linear combination of these 24 polynomials to build a 2D wavefront shape, using 24 random input coefficients:
w(x,y) = sum_over_i a_i*z_i (i = 2, 3, ..., 25)
a_i = random coefficients
z_i = zernike polynomials
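For concreteness, the combination can be sketched like this, using only the three terms listed above (a full wavefront would sum all 24 polynomials; the seed and coefficient range are arbitrary):

```python
import math
import random

random.seed(0)  # arbitrary seed, just to make the sketch reproducible

def z2(x, y): return 2.0 * x
def z3(x, y): return 2.0 * y
def z4(x, y): return math.sqrt(3.0) * (2.0 * x**2 + 2.0 * y**2 - 1.0)

terms = [z2, z3, z4]
coeffs = [random.uniform(-1.0, 1.0) for _ in terms]   # the random a_i

def w(x, y):
    if x * x + y * y > 1.0:      # the polynomials live on the unit circle
        return 0.0
    return sum(a * z(x, y) for a, z in zip(coeffs, terms))

print(w(0.3, -0.2))              # wavefront height at one sample point
```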
Up to this point, everything is analytical and can be done on paper.
Now comes the discretization!
I know that when you want to reconstruct a signal (1-dim/2-dim), your sampling frequency should be at least twice the maximum frequency present in the signal (the Nyquist-Shannon principle).
Here the signal is w(x,y) as mentioned above, which is nothing but a simple 2-dim function of x & y. I want to represent it on a computer now. Obviously I cannot take all the infinitely many points from -1 to +1 along the x axis, and the same for the y axis.
I have to take a finite number of data points (which are called sample points, or just samples) on this analytical 2-dim surface w(x,y).
I am measuring x & y in metres, and -1 <= x <= +1; -1 <= y <= +1.
e.g. if I divide my x axis from -1 to 1 into 50 sample points, then dx = 2/50 = 0.04 metre, and my sampling frequency is 1/dx, i.e. 25 samples per metre. Same for the y axis.
But I took 50 samples arbitrarily; I could have taken 10 samples or 1000 samples. That is the crux of the matter: how many sample points? How will I determine this number?
There is one theorem (the Nyquist-Shannon theorem) mentioned above which says that if I want to reconstruct w(x,y) faithfully, I must sample it along both axes so that my sampling frequency (i.e. the number of samples per metre) is at least twice the maximum frequency present in w(x,y). This amounts to finding the power spectrum of w(x,y): any function in the space domain can also be represented in the spatial-frequency domain, which is nothing but taking the Fourier transform of the function. This tells us which (spatial) frequencies are present in w(x,y) and what the maximum frequency among them is.
Now my question is how to find this maximum frequency in my case. I cannot use MATLAB fft2() or any other such tool, since that would mean I have already taken samples across the wavefront!! Obviously the remaining option is to find it analytically, but that is time consuming and difficult since I have 24 polynomials and I would have to use the continuous Fourier transform, i.e. go for pen and paper.
Any help will be appreciated.
Thanks
Key Assumptions
You want to use the Nyquist-Shannon theorem to determine the sampling frequency
Obviously the remaining option is to find it analytically! But that is time consuming and difficult since I have 24 polynomials & I will have to use the continuous Fourier transform, i.e. do it analytically.
Given the assumption I have made (and noting that consideration of other mathematical techniques is out of scope for StackOverflow), you have no option but to calculate the continuous Fourier Transform.
However, I believe you haven't considered all the options for calculating the transform short of a laborious paper exercise, e.g.:
Numerical approximation of the continuous F.T. using code
Symbolic integration, e.g. Wolfram Alpha
Surely a numerical approximation of the Fourier Transform will be adequate for your solution?
I am assuming this is for coursework or research, so all you really care about as a physicist is the quickest solution that is accurate within the scope of your problem.
So to conclude, IMHO, don't waste time searching for a more mathematically elegant solution or trick; just solve the problem with one of the above methods.
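For example, option 1 can be done in a few lines. The sketch below (pure Python; the cut-off and grid sizes are arbitrary illustrative choices) takes a 1-D slice of a single term, z4(x, 0), approximates its continuous Fourier transform with a Riemann sum, and reports the frequency beyond which the spectrum falls under 5% of its peak; the 2-D case follows the same idea.

```python
import cmath
import math

def z4_slice(x):
    # z4 along y = 0, supported on the interval [-1, 1]
    return math.sqrt(3.0) * (2.0 * x * x - 1.0) if abs(x) <= 1.0 else 0.0

def cft_magnitude(freq, n=2000):
    # Riemann-sum approximation of |integral f(x) exp(-2*pi*i*freq*x) dx|
    dx = 2.0 / n
    total = 0j
    for j in range(n):
        x = -1.0 + (j + 0.5) * dx
        total += z4_slice(x) * cmath.exp(-2j * math.pi * freq * x) * dx
    return abs(total)

freqs = [0.25 * k for k in range(80)]        # 0 .. ~20 cycles/metre
spectrum = [cft_magnitude(f) for f in freqs]
peak = max(spectrum)
f_max = max(f for f, s in zip(freqs, spectrum) if s > 0.05 * peak)
print(f_max)  # effective bandwidth; sample at >= 2*f_max samples per metre
```

A polynomial truncated to the unit circle is not strictly band-limited, so some cut-off like the 5% threshold is unavoidable; tighten it according to the reconstruction accuracy you need.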
for i = 1 ... n do
    j = 1
    while j*j <= i do
        j = j + 1
I need to find the asymptotic running time in Theta(?) notation.
I found that the total work looks like
3(1) + 5(2) + 7(3) + 9(4) + ...
and I tried to find the answer using summation by parts, but I couldn't. Can anyone explain or give me a clue?
The overall complexity of the code snippet can be rewritten as:
for i = 1 to n
    do for j = 1 to floor(sqrt(i))
Hence, we get the overall complexity as the sum of sqrt(i) as i varies from 1 to n.
Unfortunately, there is no elementary closed form for the sum of square roots, so we fall back on integration.
The integral of sqrt(i) from 1 to n is (2/3) n sqrt(n), which is n sqrt(n) ignoring constant factors.
Hence, the overall time complexity of the loop is Theta(n sqrt(n)).
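A quick empirical check of that bound (the sizes are arbitrary): counting the inner-loop iterations and doubling n should scale the count by roughly 2*sqrt(2) ≈ 2.83, consistent with Theta(n sqrt(n)).

```python
def count_ops(n):
    # count how often the inner while-loop body runs for a given n
    ops = 0
    for i in range(1, n + 1):
        j = 1
        while j * j <= i:
            j += 1
            ops += 1
    return ops

a, b = count_ops(10_000), count_ops(20_000)
print(b / a)   # close to 2*sqrt(2) ~ 2.83
```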
Using Sigma notation, you may proceed methodically:
To obtain a Theta bound, you would need a closed formula for the summation of the floored square roots of i (which is not obvious).
To be safe, I chose Big Oh.
I was reading this page on operation performance in .NET and saw there's a really huge difference between the division operation and the rest.
So the modulo operator is slow, but how slow compared to a conditional block used for the same purpose?
Let's assume we have a positive number y that is guaranteed to be < 20. Which one is more efficient as a general rule (not only in .NET)?
This:
x = y % 10
or this:
x = y
if (x >= 10)
{
x -= 10
}
How many times are you calling the modulo operation? If it's in some tight inner loop that's getting called many times a second, maybe you should look at other ways of preventing array overflow. If it's being called < (say) 10,000 times, I wouldn't worry about it.
As for performance between your snippets - test them (with some real-world data if possible). You don't know what the compiler/JITer and CPU are doing under the hood. The % could be getting optimized to an & if the 2nd argument is constant and a power of 2. At the CPU level you're talking about the difference between division and branch prediction, which is going to depend on the rest of your code.
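To make "test them" concrete, here is a rough micro-benchmark sketch (in Python rather than .NET, since the measure-don't-guess advice is language-independent; the absolute numbers are meaningless across runtimes, only the comparison on your own machine matters):

```python
import timeit

def with_mod(y):
    return y % 10

def with_branch(y):
    x = y
    if x >= 10:
        x -= 10
    return x

# correctness first: the two must agree on the allowed range 0 <= y < 20
assert all(with_mod(y) == with_branch(y) for y in range(20))

t_mod = timeit.timeit("f(17)", globals={"f": with_mod}, number=200_000)
t_branch = timeit.timeit("f(17)", globals={"f": with_branch}, number=200_000)
print(t_mod, t_branch)   # compare, don't trust either number in isolation
```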