I am new to gnuplot. I am trying to plot 3D vector fields, but I am having trouble defining a function of three variables, f(x,y,z). Can anyone show me how to do this correctly?
Defining your own functions in gnuplot is pretty intuitive. According to the gnuplot documentation, the syntax is as follows:
<func-name>( <dummy1> {,<dummy2>} ... {,<dummy5>} ) = <expression>
Examples:
w = 2
q = floor(tan(pi/2 - 0.1))
f(x) = sin(w*x)
sinc(x) = sin(pi*x)/(pi*x)
delta(t) = (t == 0)
ramp(t) = (t > 0) ? t : 0
min(a,b) = (a < b) ? a : b
comb(n,k) = n!/(k!*(n-k)!)
len3d(x,y,z) = sqrt(x*x+y*y+z*z)
plot f(x) = sin(x*a), a = 0.2, f(x), a = 0.4, f(x)
There is also a large set of built-in mathematical functions which you can use (in the definition of your own function).
For piecewise-defined functions you can use the fact that undefined values are simply not plotted. For example, the function
y(x) = x < 0 ? 1/0 : x
is only defined for non-negative arguments.
Powers are written with **; hence f(x)=x*x is identical to f(x)=x**2.
If you still have problems defining your own function, please feel free to ask. (Shouldn't a 3d function only depend on x and y, i.e., f(x,y)=...?)
For examples of 3d plots, also see the gnuplot demo site.
I would like to vectorize a function that contains a condition, i.e., evaluate it with array arithmetic. np.vectorize handles the vectorization, but internally it is essentially a loop rather than array arithmetic, so it is not a complete solution.
An answer to the question "How to vectorize a function which contains an if statement?" was suggested as the solution, but it did not prevent the errors here; see the MWE below.
import numpy as np

def myfx(x):
    return np.where(x < 1.1, 1, np.arcsin(1 / x))

x = np.array([0.0, 0.5, 1.5, 2.0])  # example input; the entries below 1.1 trigger the warnings
y = myfx(x)
This runs but raises the following warnings:
<stdin>:2: RuntimeWarning: divide by zero encountered in true_divide
<stdin>:2: RuntimeWarning: invalid value encountered in arcsin
What is the problem, or is there a better way to do this?
I think this could be done by
Getting the indices ks of x for which x[k] > 1.1 for each k in ks.
Applying np.arcsin(1 / x[ks]) to the slice x[ks], and using 1 for the rest of the elements.
Recombining the arrays.
I am not sure about the efficiency, though (a sketch of this idea is below).
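For reference, a minimal numpy sketch of that index-based plan (the function name and exact cutoff handling are just illustrative) could look like this:

import numpy as np

def myfx_indexed(x):
    out = np.ones_like(x, dtype=float)   # 1 for every element below the cutoff
    ks = np.nonzero(x >= 1.1)[0]         # indices where the arcsin branch applies
    out[ks] = np.arcsin(1.0 / x[ks])     # evaluate only on that slice
    return out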
The statement np.where(x < 1.1, 1, np.arcsin(1 / x)) is equivalent to
mask = x < 1.1
a = 1
b = np.arcsin(1 / x)
np.where(mask, a, b)
Notice that you're calling np.arcsin on all the elements of x, regardless of whether 1 / x <= 1 or not. Your basic plan is correct. You can do the operations in-place on an output array using the where keyword of np.arcsin and np.reciprocal, without having to recombine anything:
def myfx(x):
    mask = (x >= 1.1)
    out = np.ones(x.shape)
    np.reciprocal(x, where=mask, out=out)  # >= 1.1 implies != 0
    return np.arcsin(out, where=mask, out=out)
Using np.ones ensures that the unmasked elements of out are initialized correctly. An equivalent method would be
out = np.empty(x.shape)
out[~mask] = 1
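As a quick check with a made-up input array (values on both sides of the 1.1 cutoff), the masked operations run without the warnings:

import numpy as np

x = np.array([0.0, 0.5, 1.1, 2.0])   # made-up sample values on both sides of the cutoff
mask = (x >= 1.1)
out = np.ones(x.shape)
np.reciprocal(x, where=mask, out=out)
np.arcsin(out, where=mask, out=out)
print(out)                            # unmasked entries stay 1; no RuntimeWarning is raised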
You can always find an arithmetic expression that prevents the "divide by zero".
Example:
def myfx(x):
    return np.where(x < 1.1, 1, np.arcsin(1 / np.maximum(x, 1.1)))
The values computed in the else branch are never used where x < 1.1, so it does no harm that np.arcsin(1/1.1) is evaluated for those elements.
I'd like to write an LP problem in the standard form with MathOptInterface, i.e.:
min c'*x
s.t. A*x == b
     x >= 0
Now, how can one write this problem with MathOptInterface? I'm having many issues, one of them being how to define the variable "model". For example, if I try to run:
x = add_variables(model,3)
I first would need to declare this model variable, but I don't know how one is supposed to do that in MathOptInterface.
IIUC, in your situation model has to be an argument that is specified by the user of your function.
The user can then pass GLPK.Optimizer(), Tulip.Optimizer() or any other optimizer inheriting from MathOptInterface.AbstractOptimizer.
See e.g. Manual#A complete example.
Alternatively you can look at MOI.Utilities.Model but I don't know how to get an optimizer to solve that model.
Here is how to implement an LP solver for the standard simplex form:
function SolveLP(c, A, b, model::MOI.ModelLike)
    # decision variables
    x = MOI.add_variables(model, length(c))
    # objective: min c'x
    MOI.set(model, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
            MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0))
    MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
    # nonnegativity: x >= 0
    for xi in x
        MOI.add_constraint(model, MOI.SingleVariable(xi), MOI.GreaterThan(0.0))
    end
    # equality constraints: A*x == b, one row at a time
    for (i, row) in enumerate(eachrow(A))
        row_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(row, x), 0.0)
        MOI.add_constraint(model, row_function, MOI.EqualTo(b[i]))
    end
    MOI.optimize!(model)
    p = MOI.get(model, MOI.VariablePrimal(), x)
    return p
end
For the model, just choose something like GLPK.Optimizer()
So I am trying to calculate the value of a parameter, beta, by minimizing the chi-square function. To do this, I'm using the scipy.optimize.minimize() function. I can't seem to get the code to do what I want. Is there a way to do this? I am open to other ways of approaching the problem.
For some background, the variables vr, rms and delta are all 1D sequences of the same length, and zeff, H and beta are scalar parameters. I am trying to calculate an optimized beta value.
import numpy as np
import scipy.optimize as opt

def chisq(beta, vr, delta, rvs, rms, zeff, H):
    c = -(H/(1+zeff))*(beta/3)
    model = c*np.multiply(rms, delta)
    q = (vr-model)**2
    p = model**-1
    ratio = np.multiply(p, q)
    chisq = np.sum(ratio)
    return chisq
initial_guess = 0.47663662075855323
res = opt.minimize(chisq,initial_guess,args = (beta,delta,rvs,rms,zeff,H))
I usually get an error saying the dimensions of the function don't match the syntax for the minimize() function.
In your case beta is the optimization variable, so you don't need to pass it as an extra argument to the function chisq:
res = opt.minimize(chisq, x0=initial_guess, args=(vr, delta, rvs, rms, zeff, H))
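The same pattern with a toy objective and made-up data (chisq_toy, x and y are purely illustrative, not the arrays from the question) shows how the optimization variable and the fixed arguments are separated:

import numpy as np
import scipy.optimize as opt

def chisq_toy(beta, x, y):
    # beta is the optimization variable; x and y are fixed data passed via args
    model = beta * x
    return np.sum((y - model)**2)

x = np.linspace(0.0, 1.0, 20)
y = 0.5 * x                      # data generated with a "true" beta of 0.5

res = opt.minimize(chisq_toy, x0=0.1, args=(x, y))
print(res.x)                     # roughly [0.5]; res.x is an array even for a scalar variable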
I am trying to graph two functions, but I want to graph one function when one condition holds and switch to the other function when a different condition is met.
A simple example would be:
if x > 0
then sin(x)
else cos(x)
It would then graph cos or sin depending on the x value, with an obvious jump at x = 0, since cos(0) = 1 and sin(0) = 0.
EDIT: There is a built-in way. I'll leave my original answer below for posterity, but try using the piecewise() function:
plot(piecewise(((cos(x),x<0), (sin(x), 0<x))))
See it here.
I would guess that there's a built-in way to do this, but I don't know it. You can multiply your functions by the Heaviside step function to accomplish this task. The step function is 1 if x > 0 and 0 if x < 0, so multiplying it into your functions and then summing the terms selects only one of them based on the sign of x, that is to say:
f(x) := heaviside(x) * sin(x) + heaviside(-x) * cos(x)
If x > 0, heaviside(x) = 1 and heaviside(-x) = 0, so f(x) = sin(x).
If x < 0, heaviside(x) = 0 and heaviside(-x) = 1, so f(x) = cos(x).
See it in action here. In general, note that if you want the transition to be at x = a, you can use heaviside(x-a) and heaviside(-x+a), respectively. For N pieces with breakpoints a_1 < ... < a_(N-1), each interior term needs a pair of step factors, e.g. heaviside(x-a_i)*heaviside(a_(i+1)-x), to select its interval. I hope someone else can contribute a cleaner solution.
I am trying to follow the tutorial on using the Optimization Toolbox in MATLAB. Specifically, I have the function
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)+b
subject to the constraints:
(x(1))^2+x(2)-1=0,
-x(1)*x(2)-10<=0.
and I want to minimize this function for a range of b=[0,20]. (That is, I want to minimize this function for b=0, b=1,b=2 ... and so on).
Below are the steps taken from the MATLAB tutorial webpage (http://www.mathworks.com/help/optim/ug/nonlinear-equality-and-inequality-constraints.html). How should I change the code so that the optimization runs once for each b and saves the optimal values?
Step 1: Write a file objfun.m.
function f = objfun(x)
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)+b;
Step 2: Write a file confuneq.m for the nonlinear constraints.
function [c, ceq] = confuneq(x)
% Nonlinear inequality constraints
c = -x(1)*x(2) - 10;
% Nonlinear equality constraints
ceq = x(1)^2 + x(2) - 1;
Step 3: Invoke constrained optimization routine.
x0 = [-1,1]; % Make a starting guess at the solution
options = optimoptions(@fmincon,'Algorithm','sqp');
[x,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],...
    @confuneq,options);
After 21 function evaluations, the solution produced is
x, fval
x =
-0.7529 0.4332
fval =
1.5093
Update:
I tried your answer, but I am encountering a problem with your step 2. Basically, I just filled my step 2 into your step 2 (below the comment "optimization just like before").
%initialize list of targets
b = 0:1:20;
%preallocate/initialize result vectors using zeros (increases speed)
opt_x = zeros(length(b));
opt_fval = zeros(length(b));
for idx = 1, length(b)
objfun = @(x)objfun_builder(x,b)
%optimization just like before
x0 = [-1,1]; % Make a starting guess at the solution
options = optimoptions(@fmincon,'Algorithm','sqp');
[x,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],...
    @confuneq,options);
%end the stuff I fill in
opt_x(idx) = x
opt_fval(idx) = fval
end
However, it gave me this error:
Error: "objfun" was previously used as a variable, conflicting
with its use here as the name of a function or command.
See "How MATLAB Recognizes Command Syntax" in the MATLAB
documentation for details.
There are two things you need to change about your code:
Creation of the objective function.
Multiple optimizations using a loop.
1st Step
For more flexibility with regard to b, you need to set up a helper function that takes b as an additional parameter, from which you can then build the desired objective function handle, e.g.
function f = objfun_builder(x, b)
    % objective value for a given x and parameter b
    f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + b;
end
A more elegant and shorter approach is an anonymous function, e.g.
objfun_builder = @(x,b)(exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + b);
After all, this works out to be the same as above. It might be less intuitive for a MATLAB beginner, though.
2nd Step
Instead of placing an .m-file objfun.m in your path, you will need to call
objfun = @(x)(objfun_builder(x,myB));
to create an objective function in your workspace. In order to loop over the interval b=[0,20], use the following loop
%initialize list of targets
b = 0:1:20;
%preallocate/initialize result vectors using zeros (increases speed)
opt_x = zeros(length(b), 2);
opt_fval = zeros(length(b), 1);
%start optimization over the list of targets (`b`s)
for idx = 1:length(b)
    %build the objective function for the current b
    objfun = @(x)objfun_builder(x, b(idx));
    %optimization just like before (note: pass objfun itself, not @objfun)
    x0 = [-1,1];
    options = optimoptions(@fmincon,'Algorithm','sqp');
    [x,fval] = fmincon(objfun,x0,[],[],[],[],[],[],@confuneq,options);
    opt_x(idx,:) = x;
    opt_fval(idx) = fval;
end