Problems plotting the negative part of the solution of a differential equation using Mathematica/Wolfram - physics

I solved a differential equation in Mathematica, of the form y'' + 0.6 y + 0.333 x^4 y - x^2 y + 0.03/x^2 y = 0. The solution is purely numerical. The equation has no solution from x = -1 to x = 1, which is to be expected. When I try to plot the negative part together with the positive part in the same graph, with a gap over the unsolved region, I can't. Unfortunately this type of question is not accepted in the Wolfram newsgroups, which is why I'm asking here. I'd appreciate an answer.

Please provide more information about the boundary conditions so we can help you. In the meantime, here is a start:
Clear["`*"]
s = NDSolve[{y''[x] + 0.6 y[x] + 0.333 x^4 y[x] - x^2 y[x] +
0.03/x^2 y[x] == 0, y[1] == 1, y'[1] == 1}, y, {x, 0, 30}]
Plot[Evaluate[y[x] /. s], {x, 0, 30}, PlotRange -> All]
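To show the negative branch in the same figure, with a gap over the unsolved region (-1, 1), one option is to solve a second time on the negative side and overlay the two plots with Show. This is only a sketch: the conditions at x = -1 below are placeholders, since the question does not state them.
sNeg = NDSolve[{y''[x] + 0.6 y[x] + 0.333 x^4 y[x] - x^2 y[x] +
    0.03/x^2 y[x] == 0, y[-1] == 1, y'[-1] == -1}, y, {x, -30, -1}]
(* y[-1] and y'[-1] are assumed values, not from the original post *)
Show[Plot[Evaluate[y[x] /. s], {x, 1, 30}],
 Plot[Evaluate[y[x] /. sNeg], {x, -30, -1}], PlotRange -> All]
(* Show overlays both branches; nothing is drawn over (-1, 1) *)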

Related

Julia JuMP: getting all feasible solutions to a MIP

I would like to obtain not only the optimal solution vector of a MIP, but all feasible (suboptimal) vectors.
I found some old questions here, but I am not sure how they work.
First of all, is there any new library/tool to do this automatically?
I tried this, but it did nothing:
if termination_status(m) == MOI.FEASIBLE_POINT
    println(x)
end
optimize!(m);
If not, what's the easiest way? I thought of scanning the optimal solution until I find the first non-zero decision variable, then constraining that variable to be zero and solving the model again.
for i in 1:active_variables
    if value.(z[i]) == 1
        @constraint(m, x[i] == 0)
        break
    end
end
optimize!(m);
But I see a problem with this method: if I constrain x[i] to be zero, I may later want to drop this constraint again. This comes down to whether there can exist two (or more) different solutions in which x[i] == 1.
JuMP supports returning multiple solutions.
Documentation: https://jump.dev/JuMP.jl/stable/manual/solutions/#Multiple-solutions
The workflow is something like:
using JuMP
model = Model()
@variable(model, x[1:10] >= 0)
# ... other constraints ...
optimize!(model)
if termination_status(model) != OPTIMAL
    error("The model was not solved correctly.")
end
an_optimal_solution = value.(x; result = 1)
optimal_objective = objective_value(model; result = 1)
for i in 2:result_count(model)
    @assert has_values(model; result = i)
    println("Solution $(i) = ", value.(x; result = i))
    obj = objective_value(model; result = i)
    println("Objective $(i) = ", obj)
    if isapprox(obj, optimal_objective; atol = 1e-8)
        print("Solution $(i) is also optimal!")
    end
end
But you need a solver that supports returning multiple solutions, and to configure the right solver-specific options.
See this blog post: https://jump.dev/tutorials/2021/11/02/tutorial-multi-jdf/
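As an illustration (a hedged sketch, not from the original answer), with Gurobi the solution pool is enabled through solver-specific attributes; PoolSearchMode and PoolSolutions are documented Gurobi parameters:
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
# Search systematically for additional solutions, not just the optimum
set_optimizer_attribute(model, "PoolSearchMode", 2)
# Keep up to 100 solutions in the pool
set_optimizer_attribute(model, "PoolSolutions", 100)
# ... build the model, call optimize!(model), then read result_count(model) ...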
The following is an example of an all-solutions finder for a boolean problem. Such problems are easier to handle, since the solution space is easily enumerated (even though it can still grow exponentially large).
First, let's get the packages and define the sample problem:
using Random, JuMP, HiGHS, MathOptInterface

function example_knapsack()
    profit = [5, 3, 2, 7, 4]
    weight = [2, 8, 4, 2, 5]
    capacity = 10
    minprofit = 10
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x[1:5], Bin)
    @objective(model, FEASIBILITY_SENSE, 0)
    @constraint(model, weight' * x <= capacity)
    @constraint(model, profit' * x >= minprofit)
    return model
end
(it is a knapsack problem from the JuMP docs).
Next, we use recursion to explore the tree of all possible solutions. The tree does not go down branches with no solution (so the running time is not always exponential):
function findallsol(model, x)
    perm = shuffle(1:length(x))
    res = Vector{Float64}[]
    _findallsol!(res, model, x, perm, 0)
    return res
end

function _findallsol!(res, model, x, perm, depth)
    n = length(x)
    depth > n && return
    optimize!(model)
    if termination_status(model) == MathOptInterface.OPTIMAL
        if depth == n
            # All variables are fixed: record the solution
            push!(res, value.(x))
            return
        else
            idx = perm[depth + 1]
            v = value(x[idx])
            # Fix x[idx] to its current value and explore that subtree
            newcon = @constraint(model, x[idx] == v)
            _findallsol!(res, model, x, perm, depth + 1)
            delete(model, newcon)
            # Then fix it to the complementary value and explore the other subtree
            newcon = @constraint(model, x[idx] == 1 - v)
            _findallsol!(res, model, x, perm, depth + 1)
            delete(model, newcon)
        end
    end
    return
end
Now we can:
julia> m = example_knapsack()
A JuMP Model
Maximization problem with:
Variables: 5
...
Names registered in the model: x
julia> res = findallsol(m, m.obj_dict[:x])
5-element Vector{Vector{Float64}}:
[1.0, 0.0, 0.0, 1.0, 1.0]
[0.0, 0.0, 0.0, 1.0, 1.0]
[1.0, 0.0, 1.0, 1.0, 0.0]
[1.0, 0.0, 0.0, 1.0, 0.0]
[0.0, 1.0, 0.0, 1.0, 0.0]
And we get a vector with all the solutions.
If the problem in question is a boolean problem, this method can be used as-is. If it has non-boolean variables, the recursion will have to split the feasible space in some even fashion, for example by choosing a variable and cutting its domain in half, then recursing into each half with a smaller domain for that variable (to ensure termination); see the sketch below.
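As a hedged sketch of that idea (not from the original answer), the following enumerates all feasible values of a single bounded integer variable x by halving its domain; the same pruning trick as above keeps infeasible branches unexplored:
function allvalues!(res, model, x, lo, hi)
    lo > hi && return
    optimize!(model)
    termination_status(model) == OPTIMAL || return  # prune: nothing feasible here
    if lo == hi
        push!(res, lo)  # the accumulated bounds pin x to a single value
        return
    end
    mid = div(lo + hi, 2)
    c = @constraint(model, x <= mid)      # lower half of the domain
    allvalues!(res, model, x, lo, mid)
    delete(model, c)
    c = @constraint(model, x >= mid + 1)  # upper half of the domain
    allvalues!(res, model, x, mid + 1, hi)
    delete(model, c)
end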
P.S. This is not the optimal method. This problem has been well studied. Possible terms to search for are 'model counting' (especially in the boolean domain).
(UPDATE: Changed objective to use FEASIBLE)

Linear programming: how to set a binary decision variable to 1 if a value in an array exceeds a certain threshold

I have an array which holds linear expressions in terms of the decision variables. Let's say the decision variables take values such that the array = [1.7, 0.3, 0]. What I want is the following:
1) If a value in the above array is > 0.5, then the binary decision variable y1 = 1, else 0; so y1 should turn out to be [1, 0, 0].
2) If a value in the above array is > 0.5, then the real-valued decision variable y2 = that value, else 0; hence y2 = [1.7, 0, 0].
3) If a value in the array is > 0 and <= 0.5, then the binary decision variable y3 = 1, else 0; hence y3 = [0, 1, 0].
I know that a big-M formulation can help, but I am struggling to find a way.
Can somebody please help me with the formulation of the above 3 points? I am using Pyomo and Gurobi to program the problem.
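A minimal big-M sketch of the three conditions in Pyomo, assuming each expression value v[i] is nonnegative and bounded above by M, and using a small tolerance eps to stand in for the strict inequalities (values lying within eps of a threshold are excluded by this model); all names are illustrative:
import pyomo.environ as pyo

n = 3
M = 100.0   # assumed upper bound on the expression values
eps = 1e-4  # tolerance modelling the strict inequalities

m = pyo.ConcreteModel()
m.I = pyo.RangeSet(0, n - 1)
m.v = pyo.Var(m.I, bounds=(0, M))       # stands in for the linear expressions
m.y1 = pyo.Var(m.I, within=pyo.Binary)  # 1 iff v[i] > 0.5
m.y2 = pyo.Var(m.I, bounds=(0, M))      # v[i] if v[i] > 0.5, else 0
m.z = pyo.Var(m.I, within=pyo.Binary)   # helper: 1 iff v[i] > 0
m.y3 = pyo.Var(m.I, within=pyo.Binary)  # 1 iff 0 < v[i] <= 0.5

# (1) y1[i] = 1  <=>  v[i] > 0.5
m.c1a = pyo.Constraint(m.I, rule=lambda m, i: m.v[i] <= 0.5 + M * m.y1[i])
m.c1b = pyo.Constraint(m.I, rule=lambda m, i: m.v[i] >= 0.5 + eps - M * (1 - m.y1[i]))

# (2) y2[i] = v[i] * y1[i], linearized with the same big M
m.c2a = pyo.Constraint(m.I, rule=lambda m, i: m.y2[i] <= M * m.y1[i])
m.c2b = pyo.Constraint(m.I, rule=lambda m, i: m.y2[i] <= m.v[i])
m.c2c = pyo.Constraint(m.I, rule=lambda m, i: m.y2[i] >= m.v[i] - M * (1 - m.y1[i]))

# (3) z[i] = 1 iff v[i] > 0, then y3[i] = z[i] - y1[i]
m.c3a = pyo.Constraint(m.I, rule=lambda m, i: m.v[i] <= M * m.z[i])
m.c3b = pyo.Constraint(m.I, rule=lambda m, i: m.v[i] >= eps * m.z[i])
m.c3c = pyo.Constraint(m.I, rule=lambda m, i: m.y3[i] == m.z[i] - m.y1[i])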

Solving a sparse non-linear system of equations using scipy.optimize.root

I want to solve the following non-linear system of equations.
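(The system appears only as images in the original post; reconstructed from the solver code below, it reads:)
K x + sum_{k=1}^{N} alpha_k (A_k x + a_k) = 0
(1/2) x^T A_k x + a_k . x = 0,   for each 1 <= k <= N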
Notes
the dot between a_k and x represents dot product.
the 0 in the first equation is the zero vector, and the 0 in the second equation is the scalar 0
all the matrices are sparse if that matters.
Known
K is an n x n (positive definite) matrix
each A_k is a known (symmetric) matrix
each a_k is a known n x 1 vector
N is known (let's say N = 50). But I need a method where I can easily change N.
Unknown (trying to solve for)
x is an n x 1 vector.
each alpha_k, for 1 <= k <= N, is a scalar
My thinking.
I am thinking of using scipy's root to find x and each alpha_k. We essentially have n equations from the rows of the first equation, and another N equations from the constraint equations, to solve for our n + N variables. Therefore we have the required number of equations for a solution.
I also have a reliable initial guess for x and the alpha_k's.
Toy example.
import numpy as np

n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0.5]])
A_1 = np.matrix([[0.98, 0, 0.46, 0.80], [0, 0, 0.56, 0], [0.93, 0.82, 0, 0.27], [0, 0, 0, 0.23]])
A_2 = np.matrix([[0.23, 0, 0, 0], [0.03, 0.01, 0, 0], [0, 0.32, 0, 0], [0.62, 0, 0, 0.45]])
a_1 = np.matrix(np.random.rand(4, 1))  # scipy.rand is deprecated; use numpy's RNG
a_2 = np.matrix(np.random.rand(4, 1))
We are trying to solve for
x = [x1, x2, x3, x4] and alpha_1, alpha_2
Questions:
I can actually brute-force this toy problem and feed it to the solver. But how do I solve it in such a way that I can easily extend it to the case of, say, n = 50 and N = 50?
Will I have to explicitly compute the Jacobian for larger matrices?
Can anyone give me any pointers?
I think the scipy.optimize.root approach holds water, but steering clear of the trivial solution might be the real challenge for this system of equations.
In any event, this function uses root to solve the system of equations.
import numpy as np
from scipy.optimize import root

def solver(x0, alpha0, K, A, a):
    '''
    x0     - nx1 numpy array. Initial guess for x.
    alpha0 - Nx1 numpy array. Initial guess for alpha.
    K      - nxn numpy array.
    A      - length-N list of nxn numpy arrays.
    a      - length-N list of nx1 numpy arrays.
    '''
    n = K.shape[0]
    N = len(A)
    # Function producing the left-hand side of the system of equations.
    def lhs(x_alpha):
        '''x_alpha is the concatenation of x and alpha.'''
        x = np.ravel(x_alpha[:n])
        alpha = np.ravel(x_alpha[n:])
        lhs_top = np.ravel(K.dot(x))
        for k in range(N):
            lhs_top += alpha[k] * (np.ravel(np.dot(A[k], x)) + np.ravel(a[k]))
        lhs_bottom = [0.5 * x.dot(np.ravel(A[k].dot(x))) + np.ravel(a[k]).dot(x)
                      for k in range(N)]
        return np.array(lhs_top.tolist() + lhs_bottom)
    # Solve the system of equations.
    x0.shape = (n, 1)
    alpha0.shape = (N, 1)
    x_alpha_0 = np.vstack((x0, alpha0))
    sol = root(lhs, x_alpha_0)
    x_alpha_root = sol['x']
    # Compute the norm of the residual.
    res = sol['fun']
    res_norm = np.linalg.norm(res)
    # Split out the x and alpha components.
    x_root = x_alpha_root[:n]
    alpha_root = x_alpha_root[n:]
    return x_root, alpha_root, res_norm
Running on the toy example, however, only produces the trivial solution.
# Toy example.
n = 4
N = 2
K = np.matrix([[0.5, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0.5]])
A_1 = np.matrix([[0.98, 0, 0.46, 0.80], [0, 0, 0.56, 0], [0.93, 0.82, 0, 0.27],
                 [0, 0, 0, 0.23]])
A_2 = np.matrix([[0.23, 0, 0, 0], [0.03, 0.01, 0, 0], [0, 0.32, 0, 0],
                 [0.62, 0, 0, 0.45]])
a_1 = np.matrix(np.random.rand(4, 1))
a_2 = np.matrix(np.random.rand(4, 1))
A = [A_1, A_2]
a = [a_1, a_2]
x0 = np.random.rand(n, 1)
alpha0 = np.random.rand(N, 1)
print('x0 =', x0)
print('alpha0 =', alpha0)
x_root, alpha_root, res_norm = solver(x0, alpha0, K, A, a)
print('x_root =', x_root)
print('alpha_root =', alpha_root)
print('res_norm =', res_norm)
Output is
x0 = [[ 0.00764503]
[ 0.08058471]
[ 0.88300129]
[ 0.85299622]]
alpha0 = [[ 0.67872815]
[ 0.69693346]]
x_root = [ 9.88131292e-324 -4.94065646e-324 0.00000000e+000
0.00000000e+000]
alpha_root = [ -4.94065646e-324 0.00000000e+000]
res_norm = 0.0
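Regarding the Jacobian question: as a hedged suggestion (not part of the original answer), scipy's matrix-free Newton-Krylov method avoids forming the dense Jacobian explicitly, which is often the practical choice for large sparse systems; only the solver call changes:
from scipy.optimize import root
# Same residual function lhs and initial guess x_alpha_0 as in solver() above.
# 'krylov' approximates Jacobian-vector products by finite differences,
# so no explicit Jacobian is needed.
sol = root(lhs, x_alpha_0, method='krylov')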

plotting a 2d function as surface in 3d space with `Plots.jl`

I have the following problem while plotting with Plots.jl. I'd like to plot the Rosenbrock function
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
as a surface; the function expects a 2d Tuple{Float64,Float64} as input.
What I could come up with, is the following:
using Plots
gr()
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
ts = range(-1.0, 1.0, length = 100)  # linspace(-1.0, 1.0, 100) in older Julia
x = ts
y = map(rosenbrock, [(x, z) for (x,z) in zip(ts,ts)])
z = map(rosenbrock, [(x, y) for (x,y) in zip(ts,ts)])
# plot(x, x, z)
plot(x, y, z, st = [:surface, :contourf])
which yields this plot:
I think I messed up some dimensions, but I don't see what I got wrong.
Do I have to nest the calculation of the mappings for y and x to get the result?
After a quick investigation of the Rosenbrock function (and correct me if I'm wrong), I found that you need to specify the y-vector explicitly; you aren't supposed to nest it within z or anything like that.
Someone else tried this same thing, as shown here, but using Plots.
The solution is as follows, as done by Patrick Kofod Mogensen:
using Plots
function rosenbrock(x::Vector)
return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end
default(size=(600,600), fc=:heat)
x, y = -1.5:0.1:1.5, -1.5:0.1:1.5
z = Surface((x,y)->rosenbrock([x,y]), x, y)
surface(x,y,z, linealpha = 0.3)
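Equivalently (a hedged alternative, not part of the original answer), the z matrix can be built with a comprehension over the full grid; zipping the two ranges, as in the question, only samples the function along the diagonal:
# Rows of z vary with y and columns with x, matching Plots' surface convention
z = [rosenbrock([xi, yi]) for yi in y, xi in x]
surface(x, y, z, linealpha = 0.3)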
This results in
Side note
I'm glad I searched for this, as I've been looking for a 3D plotter for Julia other than PyPlot (which can be a bit of a hassle to set up for the users of my program). This one even looks better, and the images can be rotated.

How to use "SCNVector4Make()"?

I have a short question. I don't know which values I have to put into this function, and I can't find any valuable examples on the internet.
This is my function (I have already set up the node and everything else):
node.rotation = SCNVector4Make(x, y, z, w);
What are the values for x, y, z, and w when I want to turn my object by an angle of 45 degrees?
The first value is for "x":
SCNVector4Make(1, 0, 0, 0)
The second is for "y":
SCNVector4Make(0, 1, 0, 0)
The third is for "z":
SCNVector4Make(0, 0, 1, 0)
The fourth, "w", is the rotation in radians. To rotate your object by 45 degrees about the "x" axis, it looks like this:
SCNVector4Make(1, 0, 0, M_PI / 4)
M_PI radians is equal to 180 degrees.
From the SCNNode reference:
The four-component rotation vector specifies the direction of the rotation axis in the first three components and the angle of rotation (in radians) in the fourth.
In Swift 4.2 you can use the following values for a 45-degree rotation in SCNVector4Make(x, y, z, w):
X-axis:
node.rotation = SCNVector4Make(1, 0, 0, .pi/4)
Y-axis:
node.rotation = SCNVector4Make(0, 1, 0, .pi/4)
Z-axis:
node.rotation = SCNVector4Make(0, 0, 1, .pi/4)
Remember, the w parameter must be in radians, so 3.14159 / 4 = 0.78539 radians (or 180 / 4 = 45 degrees).
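As a small hedged helper (not from the original answers), you can convert degrees to radians before building the rotation vector:
import SceneKit

// Illustrative helper, not part of the SceneKit API
func degreesToRadians(_ degrees: Float) -> Float {
    return degrees * .pi / 180
}

let node = SCNNode()
node.rotation = SCNVector4Make(0, 1, 0, degreesToRadians(45))  // 45 degrees about the y-axis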