Julia JuMP - `max` in the objective function gives "ERROR: No method matching isless"

There is an error in my code; can anyone help me, please?
My code:
function funP(u, τ::Float64)
    w = (τ*max(u, 0)) + ((1-τ)*max(-u, 0))
    return w
end
τ = 0.2
modelquant = Model(with_optimizer(OSQP.Optimizer))
@variable(modelquant, β[i=0:1])
@variable(modelquant, erro[1:T])
@constraint(modelquant, [i=1:T], erro[i] >= contratos[i] - β[0] - β[1]*spot[i])
@constraint(modelquant, [i=1:T], erro[i] >= -contratos[i] + β[0] + β[1]*spot[i])
@objective(modelquant, Min, sum(funP(erro[i], τ) for i=1:T))
optimize!(modelquant)
objective_value(modelquant)
𝐁 = JuMP.value.(β)
The error is:
julia> @objective(modelquant, Min, sum(funP(erro[i], τ) for i=1:T))
ERROR: MethodError: no method matching isless(::Int64, ::VariableRef)
Closest candidates are:
isless(::Missing, ::Any) at missing.jl:87
isless(::Real, ::AbstractFloat) at operators.jl:166
isless(::Integer, ::ForwardDiff.Dual{Ty,V,N} where N where V) where Ty at C:\Users\Fernanda\.julia\packages\ForwardDiff\kU1ce\src\dual.jl:140
...
Stacktrace:
[1] max(::VariableRef, ::Int64) at .\operators.jl:417
[2] funP(::VariableRef, ::Float64) at .\REPL[8]:2
[3] macro expansion at C:\Users\Fernanda\.julia\packages\MutableArithmetics\bPWR4\src\rewrite.jl:276 [inlined]
[4] macro expansion at C:\Users\Fernanda\.julia\packages\JuMP\qhoVb\src\macros.jl:830 [inlined]
[5] top-level scope at .\REPL[20]:1
Thank you so much!

You need to re-engineer your model to replace the max function with binary variables.
In your case the code will look like this (check for typos):
@variable(modelquant, erro_neg[1:T], Bin)
@variable(modelquant, erro_pos[1:T], Bin)
@constraint(modelquant, [i=1:T], erro_neg[i] + erro_pos[i] == 1)
@constraint(modelquant, [i=1:T], erro[i]*erro_pos[i] >= 0)
@constraint(modelquant, [i=1:T], erro[i]*erro_neg[i] <= 0)
@objective(modelquant, Min, sum(τ*erro_neg[i]*erro[i] + (1-τ)*erro[i]*erro_pos[i] for i=1:T))
Please note that in my version you could actually safely remove the Bin condition from erro_neg and erro_pos and the model would still work (you need to test empirically which form your solver prefers).
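For reference, a common way to avoid the products of variables above is the standard linear-programming formulation of this pinball loss, which splits each error into nonnegative parts. A minimal sketch, assuming contratos, spot, and T are defined as in the question:
using JuMP, OSQP
τ = 0.2
modelquant = Model(with_optimizer(OSQP.Optimizer))
@variable(modelquant, β[i=0:1])
@variable(modelquant, u_pos[1:T] >= 0)  # plays the role of max(u, 0)
@variable(modelquant, u_neg[1:T] >= 0)  # plays the role of max(-u, 0)
@constraint(modelquant, [i=1:T], u_pos[i] - u_neg[i] == contratos[i] - β[0] - β[1]*spot[i])
@objective(modelquant, Min, sum(τ*u_pos[i] + (1-τ)*u_neg[i] for i=1:T))
optimize!(modelquant)
Because the objective is minimized, at most one of u_pos[i] and u_neg[i] is nonzero at the optimum for each i, so the objective value equals the sum of funP over the residuals.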

Related

Optimization of piecewise functions in Julia

Extremely new to Julia, so please pardon any obvious oversights on my end.
I am trying to estimate a piecewise likelihood function through optimization. I have the code working in R, but have begun translating it to Julia in the hopes of faster estimation, for eventual bootstrapping.
Here is the current block of code that I am trying (v and x are 1000x1 vectors defined elsewhere):
function est(a, b)
    function pwll(v, x)
        if v > 4
            ILL = pdf(Poisson(exp(a + b*x)), v)
        elseif v == 4
            ILL = pdf(Poisson(exp(a + b*x)), 4) + pdf(Poisson(exp(a + b*x)), 3) + pdf(Poisson(exp(a + b*x)), 2)
        else v == 0
            ILL = pdf(Poisson(exp(a + b*x)), 1) + pdf(Poisson(exp(a + b*x)), 0)
        end
        return ILL
    end
    ILL = pwll.(v, x)
    function fixILL(x)
        if x == 0
            x = 0.00000000000000001
        else
            x = x
        end
    end
    ILL = fixILL.(ILL)
    LILL = log10.(ILL)
    LL = -1 * LILL
    return sum(LL)
end
using Optim
params0=[1,1]
optimize(est, params0)
And the error message(s) I am getting are:
ERROR: InexactError: Int64(NaN)
Stacktrace:
[1] Int64(x::Float64)
@ Base ./float.jl:788
[2] x_of_nans(x::Vector{Int64}, Tf::Type{Int64}) (repeats 2 times)
@ NLSolversBase ~/.julia/packages/NLSolversBase/kavn7/src/NLSolversBase.jl:60
[3] NonDifferentiable(f::Function, x::Vector{Int64}, F::Int64; inplace::Bool)
@ NLSolversBase ~/.julia/packages/NLSolversBase/kavn7/src/objective_types/nondifferentiable.jl:11
[4] NonDifferentiable(f::Function, x::Vector{Int64}, F::Int64)
@ NLSolversBase ~/.julia/packages/NLSolversBase/kavn7/src/objective_types/nondifferentiable.jl:10
[5] promote_objtype(method::NelderMead{Optim.AffineSimplexer, Optim.AdaptiveParameters}, x::Vector{Int64}, autodiff::Symbol, inplace::Bool, args::Function)
@ Optim ~/.julia/packages/Optim/tP8PJ/src/multivariate/optimize/interface.jl:63
[6] optimize(f::Function, initial_x::Vector{Int64}; inplace::Bool, autodiff::Symbol, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Optim ~/.julia/packages/Optim/tP8PJ/src/multivariate/optimize/interface.jl:86
[7] optimize(f::Function, initial_x::Vector{Int64})
@ Optim ~/.julia/packages/Optim/tP8PJ/src/multivariate/optimize/interface.jl:83
[8] top-level scope
@ ~/Documents/Projects/ki_new/peicewise_ll.jl:120
I understand that the error seems to be coming from the function to be optimized being non-differentiable. A fairly direct translation works well in R, using the built-in optim() function.
Can anyone provide any insight?
I have tried the code displayed above, with multiple variations. The function to be optimized works on its own; I am struggling with the optimization (the issues may stem from the function being inefficiently written).
Here's an adapted version of your code which produces a solution:
using Distributions, Optim

function pwll(v, x, a, b)
    d = Poisson(exp(a + b*x))
    if v > 4
        return pdf(d, v)
    elseif v == 4
        return pdf(d, 4) + pdf(d, 3) + pdf(d, 2)
    else
        return pdf(d, 1) + pdf(d, 0)
    end
end

fixILL(x) = iszero(x) ? 1e-17 : x
est(a, b, v, x) = sum(-1 .* log10.(fixILL.(pwll.(v, x, a, b))))
v = 4; x = 0.5 # Defining these here as they are not given in your post
obj(input; v = v, x = x) = est(input[1], input[2], v, x)
optimize(obj, [1.0, 1.0])
I have no idea whether this is correct, of course; check it against some sort of known result if you can. Note also that the InexactError in your trace comes from the integer starting point params0 = [1, 1]: Optim builds internal arrays with the same element type as the initial point and cannot store NaN in an Int64 vector, which is why the call above starts from [1.0, 1.0].

Domain error when using Nelder Mead algorithm in Julia

I am struggling with optimization in Julia.
I used to use Matlab, but I am trying to work in Julia instead.
The following is the code I wrote.
using Optim
V = fill(1.0, (18,14,5))
agrid = range(-2, stop=20, length=18)
dgrid = range(0.01, stop=24, length=14)
#zgrid = [0.5; 0.75; 1.0; 1.25; 1.5]
zgrid = [0.7739832502827438; 0.8797631785217791; 1.0; 1.1366695315439874; 1.2920176239404275]
# function
function adj_utility(V, s_a, s_d, s_z, i_z, c_a, c_d)
    consumption = s_z + 1.0125*s_a + (1-0.018)*s_d - c_a - c_d - 0.05*(1-0.018)*s_d
    if consumption >= 0
        return (1/(1-2)) * (((consumption^0.88) * (c_d^(1-0.88)))^(1-2))
    end
    if consumption < 0
        return -99999999
    end
end
# Optimization
i_a = 1
i_d = 3
i_z = 1
utility_adj(x) = -adj_utility(V,agrid[i_a],dgrid[i_d],zgrid[i_z],i_z,x[1],x[2])
result1 = optimize(utility_adj, [1.0, 1.0], NelderMead())
If I use zgrid = [0.5; 0.75; 1.0; 1.25; 1.5], then the code works.
However, if I use zgrid = [0.7739832502827438; 0.8797631785217791; 1.0; 1.1366695315439874; 1.2920176239404275], I get the error message "DomainError with -0.3781249999999996".
In the function, if consumption is less than 0 then the value should be -99999999, so I am not sure why I am getting this message.
Any help would be appreciated.
Thank you.
Raising negative numbers to non-integer powers returns complex numbers, which is where your error is coming from.
julia> (-0.37)^(1-0.88)
ERROR: DomainError with -0.37:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Stacktrace:
[1] throw_exp_domainerror(::Float64) at ./math.jl:37
[2] ^(::Float64, ::Float64) at ./math.jl:888
[3] top-level scope at REPL[5]:1
You have a constraint that consumption must be nonnegative, but if you want the objective to stay real you need c_d to be positive as well. You can either add this check directly to your objective function, as above, or you can use one of the constrained optimization algorithms from NLopt, which is available in Julia via the NLopt package.
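A minimal sketch of the first option; the strict positivity guards on consumption and c_d are an assumption I'm adding to keep the powers real and the result finite:
function adj_utility(V, s_a, s_d, s_z, i_z, c_a, c_d)
    consumption = s_z + 1.0125*s_a + (1-0.018)*s_d - c_a - c_d - 0.05*(1-0.018)*s_d
    if consumption > 0 && c_d > 0  # both bases must be positive before taking non-integer powers
        return (1/(1-2)) * (((consumption^0.88) * (c_d^(1-0.88)))^(1-2))
    else
        return -99999999           # penalty value, as in the original
    end
end
With this guard, Nelder-Mead simply sees the penalty value whenever it probes a point with negative c_d, instead of throwing a DomainError.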

Is it okay to use complex control flow in tf.function?

I have the following Python function and I want to wrap it into @tf.function (originally the input arguments are numpy arrays, but for the sake of executing on GPU it's not a problem to convert them to TF tensors).
import numpy as np

def reproject(before_frame, motion_vecs):
    reprojected_image = np.zeros((before_frame.shape[0], before_frame.shape[1], before_frame.shape[2]))
    for row_idx in range(before_frame.shape[0]):
        for col_idx in range(before_frame.shape[1]):
            for c_idx in range(before_frame.shape[2]):
                diff_u = int(round(
                    before_frame.shape[1] * motion_vecs[row_idx][col_idx][0]
                ))
                diff_v = int(round(
                    before_frame.shape[0] * motion_vecs[row_idx][col_idx][1]
                ))
                before_pixel_position = (
                    row_idx + diff_v,
                    col_idx + diff_u
                )
                if before_pixel_position[0] < before_frame.shape[0] and before_pixel_position[1] < before_frame.shape[1] \
                        and before_pixel_position[0] > 0 and before_pixel_position[1] > 0:
                    reprojected_image[row_idx][col_idx][c_idx] = before_frame[
                        before_pixel_position[0]
                    ][
                        before_pixel_position[1]
                    ][c_idx]
    return reprojected_image
I can see that in TensorFlow tutorials people use vectorized_map or map_fn instead of loops, and tf.cond instead of the if operator. So is using these functions the only option for control flow, and if so, what are the reasons behind it?

Variable defined outside a while loop not defined inside?

I'm attempting to write a Newton-Raphson solver in Julia, using the update x[n+1] = x[n] - f(x[n])/f'(x[n]).
f(x) = x^2.5 - 3x^1.5 - 10
fprime(x) = 2.5x^1.5 - 4.5x^0.5
x = zeros(1000)
x[1] = 10
δ = 1 # a relatively large number compared to what we want the error to be
iter = 1
while δ > 1e-6
    x[iter + 1] = x[iter] - f(x[iter])/fprime(x[iter])
    iter += 1
    δ = abs(x[iter] - x[iter + 1])
    if iter == 100
        break
    end
end
println("The solution is ")
show(x[iter])
However, when I run the code, I get an error saying iter is not defined, even though I defined it just before the start of the loop. Is there some scoping problem I'm completely missing?
ERROR: LoadError: UndefVarError: iter not defined
Stacktrace:
[1] top-level scope at /Users/natemcintosh/Documents/Julia/Learning_julia.jl:11 [inlined]
[2] top-level scope at ./none:0
[3] include_string(::Module, ::String, ::String) at ./loading.jl:1002
[4] (::getfield(Atom, Symbol("##120#125")){String,String,Module})() at /Users/natemcintosh/.julia/packages/Atom/Pab0Z/src/eval.jl:120
[5] withpath(::getfield(Atom, Symbol("##120#125")){String,String,Module}, ::String) at /Users/natemcintosh/.julia/packages/CodeTools/8CjYJ/src/utils.jl:30
[6] withpath at /Users/natemcintosh/.julia/packages/Atom/Pab0Z/src/eval.jl:46 [inlined]
[7] #119 at /Users/natemcintosh/.julia/packages/Atom/Pab0Z/src/eval.jl:117 [inlined]
[8] hideprompt(::getfield(Atom, Symbol("##119#124")){String,String,Module}) at /Users/natemcintosh/.julia/packages/Atom/Pab0Z/src/repl.jl:76
[9] macro expansion at /Users/natemcintosh/.julia/packages/Atom/Pab0Z/src/eval.jl:116 [inlined]
[10] (::getfield(Atom, Symbol("##118#123")){Dict{String,Any}})() at ./task.jl:85
in expression starting at /Users/natemcintosh/Documents/Julia/Learning_julia.jl:10
I've tried printing x at the beginning of the while loop and it knows what x is, but thinks iter is undefined.
First, let me give the solution. There are three possible approaches.
Approach 1. Prepend global to iter += 1, changing it to global iter += 1, and the code will run. Note, however, the comment about δ below: unless you also prepend global to δ = abs(x[iter] - x[iter + 1]), the code will run but produce wrong results. Approaches 2 and 3 do not have this problem.
Approach 2. Wrap your code inside a function like this:
f(x) = x^2.5 - 3x^1.5 - 10
fprime(x) = 2.5x^1.5 - 4.5x^0.5
function sol(f, fprime)
    x = zeros(1000)
    x[1] = 10
    δ = 1 # a relatively large number compared to what we want the error to be
    iter = 1
    while δ > 1e-6
        x[iter + 1] = x[iter] - f(x[iter])/fprime(x[iter])
        iter += 1
        δ = abs(x[iter] - x[iter + 1])
        if iter == 100
            break
        end
    end
    println("The solution is ")
    show(x[iter])
end
sol(f, fprime) # now we call it
Approach 3. Wrap your code in a let block by changing the line function sol(f, fprime) from Approach 2 to simply say let (you do not need to call sol then), as sketched below.
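For concreteness, a minimal sketch of the let variant; the body is identical to Approach 2:
f(x) = x^2.5 - 3x^1.5 - 10
fprime(x) = 2.5x^1.5 - 4.5x^0.5
let
    x = zeros(1000)
    x[1] = 10
    δ = 1
    iter = 1
    while δ > 1e-6
        x[iter + 1] = x[iter] - f(x[iter])/fprime(x[iter])
        iter += 1
        δ = abs(x[iter] - x[iter + 1])
        if iter == 100
            break
        end
    end
    println("The solution is ")
    show(x[iter])
end
Here let introduces a local scope, so iter and δ are locals of the enclosing block and the while loop can update them without the global keyword.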
Now for the reason why you have to do this.
In Julia 1.0, while introduces a new scope. The scoping rule in Julia 1.0 is that each variable assigned to inside a while loop is considered a local variable (this has changed: Julia 0.6 distinguished hard and soft local scope, while in Julia 1.0 that distinction is gone and all local scopes behave the same).
In your code you assign values to two variables inside the loop: iter and δ. This means Julia treats them as local, so you are not allowed to read their values before they have been assigned inside the loop.
You want to read iter on the line x[iter + 1] = x[iter] - f(x[iter])/fprime(x[iter]), but you assign a value to it only on the following line.
As for δ, the situation is trickier. You assign a value to it inside the loop, but it is also used in the loop condition while δ > 1e-6. That condition, however, operates on the variable defined in the outer scope (global in the original case). So the code will run, but the condition will always see δ equal to 1, because it looks at the value of the outer variable; it will never trigger, and you will always run 100 iterations. In summary, the code that does what you want is below (note that if you had not fixed the δ assignment you would not even have gotten a warning):
f(x) = x^2.5 - 3x^1.5 - 10
fprime(x) = 2.5x^1.5 - 4.5x^0.5
x = zeros(1000)
x[1] = 10
δ = 1 # a relatively large number compared to what we want the error to be
iter = 1
while δ > 1e-6
    x[iter + 1] = x[iter] - f(x[iter])/fprime(x[iter])
    global iter += 1
    global δ = abs(x[iter] - x[iter + 1])
    if iter == 100
        break
    end
end
println("The solution is ")
show(x[iter])
Finally, notice that the line x[iter + 1] = x[iter] - f(x[iter])/fprime(x[iter]) works fine even though it contains an assignment, because it does not rebind the variable x; it only changes one element of the array (x keeps pointing to the same address in memory, so Julia treats it as the same global variable all the time).
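A minimal illustration of this mutation-versus-rebinding distinction:
x = zeros(3)
while true
    x[1] = 1.0     # mutates the array the global x points to; no global keyword needed
    # x = ones(3)  # by contrast, this assignment would create a new local x
    break
end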
Also, you might want to read https://docs.julialang.org/en/latest/manual/variables-and-scoping/ in the Julia manual; the answer to the question "Julia Variable scope" is similar.

LoadError using approximate Bayesian criteria

I am getting an error that is confusing me.
using DifferentialEquations
using RecursiveArrayTools # for VectorOfArray
using DiffEqBayes
f2 = @ode_def_nohes LotkaVolterraTest begin
    dx = x*(1 - x - A*y)
    dy = rho*y*(1 - B*x - y)
end A B rho
u0 = [1.0;1.0]
tspan = (0.0,10.0)
p = [0.2,0.5,0.3]
prob = ODEProblem(f2,u0,tspan,p)
sol = solve(prob,Tsit5())
t = collect(linspace(0,10,200))
randomized = VectorOfArray([(sol(t[i]) + .01randn(2)) for i in 1:length(t)])
data = convert(Array,randomized)
priors = [Uniform(0.0, 2.0), Uniform(0.0, 2.0), Uniform(0.0, 2.0)]
bayesian_result_abc = abc_inference(prob, Tsit5(), t, data,
                                    priors; num_samples=500)
This returns the error:
ERROR: LoadError: DimensionMismatch("first array has length 400 which does not match the length of the second, 398.")
while loading..., in expression starting on line 20.
I have not been able to locate any array of size 400 or 398.
Thanks for your help.
Take a look at https://github.com/JuliaDiffEq/DiffEqBayes.jl/issues/52; that was due to an error in passing the t. This has been fixed on master, so you can use that, or wait a little while: we will have a new release soon with the 1.0 upgrades, which will include this fix too.
Thanks!