I have a question about modelling a simple quadratic assignment problem (QAP) with Gurobi. As you know, the objective function is not linear. Can we model and solve it with Gurobi? (I am using the Gurobi/Python interface.)
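Gurobi can take this objective directly: the Python API accepts quadratic expressions, and since version 9.0 even general non-convex quadratics are supported via the NonConvex parameter (products of binary assignment variables, as in a QAP, are handled by the solver). A minimal sketch of the Koopmans-Beckmann form with made-up flow and distance matrices:

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

n = 4  # number of facilities/locations (tiny, for illustration)
rng = np.random.default_rng(0)
F = rng.integers(1, 10, size=(n, n))  # made-up flow matrix
D = rng.integers(1, 10, size=(n, n))  # made-up distance matrix

m = gp.Model("qap")
x = m.addVars(n, n, vtype=GRB.BINARY, name="x")  # x[i,k] = 1 iff facility i sits at location k

# assignment constraints: one location per facility and vice versa
m.addConstrs(x.sum(i, "*") == 1 for i in range(n))
m.addConstrs(x.sum("*", k) == 1 for k in range(n))

# quadratic objective: flow * distance summed over all pairs of assignments
m.setObjective(
    gp.quicksum(int(F[i, j] * D[k, l]) * x[i, k] * x[j, l]
                for i in range(n) for j in range(n)
                for k in range(n) for l in range(n)),
    GRB.MINIMIZE)
# m.Params.NonConvex = 2  # only needed if Gurobi rejects the objective as non-convex
m.optimize()
```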
Related
I am trying to find the optimum of a data-driven function represented as a TensorFlow model.
That is, I trained a model to approximate a function and now want to find the optimum of this approximated function using an algorithm and software package/Python library like ipopt, ipyopt, casadi, .... Or is there a way to do this directly in TensorFlow? I also have to define constraints, so I can't just use plain autodiff to do gradient descent and optimize my input.
Does anyone have an idea how to realize this in an efficient way?
Maybe this image illustrates my problem and makes it easier to understand what I'm looking for.
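One pragmatic route, given that the surrogate already provides gradients via autodiff: expose it as an objective-plus-gradient callback for scipy.optimize.minimize and hand the constraints to an SQP method such as SLSQP. The sketch below makes several assumptions of its own (the file name surrogate.h5, the input dimension d, and the unit-ball constraint are all placeholders for the real problem):

```python
import numpy as np
import tensorflow as tf
from scipy.optimize import minimize

d = 3  # assumed input dimension of the surrogate
model = tf.keras.models.load_model("surrogate.h5")  # hypothetical trained model, R^d -> R

def f_and_grad(x):
    """Evaluate the surrogate and its gradient at x via autodiff."""
    xt = tf.convert_to_tensor(x.reshape(1, -1), dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(xt)
        y = model(xt)
    g = tape.gradient(y, xt)
    return float(y.numpy().squeeze()), g.numpy().ravel().astype(np.float64)

# stand-in constraint ||x||^2 <= 1; replace with the real constraints
cons = [{"type": "ineq", "fun": lambda x: 1.0 - np.dot(x, x)}]
res = minimize(f_and_grad, x0=np.zeros(d), jac=True, method="SLSQP", constraints=cons)
print(res.x, res.fun)
```

The heavier but more robust alternative, in line with the CasADi/Ipopt tools already mentioned, would be to rebuild the network's layers as CasADi expressions and solve the constrained problem with Ipopt.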
I am reading the following paper about forecasting interest rates: https://onlinelibrary.wiley.com/doi/full/10.1002/for.2783.
In Section 3.2.3 (the Hull-White model), it mentions that the parameters can be found by solving an optimization problem that minimizes the difference between actual and model interest rates:
[image: optimisation problem]
The model interest rate is given by the solution of the Hull-White equation, which involves a stochastic integral, as shown here:
[image: interest rate solution]
Is there a well-known method to deal with this kind of problem? Thank you!
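For context, the standard Hull-White short-rate dynamics are dr(t) = (θ(t) − a r(t)) dt + σ dW(t), whose solution is r(t) = e^(−at) r(0) + ∫₀ᵗ e^(−a(t−s)) θ(s) ds + σ ∫₀ᵗ e^(−a(t−s)) dW(s). The stochastic integral has zero mean, so calibrating to observed rates by least squares only needs the deterministic part. A minimal sketch with SciPy, under two simplifying assumptions that are mine rather than the paper's (constant θ, made-up market data):

```python
import numpy as np
from scipy.optimize import least_squares

# made-up market rates at maturities t_i (illustration only)
t = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
r_market = np.array([0.021, 0.023, 0.025, 0.028, 0.030])
r0 = 0.020  # assumed current short rate

def model_rate(params, t):
    # expected Hull-White short rate with constant theta:
    # E[r(t)] = r0 * exp(-a t) + (theta / a) * (1 - exp(-a t))
    a, theta = params
    return r0 * np.exp(-a * t) + (theta / a) * (1.0 - np.exp(-a * t))

# minimize the sum of squared differences between model and market rates
res = least_squares(lambda p: model_rate(p, t) - r_market,
                    x0=[0.1, 0.003],
                    bounds=([1e-4, -np.inf], [np.inf, np.inf]))  # keep a > 0
a_hat, theta_hat = res.x
```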
I am working on non-convex optimization these days, and a question came to mind about the application of non-convex optimization in deep learning. How can we be sure that our objective function is convex? Thanks.
The standard definition: f is convex if f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y) for all 0 ≤ θ ≤ 1, with x, y ranging over a convex domain.
So if you can prove that for your function, you know it's convex.
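A definition like this is hard to verify analytically for a neural-network loss, but it can be checked numerically: sampling random (x, y, θ) triples can refute convexity by finding a violated inequality, though passing every trial proves nothing. A small sketch:

```python
import numpy as np

def refute_convexity(f, dim, trials=10_000, seed=0):
    """Search for a violation of f(th*x + (1-th)*y) <= th*f(x) + (1-th)*f(y).
    Returns a counterexample (x, y, th) if found, else None; a None result
    is evidence of convexity, not a proof."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        th = rng.uniform()
        if f(th * x + (1 - th) * y) > th * f(x) + (1 - th) * f(y) + 1e-9:
            return x, y, th
    return None

# example: a non-convex function is refuted almost immediately
print(refute_convexity(lambda v: float(np.sin(v).sum()), dim=2))
```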
In deep learning the objective function is generally non-convex, and it's very difficult to verify convexity; that's why initialization and hyperparameter tuning become very important.
I am currently co-supervising a high school student on a research project, and she is using PySCIPOpt. We would like to use PySCIPOpt to implement a machine learning method for branching.
We are using the problem here: https://miplib.zib.de/instance_details_milo-v13-4-3d-3-0.html. We would like to know whether there is a function in PySCIPOpt that gives us the coefficient matrix and RHS vector of this problem, so that we can modify some entries and re-solve the modified problem through PySCIPOpt. The purpose is to generate more training data for a package such as Scikit-learn.
I have looked through the source code and could only find functions such as chgLhs and chgRhs, but these seem harder to use than just editing the entries of the coefficient matrix and RHS vector directly.
Thank you for your help!
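As far as I know there is no single call that returns the whole matrix, but for a purely linear instance like this one the matrix and RHS can be assembled constraint by constraint from getValsLinear/getRhs and the perturbed values pushed back with chgRhs. A sketch, assuming the .mps file from the MIPLIB page has been downloaded locally:

```python
import numpy as np
from pyscipopt import Model

model = Model()
model.readProblem("milo-v13-4-3d-3-0.mps")  # instance file downloaded from MIPLIB

variables = model.getVars()
col = {v.name: j for j, v in enumerate(variables)}  # variable name -> column index
conss = model.getConss()

# assemble the coefficient matrix and RHS vector
# (dense only for illustration; an instance this size wants scipy.sparse)
A = np.zeros((len(conss), len(variables)))
rhs = np.empty(len(conss))
for i, c in enumerate(conss):
    for name, coef in model.getValsLinear(c).items():  # linear constraints only
        A[i, col[name]] = coef
    rhs[i] = model.getRhs(c)

# perturb the finite right-hand sides, push them back, and re-solve
for i, c in enumerate(conss):
    if not model.isInfinity(abs(rhs[i])):
        model.chgRhs(c, rhs[i] * 1.01)
model.optimize()
```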
I am looking for optimization modelling libraries in Python, like CVXPY and Pyomo, with support for complex variables (variables with a real and an imaginary part) and nonlinear problems. CVXPY supports complex variables but doesn't support nonlinear functions in constraints. Pyomo, on the other hand, supports nonlinear problems but doesn't support complex variables.
In short: I am working on a large-scale nonlinear, nonconvex optimization problem with some complex variables, and I am looking for something like CVXPY for this class of problems.
Any suggestions?
Thanks!
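One workaround in the meantime: split every complex variable into its real and imaginary parts and rewrite magnitudes and products in those terms, which makes the problem expressible in Pyomo and solvable with a nonlinear solver. A minimal sketch on a toy problem of my own (minimize |z − (1 + 2i)|² subject to |z| ≤ 2), assuming an Ipopt installation:

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.zr = pyo.Var(initialize=0.0)  # Re(z)
m.zi = pyo.Var(initialize=0.0)  # Im(z)
m.mag = pyo.Constraint(expr=m.zr**2 + m.zi**2 <= 4.0)          # |z|^2 <= 4
m.obj = pyo.Objective(expr=(m.zr - 1.0)**2 + (m.zi - 2.0)**2,  # |z - (1+2i)|^2
                      sense=pyo.minimize)
pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.zr), pyo.value(m.zi))
```

The cost is bookkeeping: every complex operation (conjugation, magnitude, products) has to be expanded by hand into real arithmetic.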