I was wondering what information is passed by the Pyomo script to the solver (e.g. Cbc). Specifically, I want to ask: whatever constraints and objective function I code, does the solver ask Python to compute these functions, or are they computed in the language the solver is written in?
Computations are done in the language the solver is written in. In most cases Pyomo takes your model and writes it to a file in the .lp or .nl format, for linear and nonlinear models respectively. The solver reads the file, creates its own representation of the model, solves the problem, and writes a .sol file with the solution. Pyomo then reads the .sol file and loads the solution back into the Pyomo model in Python. The one exception to this workflow is if you're using the direct or persistent interface to Gurobi. In that case no files are written, but I believe all the computations are still done in the language of the solver.
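For concreteness, here is a minimal sketch of that file-based round trip with Cbc, using a toy model (this assumes the cbc executable is on your PATH; keepfiles=True leaves the intermediate problem and .sol files on disk so you can inspect them):

from pyomo.environ import ConcreteModel, Var, Objective, Constraint, NonNegativeReals
from pyomo.opt import SolverFactory

model = ConcreteModel()
model.x = Var(within=NonNegativeReals)
model.y = Var(within=NonNegativeReals)
model.obj = Objective(expr=model.x + 2 * model.y)  # minimized by default
model.con = Constraint(expr=model.x + model.y >= 1)

# Pyomo writes the model file, Cbc solves it and writes a .sol file,
# and Pyomo loads the solution back into model.x and model.y
opt = SolverFactory('cbc')
results = opt.solve(model, keepfiles=True)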
I am trying to find the optimum of a data-driven function represented as a TensorFlow model.
That is, I trained a model to approximate a function and now want to find the optimum of this approximated function using an algorithm and software package/Python library like ipopt, ipyopt, casadi, .... Or is there a way to do this directly in TensorFlow? I also have to define constraints, so I can't just use simple autodiff to do gradient descent and optimize my input (see the sketch below).
Does anyone have an idea how to realize this in an efficient way?
Maybe this image visualizes my problem and makes it easier to understand what I'm looking for.
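For illustration only, here is a minimal sketch of the kind of constrained setup the question describes, using scipy.optimize.minimize as a stand-in for ipopt/casadi; the predict function below is a dummy quadratic standing in for the trained model's prediction:

import numpy as np
from scipy.optimize import minimize

# Stand-in for the trained model's prediction (e.g. a wrapped
# model.predict call); a dummy quadratic so the sketch runs on its own.
def predict(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

# One inequality constraint g(x) >= 0, here x0 + x1 >= 0.2
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 0.2}]

x0 = np.zeros(2)  # initial guess
result = minimize(predict, x0, method="SLSQP", constraints=constraints)
print(result.x, result.fun)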
I am currently co-supervising a high school student on a research project, and she is using PySCIPOpt. We would like to use PySCIPOpt to implement a machine learning method for branching.
We are using the problem here: https://miplib.zib.de/instance_details_milo-v13-4-3d-3-0.html. We would like to know if there is a function we can call in PySCIPOpt that gives us the coefficient matrix and RHS vector of this problem, so that we can modify some numbers and resend it through PySCIPOpt to optimize. The purpose of doing this is to generate more training data to be used with a package such as scikit-learn.
I have looked through the source code and could only find functions such as chgLhs and chgRhs, but these seem more difficult to use than just editing the entries of the coefficient matrix and RHS vector directly (see the sketch below).
Thank you for your help!
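For what it's worth, here is a minimal sketch of what driving chgRhs from PySCIPOpt might look like, assuming all constraints are linear and the instance has been downloaded locally (the filename is illustrative):

from pyscipopt import Model

model = Model()
model.readProblem("milo-v13-4-3d-3-0.mps")  # illustrative filename

for cons in model.getConss():
    coefs = model.getValsLinear(cons)  # one matrix row: variable name -> coefficient
    rhs = model.getRhs(cons)
    # perturb the RHS to generate a new training instance
    model.chgRhs(cons, rhs * 1.01)

model.optimize()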
I've been using Gurobi to solve an MILP problem and Pyomo to generate the model. Gurobi supports returning a solution pool, and I want to be able to generate multiple solutions using this pool. Is this supported in Pyomo?
I've tried using model.solCount and model.params.SolutionNumber, but I found out that they work for gurobipy models, not for models in Pyomo.
Is it possible to somehow load these solutions into the model iteratively?
If it isn't, what are my other options, if I have to do this with Pyomo?
You should be able to use Gurobi's feature of writing solution files to disk. Just set the SolFiles parameter to some base name and Gurobi will save all solutions:

from pyomo.opt import SolverFactory

opt = SolverFactory('gurobi')
opt.options['SolFiles'] = 'solution'
# Gurobi writes solution_0.sol, solution_1.sol, ... as new incumbents are found
results = opt.solve(model, tee=True)  # model is your Pyomo model
I am trying to understand the internal flow in MXNet when we call forward. Is there any way to get the source code of MXNet?
This really depends on what your symbolic graph looks like. I assume you use MXNet with Python (Python documentation). There you can choose to use the MXNet symbol library or the Gluon library.
Now, you were asking whether one can inspect the code, and, yes, you can find it on GitHub. The folder python contains the Python interface and src contains all MXNet sources. What happens on forward is ultimately defined by the MXNet execution engine, which tracks input/output dependencies of operators and neural network layers and allocates memory on the different devices (CPU, GPUs). There is general architecture documentation for this.
I suppose you are interested in what each and every operation does, such as argmax (reduction), tanh (unary math operation) or convolution (complex neural network operation). These you can find in the operator folder of MXNet. They would require documentation of their own; there is a special forum for MXNet specifics here, but I will give a short orientation:
Each operation in a (symbolic) execution graph needs a defined forward and backward operation. It also needs to define its output shape, so that it can be chained with other operations. If that operator needs weights, it needs to define the amount of memory it requires, so MXNet can allocate it.
Each operation requires several implementations: a) for the CPU, b) for the GPU (CUDA), and c) as a wrapper around cuDNN.
All unary math operations follow the same pattern, so they are all defined in a similar way in mshadow_op.h (e.g. relu). A Python-level sketch of the same forward/backward contract follows below.
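To make that contract concrete, here is a minimal sketch using MXNet's CustomOp API; the C++ operators under src/operator implement the same forward/backward/shape-inference pattern, just with separate CPU/GPU/cuDNN kernels:

import mxnet as mx

class MyRelu(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        # forward: y = max(x, 0)
        self.assign(out_data[0], req[0], mx.nd.maximum(in_data[0], 0))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # backward: dy/dx is 1 where x > 0, else 0
        self.assign(in_grad[0], req[0], out_grad[0] * (in_data[0] > 0))

@mx.operator.register("myrelu")
class MyReluProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(MyReluProp, self).__init__(need_top_grad=True)

    def infer_shape(self, in_shape):
        # output shape equals input shape; no auxiliary states
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return MyRelu()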
This is all I can tell you based on your quite broad question.
I haven't tried TensorFlow yet, but I'm still curious: how, and in what form, data type, and file type, does it store the acquired learning of a machine learning program for later use?
For example, TensorFlow was used to sort cucumbers in Japan. The computer used took a long time to learn from the example images what good cucumbers look like. In what form was that learning saved for future use?
I ask because I think it would be inefficient if the program had to re-learn from the images every time it needs to sort cucumbers.
Ultimately, a high-level way to think about a machine learning model is as three components: the code for the model, the data for that model, and the metadata needed to make the model run.
In TensorFlow, the code for this model is written in Python and is saved in what is known as a GraphDef. This uses a serialization format created at Google called Protobuf. Other libraries commonly use different serialization formats, such as Python's native Pickle.
The main reason you write this code is to "learn" from some training data, which is ultimately a large set of matrices full of numbers. These are the "weights" of the model, and they too are stored using Protobuf, although other formats like HDF5 exist.
TensorFlow also stores metadata associated with the model: for instance, what the input should look like (e.g. an image? some text?) and the output (e.g. a class of image, say cucumber 1 or 2? with scores or without?). This too is stored in Protobuf.
At prediction time, your code loads up the graph, the weights and the metadata, and takes some input data to produce an output. More information here.
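As a minimal sketch of that save/load cycle, using the TF1-style Saver API (which writes the weights as checkpoint files alongside the graph's metadata):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4])
w = tf.Variable(tf.zeros([4, 2]), name="w")
y = tf.matmul(x, w)

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would go here ...
    saver.save(sess, "./model.ckpt")  # weights + graph metadata to disk

# later, e.g. on the sorting machine: rebuild the graph, restore the weights
with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")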
Are you talking about the symbolic math library, or the idea of tensor flow in general? Please be more specific here.
Here are some resources that discuss the library and tensor flow
These are some tutorials
And here is some background on the field
And this is the GitHub page
If you want a more specific answer, please give more details as to what sort of work you are interested in.
Edit: So I'm presuming your question is more related to the general field of tensor flow than to any particular application. Your question is still too vague for this website, but I'll try to point you toward a few resources you might find interesting.
TensorFlow, as used in image recognition, often operates on an ANN (Artificial Neural Network). What this means is that the TensorFlow library handles the number crunching for the neural network, which I'm sure you can read all about with a quick Google search.
The point is that TensorFlow isn't a form of machine learning itself; it serves more as a number-crunching library, similar to something like NumPy in Python, for large-scale deep learning. You should read more here.
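As a tiny illustration of the number-crunching point, here is a graph-mode (TF1-style) snippet that just multiplies two matrices, much as you would with NumPy:

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [0.5]])
y = tf.matmul(a, b)  # the "flow" of tensors through an operation

with tf.Session() as sess:
    print(sess.run(y))  # [[2.], [5.]]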