Pyomo Mixed Integer Linear Optimization code with multiple sets of variables, binary variables, and prices - optimization

Hello everyone,
I am a new Pyomo user and am having a problem at the moment.
I am trying to develop a multi-time-period mixed-integer optimization problem in Python with Pyomo. I have 4 technologies whose capacity I want to optimize over 12 periods (1-12). If technology 1 is chosen in a period, technology 2 is not chosen in that period. The same goes for technologies 3 and 4. Each of these technologies has its own price per period. I set up a list of variables for each technology in each period (x11-x124), a list of binary variables for each technology in each period, and a list of prices for each technology in each period. However, I am unable to write a working objective function over all these variables.
I would appreciate any help!
With the code I have tried, I get the error: list indices must be integers or slices, not str.
I have also tried converting the lists to numpy arrays first, but I then get an error because I cannot use numpy arrays in a Pyomo optimization.
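Since the code itself is not shown, here is a minimal sketch of one way such a model could be indexed in Pyomo; the price numbers, the big-M capacity bound, and the choice of minimization are placeholder assumptions. The key idea is to use Var objects indexed over (technology, period) instead of separate Python lists, which also avoids the "list indices must be integers" error:

import pyomo.environ as pyo

periods = range(1, 13)                # 12 periods
techs = [1, 2, 3, 4]                  # 4 technologies
# Placeholder prices: price[t, p] = price of technology t in period p
price = {(t, p): 10.0 + t + 0.5 * p for t in techs for p in periods}

m = pyo.ConcreteModel()
m.T = pyo.Set(initialize=techs)
m.P = pyo.Set(initialize=periods)

m.x = pyo.Var(m.T, m.P, within=pyo.NonNegativeReals)  # capacity of tech t in period p
m.y = pyo.Var(m.T, m.P, within=pyo.Binary)            # 1 if tech t is chosen in period p

BIG_M = 1000.0  # assumed upper bound on capacity

# Capacity can only be nonzero if the technology is selected
def link_rule(m, t, p):
    return m.x[t, p] <= BIG_M * m.y[t, p]
m.link = pyo.Constraint(m.T, m.P, rule=link_rule)

# Technologies 1 and 2 are mutually exclusive in each period, as are 3 and 4
def excl_12_rule(m, p):
    return m.y[1, p] + m.y[2, p] <= 1
m.excl_12 = pyo.Constraint(m.P, rule=excl_12_rule)

def excl_34_rule(m, p):
    return m.y[3, p] + m.y[4, p] <= 1
m.excl_34 = pyo.Constraint(m.P, rule=excl_34_rule)

# Objective: total cost over all technologies and periods (assuming minimization)
m.obj = pyo.Objective(
    expr=sum(price[t, p] * m.x[t, p] for t in m.T for p in m.P),
    sense=pyo.minimize,
)

# pyo.SolverFactory("glpk").solve(m)  # requires a MILP solver to be installed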

Related

Optimizing Parameters using AI technique

I know that my question is general, but I'm new to the AI area.
I have an experiment with some parameters (about 6 of them). Each one of them is independent, and I want to find the combination that maximizes or minimizes the output function. However, doing this with a traditional programming technique would take a long time, since I would need six nested loops.
I just want to know which AI technique to use for this problem: a genetic algorithm? A neural network? Machine learning?
Update
Actually, the problem could have more than one evaluation function.
It will have one function that we want to minimize (cost)
and another function that we want to maximize (capacity).
More functions may be added later.
Example:
Constructing a glass window can be done in a million ways, but we want the strongest window at the lowest cost. Many parameters affect the pressure capacity of the window, such as the strength of the glass, its height and width, and the slope of the window.
Obviously, if we go to the extreme case (the strongest glass, the smallest width and height, and zero slope) the window will be extremely strong. However, the cost will be very high.
I want to study the interaction between the parameters within a specific range.
Without knowing much about the specific problem, it sounds like Genetic Algorithms would be ideal. They've been used a lot for parameter optimisation and have often given good results. Personally, I've used them to narrow parameter ranges for edge detection techniques with about 15 variables, and they did a decent job.
Having multiple evaluation functions needn't be a problem if you code this into the Genetic Algorithm's fitness function. I'd look up multi-objective optimisation with genetic algorithms.
I'd start here: Multi-Objective optimization using genetic algorithms: A tutorial
First of all, if you have multiple competing targets, the problem is ill-defined.
You have to find a single value that you want to maximize... for example:
value = strength - k*cost
or
value = strength / (k1 + k2*cost)
In both, for a fixed strength the lower cost wins, and for a fixed cost the higher strength wins, but now you have a formula that lets you decide whether a given solution is better or worse than another. If you don't do this, how can you decide whether one solution is better than another that is cheaper but weaker?
In some cases a correctly defined value requires a more complex function... for example, the value of extra strength could stop increasing beyond a certain point (i.e. having a result stronger than a prescribed amount is just pointless), or the cost could have a cap (above a certain amount a solution is not interesting because it would place the final price out of the market).
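As a toy sketch of these ideas (the caps, the constant k, and all the numbers are made up for illustration):

def value(strength, cost, strength_cap=1000.0, cost_cap=500.0, k=2.0):
    # Combine the two competing targets into one scalar to maximize.
    if cost > cost_cap:
        return float("-inf")                           # priced out of the market: not interesting at all
    useful_strength = min(strength, strength_cap)      # being stronger than needed adds nothing
    return useful_strength - k * cost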
Once you have found the criterion, and if the parameters are independent, a very simple approach that in my experience is still decent is:
1. Pick a random solution by choosing n random values, one for each parameter, within the allowed boundaries.
2. Compute the target value for this starting point.
3. Pick a random number 1 <= k <= n and, for each of k parameters randomly chosen from the n, compute a random signed increment and change the parameter by that amount.
4. Compute the new target value for the perturbed solution.
5. If the new value is better, keep the new position; otherwise revert to the original one.
6. Repeat from step 3 until you run out of time.
Depending on the target function, some random distributions for the increments work better than others; it may also be that the best choice differs from parameter to parameter.
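A minimal Python sketch of this loop; the evaluate function, the bounds, the step size and the iteration count are all placeholders for illustration:

import random

def hill_climb(evaluate, bounds, step=0.1, iterations=10000):
    # evaluate: takes a list of parameter values, returns the value to maximize
    # bounds: list of (lo, hi) pairs, one per parameter
    n = len(bounds)
    # 1. random starting point within the allowed boundaries
    current = [random.uniform(lo, hi) for lo, hi in bounds]
    # 2. target value of the starting point
    best_value = evaluate(current)

    for _ in range(iterations):
        # 3. perturb k randomly chosen parameters by a random signed increment
        k = random.randint(1, n)
        candidate = list(current)
        for i in random.sample(range(n), k):
            lo, hi = bounds[i]
            candidate[i] += random.uniform(-step, step) * (hi - lo)
            candidate[i] = min(max(candidate[i], lo), hi)   # stay inside the boundaries
        # 4. target value of the perturbed solution
        value = evaluate(candidate)
        # 5. keep the new position only if it is better
        if value > best_value:
            current, best_value = candidate, value
        # 6. the loop repeats from step 3 until the iteration budget runs out

    return current, best_value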
Some time ago I wrote some C++ code for solving optimization problems using Genetic Algorithms. Here it is: http://create-technology.blogspot.ro/2015/03/a-genetic-algorithm-for-solving.html
It should be very easy to follow.

Reproducing Excel Solver GRG Nonlinear Optimization in Visual Basic .NET

I am trying to re-produce the following Excel Solver GRG Nonlinear optimization using the Microsoft Solver Foundation in VB.NET (numbers are simplified for the sake of this example):
Objective: Total Gas Rate = 100000
Variable: Well 1 Oil Rate
Constraints: 0 <= Well 1 Gas Rate <= 1000, Well 2 Gas Rate = 2000
This optimization is subject to the following relationships:
Well 1 Gas Rate = Well 1 Oil Rate * 5
Total Gas Rate = Well 1 Gas Rate + Well 2 Gas Rate
Is it possible to solve such a problem using Solver Foundation? When trying to implement this, the two things I struggled with are:
It appears that Solver Foundation models only have two GoalKinds: minimize and maximize. In my case, I am trying to optimize toward a specific value. Is there any way to do this?
How do I define the above relationships? I would think the latter would be defined as part of the goal definition (e.g., model.AddGoal("total_gas_rate", GoalKind.[not sure what goes here], Well1PGasRate + Well2PGasRate)), but how do I define the other one?
Thanks!
This answer only addresses part 1 of your question, and only in a conceptual manner, but hopefully it is helpful. If you're trying to optimize toward a specific target value and the output of your function is output, then you could try something like this pseudocode:
minimize(absolute_value(output/target-1))
Effectively, this will give you a value that reaches zero as the output of your function nears the target value. So you can still use an optimization engine that minimizes the final output of your function.
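To make the reformulation concrete, here is a small sketch in Python rather than Solver Foundation, using the simplified relationships from the question; scipy and the oil-rate bound of 0-200 (which keeps Well 1 Gas Rate within 0-1000) are just illustrative choices:

from scipy.optimize import minimize_scalar

TARGET_TOTAL_GAS_RATE = 100000.0
WELL2_GAS_RATE = 2000.0

def total_gas_rate(well1_oil_rate):
    well1_gas_rate = well1_oil_rate * 5        # Well 1 Gas Rate = Well 1 Oil Rate * 5
    return well1_gas_rate + WELL2_GAS_RATE     # Total Gas Rate = Well 1 + Well 2

def objective(well1_oil_rate):
    # Reaches zero exactly when the total gas rate hits the target
    return abs(total_gas_rate(well1_oil_rate) / TARGET_TOTAL_GAS_RATE - 1.0)

# With these simplified numbers the target is out of reach, so the minimizer
# simply pushes the oil rate to the bound that gets closest to the target.
result = minimize_scalar(objective, bounds=(0.0, 200.0), method="bounded")
print(result.x, total_gas_rate(result.x))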

Calculate Bell Curve Values

In an MS Access 2007 app, which manages contracts and changes for large construction projects, I need to create a Bell Curve representing a Contract Value, over a time period.
For example, a $500m contract runs for, say, 40 months, and I need a Bell Curve that distributes the Contract Value over these 40 months. The idea is to present a starting point for cashflow projections during the life of the contract.
Using VBA, I had thought to create the 'monthly' values and store them in a temp table, for later use in a report chart. However, I'm stuck trying to work out an algorithm.
Any suggestions on how I might tackle this would be most appreciated.
You will need the =NORMSDIST() function borrowed from Excel as follows:
Public Function Normsdist(X As Double) As Double
    ' Wraps Excel's NORMSDIST; requires a reference to the Microsoft Excel Object Library
    Normsdist = Excel.WorksheetFunction.Normsdist(X)
End Function
Using this function to distribute a cash flow over x periods, assuming a standard normal distribution, requires some knowledge of statistics. I created an Excel sheet to demonstrate how this function is used and posted it here:
Normal Distribution of a cash flow sample .XLSX
If for some reason you hate the idea of using an Excel function, you can pull out any statistics text or search for the formula that generates a series of normal values. In your case you want to distribute the cash flow over three standard deviations in each tail, so that's a total of six (6) standard deviations. Dividing 40 months over 6 standard deviations gives 6/40 = 0.15 standard deviations per data point (month). Use a For/Next/Step or similar loop to write the values to a temporary table, as you suggested, and graph it with a column chart (as seen in the above Excel example). It will take just a little VBA coding to make the number of months and the total contract value user-supplied.
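For illustration, here is the same arithmetic as a short Python sketch (the function and parameter names are made up; a VBA version would follow the identical loop):

from math import erf, sqrt

def normsdist(z):
    # Standard normal CDF, equivalent to Excel's NORMSDIST
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bell_curve_cashflow(contract_value, months, sigmas=3.0):
    step = 2.0 * sigmas / months                       # e.g. 6 / 40 = 0.15 std devs per month
    covered = normsdist(sigmas) - normsdist(-sigmas)   # rescale so the months sum to the full value
    monthly = []
    for i in range(1, months + 1):
        lo = -sigmas + (i - 1) * step
        hi = -sigmas + i * step
        share = (normsdist(hi) - normsdist(lo)) / covered
        monthly.append(contract_value * share)
    return monthly

values = bell_curve_cashflow(500_000_000, 40)   # $500m over 40 months
print(sum(values))                              # ~500,000,000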
A standard-normal distribution has a mean of 0 and standard deviation of 1. If you want a flatter bell curve, you can use the NormDist function instead where you can specify the mean and st. dev.

Optimization through machine learning

I've got a system that takes 15 points out of a 17 by 17 grid as input (order doesn't matter), and generates a single scalar as output. The system is not representable by a formal function.
The goal is to find the optimal 15 points so that the output scalar is minimum. Solving this problem exhaustively simply takes too much time to be practical as each run takes 14 seconds.
I've started taking a machine learning course online, but this problem does seem to be rather unsophisticated, and I wonder if anyone can point me in the right direction. Any help is greatly appreciated!
Use simulated annealing. I guess this will be close to optimal here.
Start with a random placement of the 15 points. Then, in each iteration, move one point and accept the new state if the resulting scalar value is lower. If it is larger, accept it with a certain probability (a Boltzmann factor). Finally, repeat this for a small number of randomly chosen initial states and keep the lowest value found.
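A rough Python sketch of this procedure; the temperature schedule, the step count and the move rule are placeholder assumptions, and evaluate() stands for the 14-second black-box run:

import math
import random

def simulated_annealing(evaluate, grid_size=17, n_points=15,
                        steps=2000, t_start=1.0, t_end=0.01):
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)]

    # Random initial selection of 15 cells
    state = set(random.sample(cells, n_points))
    energy = evaluate(state)
    best_state, best_energy = set(state), energy

    for step in range(steps):
        # Exponentially decaying temperature
        t = t_start * (t_end / t_start) ** (step / steps)

        # Move one point to a cell that is currently unused
        candidate = set(state)
        candidate.remove(random.choice(list(candidate)))
        candidate.add(random.choice([c for c in cells if c not in candidate]))

        e = evaluate(candidate)
        # Accept improvements always, worse states with a Boltzmann probability
        if e < energy or random.random() < math.exp(-(e - energy) / t):
            state, energy = candidate, e
            if energy < best_energy:
                best_state, best_energy = set(state), energy

    return best_state, best_energy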

optimizing a function to find global and local peaks with R

I have 6 parameters for which I know the maximum and minimum values. I have a complex function that takes the 6 parameters and returns a 7th value (say Y). I say complex because Y is not directly related to the 6 parameters; there are many embedded functions in between.
I would like to find the combination of the 6 parameters which returns the highest Y value. I first tried to calculate Y for every combination by constructing a hypercube, but I do not have enough memory on my computer. So I am looking for something like Markov chains which progress through the delimited parameter space and are able to escape local peaks.
When I give one combination of the 6 parameters, I would like to know the highest local Y value. I tried to write code with an iterative chain, like a Markov chain, but I am not sure how to proceed when the chain reaches an edge of the parameter space. Obviously, some algorithms should already exist for this.
Question: Does anybody know the best functions in R to do these two things? I read that optim() could be appropriate for finding the global peak, but I am not sure that it can deal with complex functions (I prefer asking before engaging in a long (for me) process of code writing). And for the local peaks? optim() should not be able to do this.
Thank you in advance for any leads.
Julien from France
Take a look at the Optimization and Mathematical Programming Task View on CRAN. I've personally found the differential evolution algorithm to be very fast and robust. It's implemented in the DEoptim package. The rgenoud package is another good candidate.
I like to use the Metropolis-Hastings algorithm. Since you are limiting each parameter to a range, the simple thing to do is let your proposal distribution be uniform over that range. That way, you won't run off the edges. It won't be fast, but if you let it run long enough, it will do a good job of sampling your space. The samples will congregate at each peak and will spread out around them in a way that reflects the local curvature.
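The question asks about R, but the mechanics are short enough to sketch in Python; log_f, the bounds and the iteration count below are illustrative assumptions, with log_f standing for the log of the target density (for example Y itself, up to scaling):

import math
import random

def metropolis_hastings(log_f, bounds, iterations=100000):
    # bounds: list of (lo, hi) pairs, one per parameter
    current = [random.uniform(lo, hi) for lo, hi in bounds]
    current_val = log_f(current)
    samples = []

    for _ in range(iterations):
        # Proposal is uniform over the whole allowed range, so it never leaves the box
        proposal = [random.uniform(lo, hi) for lo, hi in bounds]
        proposal_val = log_f(proposal)
        # Accept with probability min(1, exp(proposal_val - current_val))
        if proposal_val >= current_val or random.random() < math.exp(proposal_val - current_val):
            current, current_val = proposal, proposal_val
        samples.append(list(current))

    return samples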