Controlling the parameters of the Python cvxopt solver while performing SDP optimization

I have Python code to solve the following simple semidefinite program (SDP):
Input: two real 4 x 4 matrices, A and B
Output: 1 - q, where q is the maximum over all p such that:
0 < p < 1
A - p B is positive semidefinite
When I solve an instance of the above problem in verbose mode, the code produces the following output:
10:Channel_Flow pavithran$ python stackoverflow_sdp.py
*** Dualizing the problem... ***
[ #################################################################### ] 100%
[ #################################################################### ] 100%
--------------------------
cvxopt CONELP solver
--------------------------
pcost dcost gap pres dres k/t
0: 5.5546e-01 5.5546e-01 2e+01 3e+00 2e+00 1e+00
1: -4.3006e-01 -7.3065e-02 3e+00 6e-01 3e-01 6e-01
2: -4.9751e+01 2.1091e+00 9e+03 2e+01 8e+00 6e+01
3: -3.4525e+02 7.6511e-02 9e+03 2e+00 1e+00 3e+02
4: -3.4496e+04 7.6337e-02 9e+05 2e+00 1e+00 3e+04
5: -3.4496e+06 7.6337e-02 9e+07 2e+00 1e+00 3e+06
6: -3.4496e+08 7.6337e-02 9e+09 2e+00 1e+00 3e+08
Certificate of dual infeasibility found.
cvxopt status: dual infeasible
*** Dual Solution not found
Traceback (most recent call last):
File "stackoverflow_sdp.py", line 42, in <module>
simple_sdp(A,B)
File "stackoverflow_sdp.py", line 31, in simple_sdp
prob.solve(verbose = 2)
File "/Library/Python/2.7/site-packages/picos/problem.py", line 4246, in solve
raise Exception("\033[1;31m no Primals retrieved from the dual problem \033[0m")
Exception: no Primals retrieved from the dual problem
10:Channel_Flow pavithran$
The solver reports several quantities at each iteration. I would like to know whether it is possible to set a termination bound on any of them, other than capping the maximum number of iterations. For instance, can we specify a limit on "gap", "pres", or "dres"?
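cvxopt's conic solver does expose termination tolerances in addition to the iteration cap; they live in the cvxopt.solvers.options dictionary. A minimal sketch, assuming you can reach the cvxopt layer directly (PICOS may override these with its own option handling, so check which options your PICOS version forwards):

from cvxopt import solvers

solvers.options['abstol'] = 1e-7     # absolute tolerance on the duality gap
solvers.options['reltol'] = 1e-6     # relative tolerance on the duality gap
solvers.options['feastol'] = 1e-7    # tolerance on the primal/dual residuals (pres/dres)
solvers.options['maxiters'] = 100    # iteration cap, for completeness
solvers.options['show_progress'] = True

Note, however, that the run above ends with a certificate of dual infeasibility, so adjusting tolerances will not by itself make a primal solution appear.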


How to deal with the error when using Gurobi with cvxpy: AttributeError: Unable to retrieve attribute 'BarIterCount'
I have an integer programming problem that I model with cvxpy and solve with Gurobi.
When the number of variables is small, the result is fine. Once the number of variables reaches a scale of about 43*13*6, the error below occurs. I suppose it is caused by the size of the problem, where the Gurobi solver cannot report BarIterCount, which I take to be the maximum number of iterations needed.
So I wonder: is there any way to manually set the BarIterCount attribute of Gurobi through the cvxpy interface? Or is there another way to solve this problem?
Thanks for any suggestions.
The trace log is as follows:
If my model is small, e.g. the number that controls the scale of the model is 3, then the program runs fine. The trace is:
Using license file D:\software\lib\site-packages\gurobipy\gurobi.lic
Restricted license - for non-production use only - expires 2022-01-13
Parameter OutputFlag unchanged
Value: 1 Min: 0 Max: 1 Default: 1
D:\software\lib\site-packages\cvxpy\reductions\solvers\solving_chain.py:326: DeprecationWarning: Deprecated, use Model.addMConstr() instead
solver_opts, problem._solver_cache)
Changed value of parameter QCPDual to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64)
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 126 rows, 370 columns and 2689 nonzeros
Model fingerprint: 0x70d49530
Variable types: 0 continuous, 370 integer (369 binary)
Coefficient statistics:
Matrix range [1e+00, 7e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 6e+00]
Found heuristic solution: objective 7.0000000
Presolve removed 4 rows and 90 columns
Presolve time: 0.01s
Presolved: 122 rows, 280 columns, 1882 nonzeros
Variable types: 0 continuous, 280 integer (279 binary)
Root relaxation: objective 4.307692e+00, 216 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 4.30769 0 49 7.00000 4.30769 38.5% - 0s
H 0 0 6.0000000 4.30769 28.2% - 0s
0 0 5.00000 0 35 6.00000 5.00000 16.7% - 0s
0 0 5.00000 0 37 6.00000 5.00000 16.7% - 0s
0 0 5.00000 0 7 6.00000 5.00000 16.7% - 0s
Cutting planes:
Gomory: 4
Cover: 9
MIR: 4
StrongCG: 1
GUB cover: 9
Zero half: 1
RLT: 1
Explored 1 nodes (849 simplex iterations) in 0.12 seconds
Thread count was 32 (of 32 available processors)
Solution count 2: 6 7
Optimal solution found (tolerance 1.00e-04)
Best objective 6.000000000000e+00, best bound 6.000000000000e+00, gap 0.0000%
If the number is 6, then the error occurs:
-------------------------------------------------------
Using license file D:\software\lib\site-packages\gurobipy\gurobi.lic
Restricted license - for non-production use only - expires 2022-01-13
Parameter OutputFlag unchanged
Value: 1 Min: 0 Max: 1 Default: 1
D:\software\lib\site-packages\cvxpy\reductions\solvers\solving_chain.py:326: DeprecationWarning: Deprecated, use Model.addMConstr() instead
solver_opts, problem._solver_cache)
Changed value of parameter QCPDual to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64)
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Traceback (most recent call last):
File "model.py", line 274, in <module>
problem.solve(solver=cp.GUROBI,verbose=True)
File "D:\software\lib\site-packages\cvxpy\problems\problem.py", line 396, in solve
return solve_func(self, *args, **kwargs)
File "D:\software\lib\site-packages\cvxpy\problems\problem.py", line 754, in _solve
self.unpack_results(solution, solving_chain, inverse_data)
File "D:\software\lib\site-packages\cvxpy\problems\problem.py", line 1058, in unpack_results
solution = chain.invert(solution, inverse_data)
File "D:\software\lib\site-packages\cvxpy\reductions\chain.py", line 79, in invert
solution = r.invert(solution, inv)
File "D:\software\lib\site-packages\cvxpy\reductions\solvers\qp_solvers\gurobi_qpif.py", line 59, in invert
s.NUM_ITERS: model.BarIterCount,
File "src\gurobipy\model.pxi", line 343, in gurobipy.gurobipy.Model.__getattr__
File "src\gurobipy\model.pxi", line 1842, in gurobipy.gurobipy.Model.getAttr
File "src\gurobipy\attrutil.pxi", line 100, in gurobipy.gurobipy.__getattr
AttributeError: Unable to retrieve attribute 'BarIterCount'
Hopefully this provides more hints towards a solution.
BarIterCount is the number of barrier iterations performed to solve an LP. It is not a limit on the number of iterations, and it should only be queried once the current optimization has finished. You cannot set this attribute either, of course.
To actually limit the number of iterations the barrier algorithm is allowed to take, you can use the parameter BarIterLimit.
Please inspect your log file for further information about the solver's behavior.
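If the goal is to cap the barrier iterations, cvxpy forwards extra solver keyword arguments to Gurobi as parameters, so the limit can be passed through the solve call. A sketch with a placeholder model (the variables and constraints below are illustrative, not the asker's model):

import cvxpy as cp

# Placeholder integer program standing in for the real model.
x = cp.Variable(10, integer=True)
problem = cp.Problem(cp.Minimize(cp.sum(x)), [x >= 0, x <= 5])

# Extra keyword arguments are handed to Gurobi as solver parameters,
# so BarIterLimit caps the barrier iterations per LP solve.
problem.solve(solver=cp.GUROBI, verbose=True, BarIterLimit=1000)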

Running 'examples/sumo/grid.py': FatalFlowError: 'Not enough vehicles have spawned! Bad start?'

I want to simulate a traffic jam on the grid example, so I tried increasing the number of rows and columns, or the values of num_cars_left / num_cars_right / num_cars_top / num_cars_bot.
For example:
n_rows = 5
n_columns = 5
num_cars_left = 50
num_cars_right = 50
num_cars_top = 50
num_cars_bot = 50
Then, when I run it from the command line, there is an error:
Loading configuration... done.
Success.
Loading configuration... done.
Traceback (most recent call last):
File "examples/sumo/grid.py", line 237, in <module>
exp.run(1, 1500)
File "/home/dnl/flow/flow/core/experiment.py", line 118, in run
state = self.env.reset()
File "/home/dnl/flow/flow/envs/loop/loop_accel.py", line 167, in reset
obs = super().reset()
File "/home/dnl/flow/flow/envs/base_env.py", line 520, in reset
raise FatalFlowError(msg=msg)
flow.utils.exceptions.FatalFlowError:
Not enough vehicles have spawned! Bad start?
Missing vehicles / initial state:
- human_994: ('human', 'bot4_0', 0, 446, 0)
- human_546: ('human', 'top0_5', 0, 466, 0)
- human_886: ('human', 'bot3_0', 0, 366, 0)
- human_689: ('human', 'bot1_0', 0, 396, 0)
.....
I then checked flow/flow/envs/base_env.py, which contains the check that raises this error:
# check to make sure all vehicles have been spawned
if len(self.initial_ids) > len(initial_ids):
    missing_vehicles = list(set(self.initial_ids) - set(initial_ids))
    msg = '\nNot enough vehicles have spawned! Bad start?\n' \
          'Missing vehicles / initial state:\n'
    for veh_id in missing_vehicles:
        msg += '- {}: {}\n'.format(veh_id, self.initial_state[veh_id])
    raise FatalFlowError(msg=msg)
So my question is: is there a limit on the number of rows, columns, and num_cars_left/right/top/bot? If I want to simulate a traffic jam on the grid, how should I do it?
The grid example examples/sumo/grid.py doesn't use inflows by default; instead it spawns the vehicles directly on the input edges. So if you increase the number of vehicles, you have to increase the length of the edges they spawn on. I tried your example and this setting works for me:
inner_length = 300
long_length = 500
short_length = 500
n_rows = 5
n_columns = 5
num_cars_left = 50
num_cars_right = 50
num_cars_top = 50
num_cars_bot = 50
The length of the edges the vehicles spawn on is short_length; it is the one you want to increase if the vehicles don't have enough room to be added.
Also, changing the number of rows and columns doesn't help here, because 50 vehicles will be added to each of them; in this case you will have 20 input edges with 50 vehicles each, 1000 vehicles in total, which will be quite laggy.
If you want to use continuous inflows instead of one-time spawning, have a look at the use_inflows parameter of the grid_example function in examples/sumo/grid.py and at what it does when it is set to True (see the sketch below).
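As a rough sketch (the exact signature of grid_example may differ between flow versions, so treat this as an assumption to verify):

# at the bottom of examples/sumo/grid.py, after enlarging short_length as above
exp = grid_example(use_inflows=True)   # continuous inflows instead of one-time spawning
exp.run(1, 1500)                       # same call as in the original script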

PuLP - COIN-CBC error: How to add constraint with double inequality and relaxation?

I want to add this set of constraints:
-M(1 - X_{i,j,k,n}) ≤ S_{i,j,k,n} - ToD_{i,j,k,n} ≤ M(1 - X_{i,j,k,n})   ∀ i,j,k,n
where M is a big number, S is an integer variable that takes values between 0 and 1440, ToD is a 4-dimensional matrix whose values are read from an Excel sheet, and X is a 0-1 decision variable.
I tried to implement this in code as follows:
for n in range(L):
    for k in range(M):
        for i in range(N):
            for j in range(N):
                if (i != START_POINT_S & i != END_POINT_T & j != START_POINT_S & j != END_POINT_T):
                    prob += (-BIG_NUMBER*(1-X[i][j][k][n])) <= (S[i][j][k][n] - ToD[i][j][k][n]), ""
and another constraint as follows:
for i in range(N):
    for j in range(N):
        for k in range(M):
            for n in range(L):
                if (i != START_POINT_S & i != END_POINT_T & j != START_POINT_S & j != END_POINT_T):
                    prob += S[i][j][k][n] - ToD[i][j][k][n] <= BIG_NUMBER*(1-X[i][j][k][n]), ""
In my experience those two constraints are, in code, equivalent to what we want. The problem is that PuLP and CBC won't accept them. They produce the following errors:
PuLP:
Traceback (most recent call last):
File "basic_JP.py", line 163, in <module>
prob.solve()
File "C:\Users\dimri\Desktop\Filesystem\Projects\deliverable_B4\lib\site-packa
ges\pulp\pulp.py", line 1643, in solve
status = solver.actualSolve(self, **kwargs)
File "C:\Users\dimri\Desktop\Filesystem\Projects\deliverable_B4\lib\site-packa
ges\pulp\solvers.py", line 1303, in actualSolve
return self.solve_CBC(lp, **kwargs)
File "C:\Users\dimri\Desktop\Filesystem\Projects\deliverable_B4\lib\site-packa
ges\pulp\solvers.py", line 1366, in solve_CBC
raise PulpSolverError("Pulp: Error while executing "+self.path)
pulp.solvers.PulpSolverError: Pulp: Error while executing C:\Users\dimri\Desktop
\Filesystem\Projects\deliverable_B4\lib\site-packages\pulp\solverdir\cbc\win\64\
cbc.exe
and CBC:
Welcome to the CBC MILP Solver
Version: 2.9.0
Build Date: Feb 12 2015
command line - C:\Users\dimri\Desktop\Filesystem\Projects\deliverable_B4\lib\sit
e-packages\pulp\solverdir\cbc\win\64\cbc.exe 5284-pulp.mps branch printingOption
s all solution 5284-pulp.sol (default strategy 1)
At line 2 NAME MODEL
At line 3 ROWS
At line 2055 COLUMNS
Duplicate row C0000019 at line 10707 < X0001454 C0000019 -1.000000000000e+
00 >
Duplicate row C0002049 at line 10708 < X0001454 C0002049 -1.000000000000e+
00 >
Duplicate row C0000009 at line 10709 < X0001454 C0000009 1.000000000000e+
00 >
Duplicate row C0001005 at line 10710 < X0001454 C0001005 1.000000000000e+
00 >
At line 14153 RHS
At line 16204 BOUNDS
Bad image at line 17659 < UP BND X0001454 1.440000000000e+03 >
At line 18231 ENDATA
Problem MODEL has 2050 rows, 2025 columns and 5968 elements
Coin0008I MODEL read with 5 errors
There were 5 errors on input
** Current model not valid
Option for printingOptions changed from normal to all
** Current model not valid
No match for 5284-pulp.sol - ? for list of commands
Total time (CPU seconds): 0.02 (Wallclock seconds): 0.02
I don't know what the problem is; any help? I am new to this, so if the information is not enough, let me know what I should add.
Alright, I had searched for hours, but right after I posted this question I found the answer. These kinds of problems are mainly caused by the names of the variables or the constraints: that is what produced the duplicates. I am really not used to this kind of software, which is why it took me so long to find the answer. Anyway, the problem was in how I defined the variables:
# define X[i,j,k,n]
lower_bound_X = 0  # lower bound for variable X
upper_bound_X = 1  # upper bound for variable X
X = LpVariable.dicts(name="X",
                     indexs=(range(N), range(N), range(M), range(L)),
                     lowBound=lower_bound_X,
                     upBound=upper_bound_X,
                     cat=LpInteger)
and
# define S[i,j,k,n]
lower_bound_S = 0  # lower bound for variable S
upper_bound_S = 1440  # upper bound for variable S
S = LpVariable.dicts(name="X",
                     indexs=(range(N), range(N), range(M), range(L)),
                     lowBound=lower_bound_S,
                     upBound=upper_bound_S,
                     cat=LpInteger)
As you can see, in the definition of S I forgot to change the variable name to "S" because I had copy-pasted the block, so both families of variables were named "X". The right way to define S is:
# define S[i,j,k,n]
lower_bound_S = 0  # lower bound for variable S
upper_bound_S = 1440  # upper bound for variable S
S = LpVariable.dicts(name="S",
                     indexs=(range(N), range(N), range(M), range(L)),
                     lowBound=lower_bound_S,
                     upBound=upper_bound_S,
                     cat=LpInteger)
This is how I got my code running.
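A quick sanity check of this kind (a hypothetical snippet, not part of the original code) can catch such name collisions before the model is handed to CBC, since duplicate variable names are what corrupt the MPS file:

# collect the names PuLP will write to the MPS file and flag duplicates
names = [v.name for v in prob.variables()]
dupes = {n for n in names if names.count(n) > 1}
if dupes:
    raise ValueError("duplicate variable names: {}".format(sorted(dupes)))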

IPython doesn't respond when calling a self-defined function

def nast(L):
    i=len(L)-1
    while L != [1 for i in range(len(L))]:
        if L[i]==0:
            L[i]=1
            break
        i=i-1
    for j in range(i+1,len(L)):
        L[j]=0
    return L

L = [0,0,1,0,1]
I would like to pass the list L to this function, but when I do, I get nothing; the IPython kernel seems to be frozen. When I use the "Interrupt current kernel" option, I get:
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-3-000635d72af9> in <module>()
----> 1 nast(L)

<ipython-input-1-7918814a171f> in nast(L)
      1 def nast(L):
      2     i=len(L)-1
----> 3     while L != [1 for i in range(len(L))]:
      4         if L[i]==0:
      5             L[i]=1

KeyboardInterrupt:
I wonder what is wrong. Thank you in advance for your help.
When you do this:
while L != [1 for i in range(len(L))]:
The i variable leaks out of the list comprehension, so after that line, i is always len(L)-1, and your while loop is always checking the last item in L.
This was fixed in Python 3, so your code works there (at least, it finishes - I don't know if it's doing what you expect). To do it in Python 2, you'll need to call one of your i variables something else.
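For example, renaming the throwaway variable in the comprehension (or comparing against [1]*len(L), which avoids the comprehension entirely) unfreezes it under Python 2. A sketch, assuming the indentation shown in the question is what was intended:

def nast(L):
    i = len(L) - 1
    # '_' cannot clobber i, even with Python 2's leaky comprehension scoping
    while L != [1 for _ in range(len(L))]:
        if L[i] == 0:
            L[i] = 1
            break
        i = i - 1
    for j in range(i + 1, len(L)):
        L[j] = 0
    return L

print(nast([0, 0, 1, 0, 1]))  # -> [0, 0, 1, 1, 0]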

"TypeError: bad operand type for unary ~: 'float'" not down to NA (not available)?

I'm trying to filter a pandas data frame. Following #jezrael's answer here I can use the following to count up the rows I will be removing:
mask = ((analytic_events['section']==2) &
        ~(analytic_events['identifier'].str[0].str.isdigit()))

print (mask.sum())
However when I run this on my data I get the following error:
TypeError                                 Traceback (most recent call last)
in
      1 mask = ((analytic_events['section']==2) &
----> 2         ~(analytic_events['identifier'].str[0].str.isdigit()))
      3
      4 print (mask.sum())

c:\program files\python37\lib\site-packages\pandas\core\generic.py in __invert__(self)
   1454     def __invert__(self):
   1455         try:
-> 1456             arr = operator.inv(com.values_from_object(self))
   1457             return self.__array_wrap__(arr)
   1458         except Exception:

TypeError: bad operand type for unary ~: 'float'
The accepted wisdom for that error, bad operand type for unary ~: 'float', is that the unary operator encountered an NA value (for example, see this answer).
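(For reference, the message itself is easy to reproduce outside pandas, since unary ~ is only defined for integers and booleans; a standalone illustration:)

~float('nan')   # TypeError: bad operand type for unary ~: 'float'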
The problem is that I do not have any such missing data. Here's my analysis. Running
analytic_events[analytic_events['section']==2]['identifier'].str[0].value_counts(dropna=False)
gives the results:
2    1207791
3      39289
1        533
.         56
Or running
analytic_events[analytic_events['section']==2]['identifier'].str[0].str.isdigit().value_counts(dropna=False)
gives the results
True     1247613
False         56
(Note that the amounts above sum to the total number of rows, i.e. there are none missing.)
Using the more direct method suggested in #jezrael's answer below
analytic_events[analytic_events['section']==2]['identifier'].isnull().sum()
analytic_events[analytic_events['section']==2]['identifier'].str[0].isnull().sum()
both produce the output zero. So there are no NA (not available) values.
Why am I getting the error
TypeError: bad operand type for unary ~: 'float'
from the code at the start of this post?
I believe you need to filter by the first condition first, and only then negate isdigit() on the filtered values. The ~ most likely fails because, outside of section == 2, the identifier column contains missing or non-string entries for which .str.isdigit() returns NaN (a float):
m1 = analytic_events['section']==2
mask = ~analytic_events.loc[m1, 'identifier'].str[0].str.isdigit()
print (mask.sum())
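A small self-contained sketch of the same situation (synthetic data, not the real analytic_events), showing why filtering first avoids the error:

import numpy as np
import pandas as pd

# Synthetic stand-in: rows outside section 2 have a missing identifier,
# so .str.isdigit() evaluated on the full column yields NaN (a float) there.
analytic_events = pd.DataFrame({
    'section':    [2, 2, 1, 3],
    'identifier': ['2abc', '.xyz', np.nan, '3def'],
})

# Original approach: isdigit() runs on every row, the result contains NaN,
# and ~ raises "bad operand type for unary ~: 'float'".
# mask = ((analytic_events['section'] == 2) &
#         ~(analytic_events['identifier'].str[0].str.isdigit()))

# Filter first, then negate: the filtered result is purely boolean.
m1 = analytic_events['section'] == 2
mask = ~analytic_events.loc[m1, 'identifier'].str[0].str.isdigit()
print(mask.sum())  # 1 (the '.xyz' row)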