Gurobi Warning and Inconsistency in Optimal Value [some integer variables take values larger than the maximum supported value (2000000000)] - gurobi

I am using Gurobi 8.1.0 through its Python API under Python 3.6 to solve MIP problems. I have two models whose global optima I believe should be equal. However, in one of my simulations they are not. I then tried to warm-start the model whose solution I believe is incorrect (model-1) with the solution from the other model (model-2). In other words, the problem is a maximization; the objective value of model-1 is 42.3333, but I believe it should be 42.8333, so I used the solution from model-2, which has objective value 42.8333, to warm-start model-1.
What is weird is that the solution from model-2 should not be feasible for model-1, since its objective value is greater than 42.3333 and the problem is a maximization. However, it turns out to be a feasible warm start, and now the optimal value of model-1 is 42.8333. How can the same model have multiple optima?
Changed value of parameter timeLimit to 10800.0
Prev: 1e+100 Min: 0.0 Max: 1e+100 Default: 1e+100
Changed value of parameter LogFile to output/inconsistent_Model-1.log
Prev: gurobi.log Default:
Optimize a model with 11277 rows, 15150 columns and 165637 nonzeros
Model has 5050 general constraints
Variable types: 0 continuous, 15150 integer (5050 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e-02, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 5e+01]
Presolve removed 6167 rows and 7008 columns
Presolve time: 0.95s
Presolved: 5110 rows, 8142 columns, 37608 nonzeros
Presolved model has 3058 SOS constraint(s)
Variable types: 0 continuous, 8142 integer (4403 binary)
Warning: Markowitz tolerance tightened to 0.0625
Warning: Markowitz tolerance tightened to 0.125
Warning: Markowitz tolerance tightened to 0.25
Warning: Markowitz tolerance tightened to 0.5
Root relaxation: objective 4.333333e+01, 4856 iterations, 2.15 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 43.33333 0 587 - 43.33333 - - 3s
0 0 43.26667 0 243 - 43.26667 - - 4s
0 0 43.20000 0 1282 - 43.20000 - - 4s
0 0 43.20000 0 567 - 43.20000 - - 4s
0 0 43.18333 0 1114 - 43.18333 - - 5s
0 0 43.16543 0 2419 - 43.16543 - - 5s
0 0 43.15556 0 1575 - 43.15556 - - 5s
0 0 43.15333 0 2271 - 43.15333 - - 5s
0 0 43.13333 0 727 - 43.13333 - - 5s
0 0 43.12778 0 1698 - 43.12778 - - 5s
0 0 43.12500 0 1146 - 43.12500 - - 5s
0 0 43.12500 0 1911 - 43.12500 - - 6s
0 0 43.11927 0 1859 - 43.11927 - - 6s
0 0 43.11845 0 2609 - 43.11845 - - 7s
0 0 43.11845 0 2631 - 43.11845 - - 7s
0 0 43.11845 0 2642 - 43.11845 - - 7s
0 0 43.11845 0 2462 - 43.11845 - - 8s
0 0 43.11845 0 2529 - 43.11845 - - 8s
0 0 43.11845 0 2529 - 43.11845 - - 9s
0 2 43.11845 0 2531 - 43.11845 - - 14s
41 35 43.09874 17 957 - 43.09874 - 29.4 15s
94 84 42.93207 33 716 - 43.09874 - 22.1 31s
117 101 42.91940 40 2568 - 43.09874 - 213 37s
264 175 infeasible 92 - 43.09874 - 133 73s
273 181 infeasible 97 - 43.09874 - 277 77s
293 191 42.42424 17 1828 - 43.09874 - 280 90s
369 249 42.40111 52 2633 - 43.09874 - 311 105s
383 257 42.39608 59 3062 - 43.09874 - 329 152s
408 265 42.39259 65 2819 - 43.09874 - 386 162s
419 274 41.51399 66 2989 - 43.09874 - 401 170s
454 282 41.29938 71 3000 - 43.09874 - 390 182s
462 280 infeasible 74 - 43.09874 - 423 192s
479 287 infeasible 78 - 43.09874 - 419 204s
498 293 40.51287 81 2564 - 43.09874 - 435 207s
526 307 40.16638 86 2619 - 43.09874 - 419 227s
584 330 42.63100 33 621 - 43.09874 - 404 236s
628 333 infeasible 37 - 43.09874 - 394 252s
661 345 42.37500 26 25 - 43.09874 - 396 288s
684 353 infeasible 30 - 43.09874 - 426 290s
842 370 infeasible 69 - 43.09874 - 348 306s
944 379 infeasible 86 - 43.09874 - 321 370s
1009 395 42.36667 22 25 - 43.09874 - 350 409s
* 1031 243 3 42.3333333 43.09874 1.81% 343 409s
1056 203 43.00000 19 141 42.33333 43.09874 1.81% 362 411s
1194 222 cutoff 23 42.33333 43.00000 1.57% 325 430s
1199 219 cutoff 25 42.33333 43.00000 1.57% 349 450s
1202 212 cutoff 29 42.33333 43.00000 1.57% 361 472s
1211 200 infeasible 47 42.33333 42.91851 1.38% 380 498s
1226 169 infeasible 43 42.33333 42.91471 1.37% 395 511s
Cutting planes:
Gomory: 2
Cover: 15
Implied bound: 1
Clique: 26
MIR: 17
Inf proof: 1
Zero half: 8
Explored 1426 nodes (502432 simplex iterations) in 512.68 seconds
Thread count was 4 (of 4 available processors)
Solution count 1: 42.3333
Optimal solution found (tolerance 1.00e-04)
Warning: some integer variables take values larger than the maximum
supported value (2000000000)
Best objective 4.233333333333e+01, best bound 4.233333333333e+01, gap 0.0000%
In addition to the above, I also received this warning:
"Optimal solution found (tolerance 1.00e-04)
Warning: some integer variables take values larger than the maximum
supported value (2000000000)". What does it mean? Thank you so much!

It looks like you are encountering numerical trouble. The root relaxation required an increased Markowitz tolerance, which indicates an ill-conditioned matrix. This can lead to inconsistencies such as the two different "optimal" solutions you observed.
The warning about too-large values means that some integer variables take solution values so large that the integer feasibility tolerance can no longer be checked reliably. If a variable takes a solution value on the order of 1e+9, it probably doesn't matter anymore whether it is integer or not, so you could likely simplify your model by making such variables continuous.
You should check the constraint violations of both solutions in both models (for example, via Gurobi's solution quality statistics) to see how feasible the solutions actually are.
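That feasibility check can be sketched independently of any solver: given a constraint matrix, right-hand side, row senses, and a candidate solution, compute the worst violation. All names below are illustrative, not Gurobi API, and the tiny model is invented for the example.

```python
import numpy as np

def max_violation(A, b, x, senses):
    """Largest violation of A x (sense) b at candidate point x.

    senses holds one character per row: '<' (<=), '>' (>=), or '=' (==).
    """
    r = A @ x                          # left-hand side of every row
    viol = np.zeros(len(b))
    for i, s in enumerate(senses):
        if s == '<':
            viol[i] = max(0.0, r[i] - b[i])
        elif s == '>':
            viol[i] = max(0.0, b[i] - r[i])
        else:
            viol[i] = abs(r[i] - b[i])
    return viol.max()

# tiny maximization example with one constraint: x + y <= 3
A = np.array([[1.0, 1.0]])
b = np.array([3.0])
x_ok = np.array([1.0, 2.0])    # satisfies the constraint exactly
x_bad = np.array([1.5, 1.6])   # overshoots the right-hand side by 0.1
print(max_violation(A, b, x_ok, '<'))   # 0.0
print(max_violation(A, b, x_bad, '<'))  # ~0.1
```

If the "better" solution shows violations well above the solver's feasibility tolerance (default 1e-6 in Gurobi), it is not really feasible and the discrepancy between the two objective values is explained.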

Related

Create groups from data in a SQL Server table based on a specific condition?

There is a table which contains SQL server blocking chain data, like below.
I am trying to pull only those blocking-chain groups whose average wait time is greater than 20 seconds. A group can be identified like this: it starts at a row where the blocked value is 0 and ends at the next row where the blocked value is 0 again, and that last 0-valued row should not be considered part of the group.
Blocking_time SPID blocked WAIT_MS Blocking_Chain_tree_details_by_Session_id_and_header Wait_type
7/28/19 5:14 AM 130 0   HEAD -  SPID (130) - EL.dbo.test;1
7/28/19 5:14 AM 292 130 1   |      |-----  SPID (292) - EL.dbo.test123;1 PAGELATCH_EX
7/28/19 5:14 AM 949 130 1   |      |-----  SPID (949) - EL.dbo.sstest123;1 PAGELATCH_EX
7/28/19 5:32 AM 106 130 1   |      |-----  SPID (106) - EL.dbo.checjmark;1 PAGELATCH_EX
7/28/19 5:32 AM 130 0   HEAD -  SPID (130) - Eli.dbo.sss;1
7/28/19 5:32 AM 292 130 1   |      |-----  SPID (292) - EL.dbo.variable;1 PAGELATCH_EX
7/28/19 5:32 AM 949 130 1   |      |-----  SPID (949) - Eldbo.anything;1 PAGELATCH_EX
7/28/19 5:32 AM 1578 130 12000   |      |-----  SPID (1578) - EL.dbo.something;1 PAGELATCH_EX
7/28/19 9:20 AM 196 513 21700   |      |-----  SPID (196) - (#P1 uniqueidentifier,#P2 int,#P3 int,#P4 int,#P5 int,#P6 int,#P7 int,#P8 int,#P ... LCK_M_IX
NA
Actual result should be like as below-
Blocking_time SPID blocked WAIT_MS Blocking_Chain_tree_details_by_Session_id_and_header Wait_type
7/28/19 5:32 AM 130 0   HEAD -  SPID (130) - Eli.dbo.sss;1
7/28/19 5:32 AM 292 130 1   |      |-----  SPID (292) - EL.dbo.variable;1 PAGELATCH_EX
7/28/19 5:32 AM 949 130 1   |      |-----  SPID (949) - Eldbo.anything;1 PAGELATCH_EX
7/28/19 5:32 AM 1578 130 12000   |      |-----  SPID (1578) - EL.dbo.something;1 PAGELATCH_EX
7/28/19 9:20 AM 196 513 21700   |      |-----  SPID (196) - (#P1 uniqueidentifier,#P2 int,#P3 int,#P4 int,#P5 int,#P6 int,#P7 int,#P8 int,#P ... LCK_M_IX
You can use a window function for this. As long as you put your grouping columns in the PARTITION BY, you'll get the MAX value for each group. Then you can filter to just the groups where the max wait time is over 20 seconds.
SELECT *
FROM
(
    SELECT Blocking_time,
           SPID,
           blocked,
           WAIT_MS,
           Blocking_Chain_tree_details_by_Session_id_and_header,
           Wait_type,
           MAX(WAIT_MS) OVER (PARTITION BY Blocking_time) AS [Max_WAIT_MS]
    FROM <YourTable>
) rawData
WHERE [Max_WAIT_MS] > 20000
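For readers more comfortable in pandas, the same partition-max filter can be sketched as follows; the toy frame is invented for illustration, and `transform('max')` plays the role of `MAX(...) OVER (PARTITION BY ...)`:

```python
import pandas as pd

df = pd.DataFrame({
    'Blocking_time': ['5:14', '5:14', '5:32', '5:32', '5:32'],
    'SPID':          [130, 292, 130, 292, 1578],
    'WAIT_MS':       [0, 1, 0, 1, 21700],
})

# group-wide max, broadcast back to every row of the group
df['Max_WAIT_MS'] = df.groupby('Blocking_time')['WAIT_MS'].transform('max')

# keep only the groups whose max wait exceeds 20 seconds
result = df[df['Max_WAIT_MS'] > 20000]
print(result)
```

Only the 5:32 group survives, because one of its rows waited 21,700 ms.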

Identify rows containing values greater than corresponding values based on a criterion from another row

I have a grouped dataframe. I have created a flag that identifies whether values in a row are less than the group maximum. This works fine. However, I want to unflag rows where the value in a third column is greater than the value in that same (third) column within each group. I have a feeling there should be an elegant and Pythonic way to do this, but I can't figure it out.
The flag shown in the code compares the maximum value of tour_duration within each hh_id to the corresponding value of comp_expr, assigning 0 to the flag column where the group maximum is less than comp_expr, and 1 otherwise. However, I also want flag to be 0 when min(arrivaltime) for a subgroup tour_id is greater than max(arrivaltime) of the tour_id whose tour_duration is maximal within the hh_id. For example, in the given data, tour_id 16300 has the highest value of tour_duration. But tour_id 16200 has min arrivaltime 1080, which is greater than the max arrivaltime of tour_id 16300 (960). So the flag for all tour_id 16200 rows should be 0.
Kindly assist.
import pandas as pd
import numpy as np

stops_data = pd.DataFrame({
    'hh_id': [20044, 20044, 20044, 20044, 20044, 20044, 20044, 20044, 20044,
              20044, 20044, 20122, 20122, 20122, 20122, 20122, 20122, 20122,
              20122, 20122, 20122, 20122, 20122, 20122],
    'tour_id': [16300, 16300, 16100, 16100, 16100, 16100, 16200, 16200, 16200,
                16000, 16000, 38100, 38100, 37900, 37900, 37900, 38000, 38000,
                38000, 38000, 38000, 38000, 37800, 37800],
    'arrivaltime': [360, 960, 900, 900, 900, 960, 1080, 1140, 1140, 420, 840,
                    300, 960, 780, 720, 960, 1080, 1080, 1080, 1080, 1140,
                    1140, 480, 900],
    'tour_duration': [600, 600, 60, 60, 60, 60, 60, 60, 60, 420, 420, 660,
                      660, 240, 240, 240, 60, 60, 60, 60, 60, 60, 420, 420],
    'comp_expr': [1350, 1350, 268, 268, 268, 268, 406, 406, 406, 974, 974,
                  1568, 1568, 606, 606, 606, 298, 298, 298, 298, 298, 298,
                  840, 840]})

stops_data['flag'] = np.where(
    stops_data.groupby('hh_id')['tour_duration'].transform('max')
    < stops_data['comp_expr'], 0, 1)
This is my current output: (screenshot: current dataset and output)
This is my desired output; please see the flag column: (screenshot: desired output, changed flag values in bold)
>>> stops_data.loc[stops_data.tour_id
.isin(stops_data.loc[stops_data.loc[stops_data
.groupby(['hh_id','tour_id'])['arrivaltime'].idxmin()]
.groupby('hh_id')['arrivaltime'].idxmax()]['tour_id']), 'flag'] = 0
>>> stops_data
hh_id tour_id arrivaltime tour_duration comp_expr flag
0 20044 16300 360 600 1350 0
1 20044 16300 960 600 1350 0
2 20044 16100 900 60 268 1
3 20044 16100 900 60 268 1
4 20044 16100 900 60 268 1
5 20044 16100 960 60 268 1
6 20044 16200 1080 60 406 0
7 20044 16200 1140 60 406 0
8 20044 16200 1140 60 406 0
9 20044 16000 420 420 974 0
10 20044 16000 840 420 974 0
11 20122 38100 300 660 1568 0
12 20122 38100 960 660 1568 0
13 20122 37900 780 240 606 1
14 20122 37900 720 240 606 1
15 20122 37900 960 240 606 1
16 20122 38000 1080 60 298 0
17 20122 38000 1080 60 298 0
18 20122 38000 1080 60 298 0
19 20122 38000 1080 60 298 0
20 20122 38000 1140 60 298 0
21 20122 38000 1140 60 298 0
22 20122 37800 480 420 840 0
23 20122 37800 900 420 840 0
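Unrolling the chained answer above into named steps may make the logic easier to follow; this is the same computation on the same data (the frame is re-created so the snippet is self-contained):

```python
import pandas as pd
import numpy as np

stops_data = pd.DataFrame({
    'hh_id': [20044]*11 + [20122]*13,
    'tour_id': [16300, 16300, 16100, 16100, 16100, 16100, 16200, 16200, 16200,
                16000, 16000, 38100, 38100, 37900, 37900, 37900, 38000, 38000,
                38000, 38000, 38000, 38000, 37800, 37800],
    'arrivaltime': [360, 960, 900, 900, 900, 960, 1080, 1140, 1140, 420, 840,
                    300, 960, 780, 720, 960, 1080, 1080, 1080, 1080, 1140,
                    1140, 480, 900],
    'tour_duration': [600, 600, 60, 60, 60, 60, 60, 60, 60, 420, 420, 660,
                      660, 240, 240, 240, 60, 60, 60, 60, 60, 60, 420, 420],
    'comp_expr': [1350, 1350, 268, 268, 268, 268, 406, 406, 406, 974, 974,
                  1568, 1568, 606, 606, 606, 298, 298, 298, 298, 298, 298,
                  840, 840]})

# original flag: 0 where the household's max tour_duration < comp_expr
stops_data['flag'] = np.where(
    stops_data.groupby('hh_id')['tour_duration'].transform('max')
    < stops_data['comp_expr'], 0, 1)

# step 1: one row per (hh_id, tour_id) -- the row with the minimum arrivaltime
min_rows = stops_data.loc[
    stops_data.groupby(['hh_id', 'tour_id'])['arrivaltime'].idxmin()]

# step 2: within each household, the tour whose minimum arrival is latest
late_tours = min_rows.loc[
    min_rows.groupby('hh_id')['arrivaltime'].idxmax(), 'tour_id']

# step 3: unflag every row belonging to those tours
stops_data.loc[stops_data['tour_id'].isin(late_tours), 'flag'] = 0
print(stops_data)
```

For this data the selected tours are 16200 and 38000, which reproduces the desired flag column.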

Parsing Json Data from select query in SQL Server

I have a situation where I have a table, dbo.JsonData, with a single varchar(max) column. It has some number of rows, each holding a JSON string with some number of properties.
How can I create a query that will allow me to turn the result set from a select * query into a row/column result set?
Here is what I have tried:
SELECT *
FROM JSONDATA
FOR JSON Path
But it returns a single row of the json data all in a single column:
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
[{"Json_Data":"{\"Serial_Number\":\"12345\",\"Gateway_Uptime\":17,\"Defrost_Cycles\":0,\"Freeze_Cycles\":2304,\"Float_Switch_Raw_ADC\":1328,\"Bin_status\":2304,\"Line_Voltage\":0,\"ADC_Evaporator_Temperature\":0,\"Mem_Sw\":1280,\"Freeze_Timer\":2560,\"Defrost_Timer\":593,\"Water_Flow_Switch\":3328,\"ADC_Mid_Temperature\":2560,\"ADC_Water_Temperature\":0,\"Ambient_Temperature\":71,\"Mid_Temperature\":1259,\"Water_Temperature\":1259,\"Evaporator_Temperature\":1259,\"Ambient_Temperature_Off_Board\":0,\"Ambient_Temperature_On_Board\":0,\"Gateway_Info\":\"{\\\"temp_sensor\\\":0.00,\\\"temp_pcb\\\":82.00,\\\"gw_uptime\\\":17.00,\\\"winc_fw\\\":\\\"19.5.4\\\",\\\"gw_fw_version\\\":\\\"0.0.0\\\",\\\"gw_fw_version_git\\\":\\\"2a75f20-dirty\\\",\\\"gw_sn\\\":\\\"328\\\",\\\"heap_free\\\":11264.00,\\\"gw_sig_csq\\\":0.00,\\\"gw_sig_quality\\\":0.00,\\\"wifi_sig_strength\\\":-63.00,\\\"wifi_resets\\\":0.00}\",\"ADC_Ambient_Temperature\":1120,\"Control_State\":\"Bin Full\",\"Compressor_Runtime\":134215680}"},{"Json_Data":"{\"Serial_Number\":\"12345\",\"Gateway_Uptime\":200,\"Defrost_Cycles\":559,\"Freeze_Cycles\":510,\"Float_Switch_Raw_ADC\":106,\"Bin_status\":0,\"Line_Voltage\":119,\"ADC_Evaporator_Temperature\":123,\"Mem_Sw\":113,\"Freeze_Timer\":0,\"Defrost_Timer\":66,\"Water_Flow_Switch\":3328,\"ADC_Mid_Temperature\":2560,\"ADC_Water_Temperature\":0,\"Ambient_Temperature\":71,\"Mid_Temperature\":1259,\"Water_Temperature\":1259,\"Evaporator_Temperature\":54,\"Ambient_Temperature_Off_Board\":0,\"Ambient_Temperature_On_Board\":0,\"Gateway_Info\":\"{\\\"temp_sensor\\\":0.00,\\\"temp_pcb\\\":82.00,\\\"gw_uptime\\\":199.00,\\\"winc_fw\\\":\\\"19.5.4\\\",\\\"gw_fw_version\\\":\\\"0.0.0\\\",\\\"gw_fw_version_git\\\":\\\"2a75f20-dirty\\\",\\\"gw_sn\\\":\\\"328\\\",\\\"heap_free\\\":10984.00,\\\"gw_sig_csq\\\":0.00,\\\"gw_sig_quality\\\":0.00,\\\"wifi_sig_strength\\\":-60.00,\\\"wifi_resets\\\":0.00}\",\"ADC_Ambient_Temperature\":1120,\"Control_State\":\"Defrost\",\"Compressor_Runtime
\":11304}"},{"Json_Data":"{\"Seri...
What am I missing?
I can't specify the columns explicitly because the json strings aren't always the same.
This what I expect:
Serial_Number Gateway_Uptime Defrost_Cycles Freeze_Cycles Float_Switch_Raw_ADC Bin_status Line_Voltage ADC_Evaporator_Temperature Mem_Sw Freeze_Timer Defrost_Timer Water_Flow_Switch ADC_Mid_Temperature ADC_Water_Temperature Ambient_Temperature Mid_Temperature Water_Temperature Evaporator_Temperature Ambient_Temperature_Off_Board Ambient_Temperature_On_Board ADC_Ambient_Temperature Control_State Compressor_Runtime temp_sensor temp_pcb gw_uptime winc_fw gw_fw_version gw_fw_version_git gw_sn heap_free gw_sig_csq gw_sig_quality wifi_sig_strength wifi_resets LastModifiedDateUTC Defrost_Cycle_time Freeze_Cycle_time
12345 251402 540 494 106 0 98 158 113 221 184 0 0 0 1259 1259 1259 33 0 0 0 Freeze 10833 0 78 251402 19.5.4 0.0.0 2a75f20-dirty 328.00000000 10976 0 0 -61 0 2018-03-20 11:15:28.000 0 0
12345 251702 540 494 106 0 98 178 113 517 184 0 0 0 1259 1259 1259 22 0 0 0 Freeze 10838 0 78 251702 19.5.4 0.0.0 2a75f20-dirty 328.00000000 10976 0 0 -62 0 2018-03-20 11:15:42.000 0 0
...
Thank you,
Ron
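For orientation: in T-SQL, FOR JSON composes JSON from a result set, while OPENJSON (SQL Server 2016+) parses it. Outside SQL, the flattening described above (rows of JSON with varying keys, plus a nested Gateway_Info string) can be sketched in Python with pandas; the sample strings below are invented stand-ins, not the real data:

```python
import json
import pandas as pd

# toy stand-ins for the varying JSON strings stored in the varchar(max) column
rows = [
    '{"Serial_Number": "12345", "Gateway_Uptime": 17, "Gateway_Info": "{\\"temp_pcb\\": 82.0}"}',
    '{"Serial_Number": "12345", "Line_Voltage": 119, "Gateway_Info": "{\\"temp_pcb\\": 78.0}"}',
]

records = []
for raw in rows:
    rec = json.loads(raw)
    # Gateway_Info is itself a JSON string: parse it and merge its keys in
    inner = json.loads(rec.pop('Gateway_Info', '{}'))
    rec.update(inner)
    records.append(rec)

# pandas takes the union of all keys, filling missing ones with NaN,
# so the columns need not be known in advance
df = pd.DataFrame(records)
print(df)
```

The same "union of keys" behavior is what makes the varying-schema requirement workable without listing columns explicitly.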

Get Value Difference and Time Stamp Difference from SQL Table that is not Ideal

This problem is way over my head. Can a report be created from the table below that searches for common date stamps and returns the Tank1Level difference? A few issues can occur: the day on the time stamp can change, and there can be 3 to 5 entries in the database per tank-filling process.
The report would show how much the tank was filled with the last t_stamp and last T1_Lot.
Here is the Data Tree
index T1_Lot Tank1Level Tank1Temp t_stamp quality_code
30 70517 - 1 43781.1875 120 7/10/2017 6:43 192
29 70517 - 1 242.6184692 119 7/10/2017 0:54 192
26 70617 - 2 242.6184692 119 7/10/2017 0:51 192
23 70617 - 2 44921.03516 134 7/8/2017 14:22 192
22 70617 - 2 892.652771 107 7/8/2017 8:29 192
21 62917 - 3 892.652771 107 7/8/2017 8:28 192
20 62917 - 3 42352.94141 124 7/6/2017 13:15 192
19 62917 - 3 5291.829102 121 7/6/2017 8:06 192
18 62917 - 2 5273.518066 121 7/6/2017 8:05 192
17 60817 - 2 444.0375366 97 7/6/2017 7:23 192
16 60817 - 2 476.0814819 97 7/5/2017 18:09 192
11 62817 - 3 45374.23047 113 6/30/2017 11:38 192
Here is what the report should look like.
At 7/10/2017 6:43 T1_Lot = 70517 - 1, Tank1Level difference = 43,629, and took 5:52.
At 7/8/2017 14:22 T1_Lot = 70517 - 1, Tank1Level difference = 44,028, and took 5:54.
At 7/6/2017 13:15 T1_Lot = 62917 - 3, Tank1Level difference = 41877, and took 5:10.
Here is how that was calculated.
Find the top time stamp with a value > 40,000 in Tank1Level,
Then Find the Next > 40000 in Tank Level.
Go one index up..
or it could be done with less than 8 hours accumulated
as you can see from the second report line there is data that should be ignored.
Report that last t_stamp of the series with the T1_Lot.
Calculate the difference in Tank1Level and report
Then Calculate the t_stamp difference in hh:mm and report.
Based on the data you provided, a self join might work, pairing each reading with the next index in the same lot, for example:
SELECT afterFill.t_stamp, afterFill.T1_Lot,
       afterFill.Tank1Level - beforeFill.Tank1Level AS Tank1Level_diff
FROM yourTable beforeFill
JOIN yourTable afterFill
  ON beforeFill.T1_Lot = afterFill.T1_Lot
 AND beforeFill.[index] = afterFill.[index] - 1
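Before committing to SQL, the pairing logic can be prototyped in pandas. The toy frame below uses a few of the rows above, and the join follows the index-offset idea rather than the full >40,000 heuristic:

```python
import pandas as pd

df = pd.DataFrame({
    'index':      [19, 20, 22, 23],
    'T1_Lot':     ['62917 - 3', '62917 - 3', '70617 - 2', '70617 - 2'],
    'Tank1Level': [5291.8, 42352.9, 892.7, 44921.0],
    't_stamp':    pd.to_datetime(['2017-07-06 08:06', '2017-07-06 13:15',
                                  '2017-07-08 08:29', '2017-07-08 14:22']),
})

# "before" copy shifted by one index so each row lines up
# with the next reading in the same lot
before = df.copy()
before['index'] += 1

merged = before.merge(df, on=['T1_Lot', 'index'], suffixes=('_before', '_after'))
merged['level_diff'] = merged['Tank1Level_after'] - merged['Tank1Level_before']
merged['fill_time'] = merged['t_stamp_after'] - merged['t_stamp_before']
print(merged[['T1_Lot', 'level_diff', 'fill_time']])
```

For lot 70617 - 2 this yields a level difference of about 44,028 over roughly 5:53, matching the hand-calculated report line for that pair of rows.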

VB: Import text file into Excel/VB

I have the following text file, which I'm trying to turn into a line graph in Excel. It logs every 5 minutes from 08:00 till 18:00, so there are quite a few rows.
TIME Rec-Created Rec-Deleted Rec-Updated Rec-read Rec-wait Committed bi- writes Bi-reads DB-Writes DB-READ db-access Checkpoints Flushed
08:09:00 37 0 5 21276 0 1894 33 3 109 43 47691 1 0
08:14:00 49 0 144 20378 0 1225 143 0 88 192 53377 0 0
08:19:00 44 0 237 19902 0 1545 283 6 317 120 49668 2 0
08:24:00 51 0 129 12570 0 626 191 3 164 58 37811 1 0
08:29:00 61 0 49 14138 0 541 86 3 116 77 36836 1 0
08:34:00 59 0 144 58536 0 1438 209 3 143 3753 135427 1 0
08:39:00 85 0 178 28309 0 1822 209 6 209 80 70950 2 0
08:44:00 57 0 157 17940 0 554 132 3 170 92 47561 1 0
08:49:00 115 0 217 29961 0 1867 186 3 333 193 76057 1 0
08:54:00 111 0 225 23320 0 540 198 6 275 246 64138 2 0
08:59:00 41 0 152 15638 0 359 187 3 368 103 44558 1 0
I'm not too concerned about the line-graph part, more about getting the data into Excel in the correct format.
I'm assuming I would need to use an array, but that is currently a little advanced for me, as I'm still trying to get to grips with VB and this is really my first venture into this world (as you can see from my previous post).
Any help or guidance would be appreciated.
(I'm studying VB for Dummies and Visual Basic Fundamentals: Development for Absolute Beginners from the Channel 9 MSDN.)
Thanks in advance.
I would probably create a typed DataSet with all of your columns; let's call it YourDataset.
Then read the file in and add a row to your table for each line in the file. The outline below is not fully functional, but it sketches a solution.
' requires: Imports System.IO
Dim typedDataset = New YourDataset
Using reader As StreamReader = New StreamReader("file.txt")
    Dim line As String = reader.ReadLine() ' skip the header line
    While Not reader.EndOfStream
        line = reader.ReadLine()
        Dim rowData = line.Split(" "c)
        ' add a new row to the typed dataset based on rowData
    End While
End Using
That is how you would get your data into VB.NET; it would be sitting in a table just like the Excel table. At that point, if you didn't care about Excel, you could use a charting control and view the data with a DataGridView: https://msdn.microsoft.com/en-us/library/dd489237(v=vs.140).aspx
But to get it into Excel, you would need to follow a guide like the one below, using Microsoft.Office.Interop.Excel:
http://www.codeproject.com/Tips/669509/How-to-Export-Data-to-Excel-in-VB-NET
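Not VB, but if it helps to see the parsing step stripped to its essentials, the same split-and-convert pass can be sketched in Python (column names and sample lines are taken from the question; the header handling mirrors the StreamReader outline above):

```python
# parse the whitespace-delimited performance log into a header and data rows
def parse_log(lines):
    header = lines[0].split()           # column names from the first line
    rows = []
    for line in lines[1:]:
        parts = line.split()
        if not parts:
            continue                    # skip blank lines
        # the first field is the time stamp, the rest are integer counters
        rows.append([parts[0]] + [int(p) for p in parts[1:]])
    return header, rows

sample = [
    "TIME Rec-Created Rec-Deleted Rec-Updated",
    "08:09:00 37 0 5",
    "08:14:00 49 0 144",
]
header, rows = parse_log(sample)
print(header)   # ['TIME', 'Rec-Created', 'Rec-Deleted', 'Rec-Updated']
print(rows[0])  # ['08:09:00', 37, 0, 5]
```

The VB version does the same thing: split each line on whitespace, treat the first token as the time, and convert the remaining tokens to numbers before adding a row to the table.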