How to import cplex in Google Colab? - google-colaboratory

!apt install cplex-utils
!pip install cplex
solver = SolverFactory('cplex')
res_NLP = solver.solve(HN_model)
The error is:
WARNING: Could not locate the 'cplex' executable, which is required
for solver
cplex
---------------------------------------------------------------------------
ApplicationError                          Traceback (most recent call last)
<ipython-input> in <module>()
      1 solver = SolverFactory('cplex')
----> 2 res_NLP = solver.solve(HN_model)

2 frames
/usr/local/lib/python3.7/dist-packages/pyomo/opt/solver/shellcmd.py in available(self, exception_flag)
    123             if exception_flag:
    124                 msg = "No executable found for solver '%s'"
--> 125                 raise ApplicationError(msg % self.name)
    126             return False
    127         return True

ApplicationError: No executable found for solver 'cplex'
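A quick way to confirm the diagnosis in Colab (an added check, not part of the original post): the cplex package from pip installs the Python library only, without any cplex command-line executable on the PATH.

import shutil
print(shutil.which('cplex'))  # prints None: pip's cplex wheel ships no 'cplex' executable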

Within IBM Watson Studio, CPLEX comes pre-installed in the Notebooks. With other notebook cloud providers, you need to find a way to install it, or else call CPLEX as a service in the IBM Cloud.
You could try dowml: https://xavier-nodet.medium.com/submit-decision-optimization-jobs-to-wml-using-dowml-be26e0de6b7f
Or wml directly: https://pypi.org/project/ibm-watson-machine-learning/
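For the wml route, a hypothetical minimal sketch of connecting from a notebook (the credentials and region URL below are placeholders, not from the original answer):

from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "apikey": "YOUR_API_KEY",                    # placeholder credential
    "url": "https://us-south.ml.cloud.ibm.com",  # use the region your WML service lives in
}
client = APIClient(wml_credentials)  # entry point for submitting Decision Optimization jobs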
With Google Colab:
!pip install cplex
!pip install docplex

from docplex.mp.model import Model

mdl = Model(name='buses')
nbbus40 = mdl.integer_var(name='nbBus40')
nbbus30 = mdl.integer_var(name='nbBus30')
mdl.add_constraint(nbbus40*40 + nbbus30*30 >= 300, 'kids')  # seat at least 300 kids
mdl.minimize(nbbus40*500 + nbbus30*400)                     # minimize total bus cost
mdl.export("buses.lp")
!cat buses.lp
This works fine and gives:
Requirement already satisfied: cplex in /usr/local/lib/python3.7/dist-packages (20.1.0.1)
Requirement already satisfied: docplex in /usr/local/lib/python3.7/dist-packages (2.22.213)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from docplex) (1.15.0)
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: buses
Minimize
obj: 500 nbBus40 + 400 nbBus30
Subject To
kids: 40 nbBus40 + 30 nbBus30 >= 300
Bounds
Generals
nbBus40 nbBus30
End
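The snippet above only exports the model. To actually solve it in Colab, a minimal continuation (my addition, reusing the mdl object from above):

sol = mdl.solve()
if sol:
    print("objective:", mdl.objective_value)
    for v in mdl.iter_integer_vars():
        print(v.name, "=", v.solution_value)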

From the error message, SolverFactory seems to be a Pyomo class, and it requires the CPLEX interactive executable to be available locally on the machine where the Pyomo code is executed.
Unless you have a way to install arbitrary executable files on the platform you use, which I very much doubt if you're not using your own computer, you will have to find another way. Alex's answer proposes two...

When you're working in Colab you need to install cplex from pip. In that case you need to use the cplex_direct interface in Pyomo to avoid such errors, since the plain cplex interface uses the shell approach, i.e. calls the CPLEX executable, to solve the problem.
Using Google Colab, this should work:
!pip install pyomo -q
!pip install cplex -q

import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.s = pyo.Set(initialize=[1, 2, 3, 4, 5])
model.x = pyo.Var(model.s, domain=pyo.NonNegativeReals)
model.c = pyo.Constraint(expr=model.x[model.s.last()] >= 5)
model.obj = pyo.Objective(expr=sum(model.x[s] for s in model.s), sense=pyo.minimize)

solver = pyo.SolverFactory('cplex_direct')  # direct Python interface, no executable needed
solver.solve(model)
model.x.display()
x : Size=5, Index=s
    Key : Lower : Value : Upper : Fixed : Stale : Domain
      1 :     0 :   0.0 :  None : False : False : NonNegativeReals
      2 :     0 :   0.0 :  None : False : False : NonNegativeReals
      3 :     0 :   0.0 :  None : False : False : NonNegativeReals
      4 :     0 :   0.0 :  None : False : False : NonNegativeReals
      5 :     0 :   5.0 :  None : False : False : NonNegativeReals
I don't use CPLEX a lot, therefore I'm not fully sure, but this free approach should have a limit on the number of variables and constraints (the community edition installed from pip is capped at roughly 1000 variables and 1000 constraints).

Related

Python Panel dashboard causing BufferError and RuntimeErrors

I have struggled for some time to create a data streaming interface using Panel.
Essentially I have approximately 20 named Python objects that I monitor, reading their spectral output.
I want a dashboard displaying this in the form of 20 plots which must continuously overwrite themselves, as the spectral output must be displayed over the same x-range (channels).
The dashboard runs fine for some time, and then I get either:
a) RuntimeError: _pending_writes should be non-None when we have a document lock, and we should have the lock when the document changes
or
b) BufferError: Existing exports of data: object cannot be re-sized
{PYTHON_ENV_PATH}/lib/python3.6/site-packages/bokeh/document/document.py:500: RuntimeWarning: coroutine 'WSHandler.send_message' was never awaited gc.collect()
I've drafted an MRE as follows:
import numpy as np
import pandas as pd
import hvplot.streamz
import panel as pn
from streamz.dataframe import PeriodicDataFrame

pn.extension()

# object from which data is collected:
class data_gen:
    def __init__(self, name, size=1024, sets=4):
        self.name = name
        self.size = size
        self.sets = sets

    def get_data(self):
        return np.random.randn(self.sets, self.size)

# Have a dictionary of items with name:
data_dict = {
    "a": data_gen("a"),
    "b": data_gen("b"),
    "c": data_gen("c"),
    "d": data_gen("d"),
    "e": data_gen("e"),
    "f": data_gen("f"),
}

# Generate dataframe
def name_dataFrame(**kwargs):
    dct = {}
    for name, dg in data_dict.items():
        d = dg.get_data()
        sets, size = d.shape
        t_dict = {}
        for i in range(sets):
            t_dict[i] = {c: d[i, c] for c in range(size)}
        t_df = pd.DataFrame(t_dict).transpose()
        dct[name] = t_df
    df = pd.concat(dct).transpose()
    return df

# Have it be streamed
df = PeriodicDataFrame(name_dataFrame, interval='10s')

# Compose panel layout
pn_realtime = pn.Column("# Data Dashboard")
for name in data_dict:
    pn_realtime.append(pn.Row(f"""## Name: {name}"""))
    pn_realtime.append(pn.Row(
        df[name].hvplot.line(backlog=1024, width=600, height=500,
                             xlabel="n", ylabel="f(n)", grid=True)
    ))

pn_realtime.servable()
My setup is:
# Name                   Version         Build            Channel
panel                    0.12.1          pyhd3eb1b0_0
hvplot                   0.7.3           pyhd3eb1b0_1
pandas                   1.1.5           py36ha9443f7_0
streamz                  0.6.3           pyhd3eb1b0_0
Python 3.6.13 :: Anaconda, Inc.
Ubuntu 20.04.3 LTS (Focal Fossa)
I'm pretty new to dashboard design (and pandas for that matter) so I wouldn't be surprised if there were a vastly simpler way to do what I am attempting to do.
My suspicion is that the appending of Panel objects is causing memory buffers to overfill and garbage collection cannot handle it. If so, what can I do?
Running this MRE on my beefier Windows machine with python 3.9.7 did not seem to crash, but perhaps that is simply because I've not run it for long enough?
I've also set ylims on the hvplot and that seemed to stop crashes from occurring (again maybe I did not run it for long enough), but due to the nature of my application, I cannot have static ylims.
I appreciate your time and input.
Cheers.

GPU not working while using gluoncv and mxnet

Versions I used:
python 3.6.5
mxnet 1.5.0
cuda 9.2 (I also installed CUDA 11.4 and cuDNN 8.2.4 because I checked cmd and my NVIDIA driver used it)
cudnn 7.6.5
Windows 10 64-bit
Question:
I used mxnet and gluoncv for image segmentation, and a GPU problem occurred consistently.
I installed and uninstalled almost every CUDA version (and cuDNN), but it didn't help.
Plus, I'm a little confused about whether I should use mxnet-cu92 or something else.
When I first installed CUDA 11.4, I installed mxnet-cu101 (mxnet-cu112 didn't work for me),
but I found cu92 is for using the GPU, so I installed it again with CUDA 9.2,
and it's still not working.
Here is my code:
ctx = mx.gpu(0)
model = gluoncv.model_zoo.get_model('fcn_resnet50_ade', pretrained=True, ctx=ctx)  # deeplab_resnet101_ade / fcn_resnet50_ade
total_df = pd.DataFrame(columns=ADE20KSegmentation.CLASSES)
start = time.time()
Moly = []
Fences = {}
for i in range(len(image_file)):
    if i % 100 == 0:
        print(i)
        print(time.time() - start)
        start = time.time()
    img = mx.image.imread(image_file[i])
    image = test_transform(mx.img.imresize(img, 1200, 1200), ctx)
    output_array = model.predict(image)
    predict_index = mx.nd.argmax(output_array, 1).asnumpy()
    holy = find_fence(predict_index)
    Moly.append(holy)
    flat = predict_index.flatten()
    output_dict = {}
    for index, cls in enumerate(ADE20KSegmentation.CLASSES):
        num_pixel = len(np.where(flat == index)[0])
        output_dict[cls] = round(num_pixel / 1440000, 4)
    total_df = total_df.append(output_dict, ignore_index=True)

for names, holy in zip(image_names, Moly):
    Fences[names] = holy
and I got the error MXNetError: C:\Jenkins\workspace\mxnet-tag\mxnet\src\ndarray\ndarray.cc:1285: GPU is not enabled on this line:
model = gluoncv.model_zoo.get_model('fcn_resnet50_ade', pretrained=True, ctx=ctx)
What should I do now?
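No answer is recorded here, but that error message usually means the installed mxnet wheel was built without CUDA support. A quick diagnostic sketch (my addition, not from the original thread):

import mxnet as mx

print(mx.__version__)
print(mx.context.num_gpus())  # 0 means this mxnet build cannot see any GPU

# smoke test: raises "GPU is not enabled" on a CPU-only build
a = mx.nd.ones((2, 3), ctx=mx.gpu(0))
print(a)

If num_gpus() returns 0, uninstall the plain mxnet package and install the mxnet-cuXY wheel matching the installed CUDA toolkit (e.g. mxnet-cu92 for CUDA 9.2), keeping only one mxnet variant installed at a time.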

Docplex: interrupt the execution

I am running a program in CPLEX Python (docplex); it has reached a 48% gap with 41 solutions, and it has already been running for 2 days. Can I interrupt it and get a result, without restarting the execution with a gap limit?
If you run on Windows you could try Ctrl+C.
If that does not work, you could rerun your model requesting one new solution each time and save the current solution; then whenever you abort, you have the latest solution.
Example with the zoo story:
from docplex.mp.model import Model
from docplex.mp.progress import *

mdl = Model(name='buses')
nbbus40 = mdl.integer_var(name='nbBus40')
nbbus30 = mdl.integer_var(name='nbBus30')
mdl.add_constraint(nbbus40*40 + nbbus30*30 >= 300, 'kids')
mdl.minimize(nbbus40*500 + nbbus30*400)

# stop the solve each time a new incumbent solution is found
mdl.parameters.mip.limits.solutions = 1
while mdl.solve(log_output=False):
    for v in mdl.iter_integer_vars():
        print(v, " = ", v.solution_value)
    print("status : ", mdl.solve_details.status)
    if "optimal solution" in str(mdl.solve_details.status):
        break
that gives
nbBus40 = 8.0
nbBus30 = 0
status : solution limit exceeded
nbBus40 = 7.0
nbBus30 = 1.0
status : solution limit exceeded
nbBus40 = 6.0
nbBus30 = 2.0
status : integer optimal solution
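As an alternative to looping on a solution limit, a sketch of my own using the same docplex parameter API: set an acceptable gap or a time budget up front, so a single solve stops on its own and returns its best incumbent (on a fresh model, without the solution limit above):

# stop either at a 10% optimality gap or after one hour, whichever comes first
mdl.parameters.mip.tolerances.mipgap = 0.10
mdl.parameters.timelimit = 3600
sol = mdl.solve(log_output=True)
if sol:
    print("objective:", mdl.objective_value)
    print("remaining gap:", mdl.solve_details.mip_relative_gap)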

Should I trust the product of 500 probabilities?

I thought special techniques would be needed, but experiments show little difference.
import numpy as np
import tensorflow as tf
p = np.random.rand(500)
print(f'prod : {np.prod(p)}')
print(f'exp-sum-log: {np.exp(sum(np.log(p)))}')
e = tf.constant(p)
print(f'tensorflow : {tf.math.reduce_prod(e)}')
Output from three runs:

prod : 1.564231010023949e-224
exp-sum-log: 1.5642310100240046e-224
tensorflow : 1.5642310100239522e-224

prod : 7.854750422663386e-232
exp-sum-log: 7.854750422664323e-232
tensorflow : 7.854750422663366e-232

prod : 3.635104367139144e-211
exp-sum-log: 3.635104367137875e-211
tensorflow : 3.63510436713914e-211
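With 500 factors the three methods agree because the product (around 1e-224) is still well above float64's smallest positive value (about 5e-324), so nothing underflows. A sketch of where it does start to matter (my addition; the 1000-factor count is just an illustration):

import numpy as np

rng = np.random.default_rng(0)
p = rng.random(1000)      # expected product ~ exp(-1000), far below float64 range
print(np.prod(p))         # 0.0 -- the running product underflows
print(np.sum(np.log(p)))  # finite, about -1000: work in log space instead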

Is it possible to have SCIP and python-zibopt work under Windows?

Recently I wanted to try some open-source solvers instead of CPLEX. I found that PICOS + zibopt may be a good choice. However, I can hardly find instructions on how to make zibopt work with Python under Windows properly. I downloaded the Windows libraries (.dll files) of SCIP, and I tried to install python-zibopt with the command "python setup.py install". The error "blockmemshell/memory.h no such file" always popped up. I suspect it is because my compiler, which is VS120COMNTOOL, doesn't find the SCIP solver. Is there any chance that I can make SCIP work under Windows now?
Did you have a look at the current Python interface of SCIP 3.1.0? It uses the library from the SCIP Optimization Suite, so you don't have to link another LP solver to SCIP.
On Windows, please try this modified setup.py file:
import sys, os, readline, glob, platform
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
from Cython.Build import cythonize

BASEDIR = os.path.dirname(os.path.abspath(__file__))
BASEDIR = os.path.dirname(BASEDIR)
BASEDIR = os.path.dirname(BASEDIR)
INCLUDEDIR = os.path.join(BASEDIR, 'src')
BASEDIR = os.path.dirname(BASEDIR)

# identify compiler version
prefix = "MSC v."
i = sys.version.find(prefix)
if i == -1:
    raise Exception('cannot determine compiler version')
i = i + len(prefix)
s, rest = sys.version[i:].split(" ", 1)
majorVersion = int(s[:-2]) - 6
minorVersion = int(s[2:3]) / 10.0

if platform.architecture()[0].find('64') >= 0:
    LIBDIR = os.path.join(BASEDIR, 'vc' + str(majorVersion), 'scip_spx', 'x64', 'Release')
else:
    LIBDIR = os.path.join(BASEDIR, 'vc' + str(majorVersion), 'scip_spx', 'Release')

print('BASEDIR=' + BASEDIR)
print('INCLUDEDIR=' + INCLUDEDIR)
print('LIBDIR=' + LIBDIR)

def complete(text, state):
    return (glob.glob(text + '*') + [None])[state]

readline.set_completer_delims(' \t\n;')
readline.parse_and_bind("tab: complete")
readline.set_completer(complete)

libscipopt = 'lib/libscipopt.so'
includescip = 'include/scip'

ext_modules = []
ext_modules += [Extension('pyscipopt.scip', [os.path.join('pyscipopt', 'scip.pyx')],
                          #extra_compile_args=['-g', '-O0', '-UNDEBUG'],
                          include_dirs=[INCLUDEDIR],
                          library_dirs=[LIBDIR],
                          #runtime_library_dirs=[os.path.abspath('lib')],
                          libraries=['spx', 'scip_spx'])]
                          #libraries=['scipopt', 'readline', 'z', 'gmp', 'ncurses', 'm'])]

setup(
    name = 'pyscipopt',
    version = '0.1',
    description = 'wrapper for SCIP in Python',
    author = 'Zuse Institute Berlin',
    author_email = 'scip@zib.de',
    license = 'MIT',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules,
    packages = ['pyscipopt']
)
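Once the extension builds, a minimal usage sketch (my addition, written against the current PySCIPOpt API; the early interface may have differed slightly):

from pyscipopt import Model

model = Model("example")
x = model.addVar("x", vtype="I")  # integer variable
y = model.addVar("y", vtype="I")
model.addCons(40*x + 30*y >= 300)
model.setObjective(500*x + 400*y, "minimize")
model.optimize()
print("x =", model.getVal(x), ", y =", model.getVal(y))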