Accessing GLPK options through PuLP on Google Colab Notebook

I am trying to solve a LP problem using PuLP on a Google Colab Notebook. To produce a sensitivity report, I want to use the '--ranges filename.txt' option of the GLPK solver. I have installed both PuLP and GLPK as follows:
!pip install pulp
!apt-get install -y -qq glpk-utils
Here is a small example I'm trying to solve:
from pulp import *
prob = LpProblem('Test_Problem',LpMaximize) # Model
x1=LpVariable("x1",0,100) #Variables
x2=LpVariable("x2",0,100)
prob += 5*x1 + 10*x2 # Objective
prob += x1 + 5*x2 <= 500 #Constraints
prob += 2*x1 + 3*x2 <= 200
prob.solve(GLPK(options=[])) # Solve Without '--ranges sensitivity.txt'
print("Status : ", LpStatus[prob.status]) # Output
print("Objective : ", value(prob.objective))
for v in prob.variables():
    print(v.name, " : ", v.varValue)
This runs fine and gives me the desired output. However, if I use 'options' and change the solve line to
prob.solve(GLPK(options=['--ranges sensitivity.txt']))
I get this error:
/usr/local/lib/python3.6/dist-packages/pulp/apis/glpk_api.py in actualSolve(self, lp)
91
92 if not os.path.exists(tmpSol):
---> 93 raise PulpSolverError("PuLP: Error while executing "+self.path)
94 status, values = self.readsol(tmpSol)
95 lp.assignVarsVals(values)
PulpSolverError: PuLP: Error while executing glpsol
I have checked that the same code with 'options' works fine on my computer and produces the correct sensitivity.txt file. But for some reason, it is not working on Colab. (I've installed GLPK via conda-forge on my laptop.)
What can I do to solve this?
Thanks!

The options argument passed to GLPK_CMD needs each option as a separate list element, with no spaces inside a single string, so:
prob.solve(GLPK(msg=True, options=['--ranges', 'sensitivity.txt']))
Then it works. In your case, GLPK exited with an error without solving the problem, saying:
Invalid option '--ranges sensitivity.txt'; try glpsol --help
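For completeness, a minimal sketch of reading the report back inside the notebook (assuming the file lands in the notebook's current working directory, which is the default in Colab):
prob.solve(GLPK(msg=True, options=['--ranges', 'sensitivity.txt']))
# The ranges report is a plain-text file written next to the notebook,
# so it can be read back and printed directly.
with open('sensitivity.txt') as f:
    print(f.read())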

Related

Kernel appears to have died on Jupyter notebook, julia 1.7.2

I'm running a Julia 1.7.2 kernel in a Jupyter notebook on Mac, but when I run the following code I get an error: The kernel appears to have died. It will restart automatically.
I have tried using ] build IJulia but it doesn't work.
This is my code:
Pkg.add("JuMP")
Pkg.add("Cbc")
using JuMP
using Cbc
model = Model(Cbc.Optimizer);
#variable(model, q[1:T] >= 0);
#variable(model, y[1:T] >= 0, Bin);
#variable(model, x[1:T] >= 0);
#objective(model, Min, sum( K*y[t] + h*x[t] for t in 1:T));
#constraint(model,[t = [1]] , x[t] == x0 + q[t] - d[t]);
#constraint(model,[t = 2:T], x[t] == x[t-1] + q[t] - d[t]);
#constraint(model,[t = 1:T], q[t] <= M*y[t]);
I hope you can help. Thank you in advance.
UPDATE:
I tried installing Cbc in the Julia REPL, but it doesn't work.
I then ran the script in Julia in the terminal on Mac and got the following error:
Do you know how to fix this?
UPDATE:
I found out that the problem is that I have a new MacBook with an M1 chip. After installing the version of Julia built for Intel (running under Rosetta), it now works!

Testing a Jupyter Notebook

I am trying to come up with a method to test a number of Jupyter notebooks. A test should run when a new notebook is implemented in a GitHub branch and submitted for a pull request. The tests are not that complicated; they mostly just check that the notebook runs end-to-end without any errors, plus maybe a few asserts. However:
There are certain calls in some cells that need to be mocked, e.g. a call to download the data from a database.
There may be some magic cells in the notebooks which run a pip command or something else.
I am open to using any testing library, such as pytest or unittest, although pytest is preferred.
I looked at a few libraries for testing notebooks such as nbmake, treon, and testbook, but I was unable to make them work. I also tried to convert the notebook to a python file, but the magic cells were converted to a get_ipython().run_cell_magic(...) call which became an issue, since pytest uses python and not ipython, and get_ipython() is only available in ipython.
So, I am wondering what is a good way to test jupyter notebooks with all of that in mind. Any help is appreciated.
One straightforward approach I've already used is to execute the entire notebook with nbconvert.
A notebook failed.ipynb that raises an exception will result in a failed run, thanks to the --execute option, which tells nbconvert to execute the notebook prior to conversion.
jupyter nbconvert --to notebook --execute failed.ipynb
# ...
# Exception: FAILED
echo $?
# 1
Another correct notebook passed.ipynb will result in a successful export.
jupyter nbconvert --to notebook --execute passed.ipynb
# [NbConvertApp] Converting notebook passed.ipynb to notebook
# [NbConvertApp] Writing 1172 bytes to passed.nbconvert.ipynb
echo $?
# 0
Cherry on the cake, you can do the same through the API and so wrap it in Pytest!
import nbformat
import pytest
from nbconvert.preprocessors import ExecutePreprocessor

@pytest.mark.parametrize("notebook", ["passed.ipynb", "failed.ipynb"])
def test_notebook_exec(notebook):
    # Read the notebook and execute it with a fresh python3 kernel;
    # a cell that raises makes preprocess() raise in turn.
    with open(notebook) as f:
        nb = nbformat.read(f, as_version=4)
    ep = ExecutePreprocessor(timeout=600, kernel_name='python3')
    try:
        assert ep.preprocess(nb) is not None, f"Got empty notebook for {notebook}"
    except Exception:
        assert False, f"Failed executing {notebook}"
Running the test gives:
pytest test_nbconv.py
# FAILED test_nbconv.py::test_notebook_exec[failed.ipynb] - AssertionError: Failed executing failed.ipynb
# PASSED test_nbconv.py::test_notebook_exec[passed.ipynb]
Notes
There are several output formats; I've used notebook here.
This doesn't convert a notebook to a different format per se; instead it allows the running of nbconvert preprocessors on a notebook, and/or conversion to other notebook formats.
The Python code example is just a quick draft; it can be improved considerably.
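As one possible improvement (a sketch, not part of the original answer), the notebooks can be collected automatically instead of hard-coding their names; the notebooks/ directory and glob pattern below are assumptions:
import pathlib
import nbformat
import pytest
from nbconvert.preprocessors import ExecutePreprocessor

# Collect every notebook under the (assumed) notebooks/ directory.
NOTEBOOKS = sorted(str(p) for p in pathlib.Path("notebooks").glob("**/*.ipynb"))

@pytest.mark.parametrize("notebook", NOTEBOOKS)
def test_notebook_runs(notebook):
    with open(notebook) as f:
        nb = nbformat.read(f, as_version=4)
    # Executing the notebook raises if any cell errors out.
    ExecutePreprocessor(timeout=600, kernel_name="python3").preprocess(nb)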
Here is my own solution using testbook. Let's say I have a notebook called my_notebook.ipynb that calls bigquery.Client and builds a dataframe from the query result.
The trick is to inject a cell before my call to bigquery.Client and mock it:
from testbook import testbook

@testbook('./my_notebook.ipynb')
def test_get_details(tb):
    # Inject a cell before the notebook's call to bigquery.Client so the
    # client is mocked and no real query is issued.
    tb.inject(
        """
        import mock
        mock_client = mock.MagicMock()
        mock_df = pd.DataFrame()
        mock_df['week'] = range(10)
        mock_df['count'] = 5
        p1 = mock.patch.object(bigquery, 'Client', return_value=mock_client)
        mock_client.query().result().to_dataframe.return_value = mock_df
        p1.start()
        """,
        before=2,
        run=False
    )
    tb.execute()
    dataframe = tb.get('dataframe')
    assert dataframe.shape == (10, 2)
    x = tb.get('x')
    assert x == 7

PGF / LaTeX Backend in Matplotlib via Jupyter Notebook SLURM Job on HPC System

I am a university student using my university's computing cluster.
I installed TeX Live in my home directory at ~/.local/texlive/. I have a file called mplrc, and the MATPLOTLIBRC environment variable points to it. The mplrc file contains the following lines:
backend: pgf
pgf.rcfonts: false
pgf.texsystem: pdflatex
pgf.preamble: \input{mpl_settings.tex}
text.usetex: true
font.family: serif
font.size: 12
The mpl_settings.tex file is in the same directory as the mplrc file and contains the following
\usepackage{amsmath}
\usepackage[T1]{fontenc}
\usepackage{gensymb}
\usepackage{lmodern}
\usepackage{siunitx}
On the cluster I am using, I must submit a SLURM job to run the Jupyter notebook. The example code I am trying to run within the notebook is:
import numpy as np
import matplotlib.pyplot as plt

formula = (
    r'$\displaystyle '
    r'N = \int_{E_\text{min}}^{E_\text{max}} '
    r'\int_0^A'
    r'\int_{t_\text{min}}^{t_\text{max}} '
    r'\Phi_0 \left(\frac{E}{\SI{1}{\GeV}}\right)^{\!\!-γ}'
    r' \, \symup{d}A \, \symup{d}t \, \symup{d}E'
    r'$'
)

def power_law_spectrum(energy, normalisation, spectral_index):
    return normalisation * energy**(-spectral_index)

bin_edges = np.logspace(2, 5, 15)
bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
y = power_law_spectrum(bin_centers, 1e-5, 2.5)
relative_error = np.random.normal(1, 0.2, size=len(y))
y_with_err = relative_error * y

fig, ax = plt.subplots()
ax.errorbar(
    np.log10(bin_centers),
    y_with_err,
    xerr=[
        np.log10(bin_centers) - np.log10(bin_edges[:-1]),
        np.log10(bin_edges[1:]) - np.log10(bin_centers)
    ],
    yerr=0.5 * y_with_err,
    linestyle='',
)
ax.text(0.1, 0.1, formula, transform=plt.gca().transAxes)
ax.set_yscale('log')
fig.tight_layout(pad=0)
plt.show()
This generates an enormous error message, but the root of it is
RuntimeError: latex was not able to process the following string:
b'lp'
However, underneath that, I see what I think is the real problem:
! LaTeX Error: File `article.cls' not found.
I've set my PATH so that it finds the right latex command, but what else needs to be set in order to find the article.cls file? It seems like it's something particular to the Python notebook. When running kpsewhich article.cls in a terminal within the Jupyterlab interface, the file gets found. But trying ! kpsewhich article.cls or subprocess.run(['kpsewhich', 'article.cls']) within the Python notebook does not find the file.
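A quick way to compare what the SLURM-launched kernel actually sees with the terminal environment (a debugging sketch using only the standard library):
import os
import shutil
import subprocess

# Which latex/kpsewhich binaries the kernel would pick up, if any.
print(shutil.which('latex'))
print(shutil.which('kpsewhich'))

# PATH and TeX-related variables as seen by the kernel process.
print(os.environ.get('PATH'))
print(os.environ.get('TEXINPUTS'))

# Ask kpsewhich directly from the kernel's environment.
print(subprocess.run(['kpsewhich', 'article.cls'],
                     capture_output=True, text=True).stdout)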
I figured it out. I forgot I had run a section of code which set
TEXINPUTS=/path/to/some/directory
Looks like I missed a : in my TEXINPUTS, so TeX was only looking in /path/to/some/directory
The solution was to have
TEXINPUTS=/path/to/some/directory:
That way it looked in my current directory, but also continued looking elsewhere.
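If that variable is being set from inside the notebook, a minimal sketch of the fix (the directory is the placeholder path from above):
import os

# The trailing ':' keeps the default TeX search path in addition to the
# extra directory instead of replacing it entirely.
os.environ['TEXINPUTS'] = '/path/to/some/directory:'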

"No graph definition files were found" - TensorBoard error

I used the following code in PyCharm:
import tensorflow as tf
sess = tf.Session()
a = tf.constant(value=5, name='input_a')
b = tf.constant(value=3, name='input_b')
c = tf.multiply(a,b, name='mult_c')
d = tf.add(a,b, name='add_d')
e = tf.add(c,d, name='add_e')
print(sess.run(e))
writer = tf.summary.FileWriter("./tb_graph", sess.graph)
Then, I pasted following line to the Anaconda Prompt:
tensorboard --logdir=="tb_graph"
I tried it both with "" and with '', as proposed in "Tensorboard: No graph definition files were found.", but neither works for me.
I checked the code on a laptop with Ubuntu 16.04 and on another one with Win10, so it probably isn't a system-specific error.
I also tried adding and removing --host=127.0.0.1 in the Anaconda Prompt and checking several times both http://localhost:6006/ and http://desktop-.......:6006/.
Still the same error:
No graph definition files were found.
To store a graph, create a tf.summary.FileWriter and pass the graph either via the constructor, or by calling its add_graph() method. You may want to check out the graph visualizer tutorial.
....
Please tell me what is wrong in the code/prompt command?
EDIT: On Ubuntu I used the normal terminal, of course.
EDIT2: I used both = and == in the command prompt.

I had a similar issue. It occurred when I specified the 'logdir' folder inside single quotes instead of double quotes. Hope this may be helpful to you.
e.g.: tensorboard --logdir='my_graph' -> TensorBoard didn't detect the graph
tensorboard --logdir="my_graph" -> TensorBoard detected the graph
The answer to my question is:
1) change "./new1_dir" into ".\\new1_dir"
and
2) pass the full path to the log directory to the Anaconda Prompt: --logdir="C:\Users\Admin\Documents\PycharmProjects\try_tb\new1_dir"
Thanks @BugKiller for your help!
EDIT: Working only on Windows for me, but still better than nothing
EDIT2: Works on Ubuntu 16.04 too
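A sketch of the same idea applied in the script itself, using the TF1 API from the question (the directory name follows the answer above; the key point is that the FileWriter and the --logdir argument must point at the same absolute path):
import os
import tensorflow as tf

a = tf.constant(value=5, name='input_a')
b = tf.constant(value=3, name='input_b')
e = tf.add(a, b, name='add_e')

with tf.Session() as sess:
    print(sess.run(e))
    # An absolute path avoids any ambiguity about where the event files go.
    log_dir = os.path.abspath("new1_dir")
    writer = tf.summary.FileWriter(log_dir, sess.graph)
    writer.close()
    print(log_dir)  # pass this exact path to tensorboard --logdir="..."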

Running Tensorflow on JupyterNotebook instead of on Terminal commands

I wish to run some TensorFlow code in a Jupyter notebook.
If I run it in a terminal, the link above gives instructions like this:
python src/validate_on_lfw.py ~/datasets/lfw/lfw_mtcnnpy_160 ~/models/facenet/20170512-110547
Question: how do I run it in a Jupyter notebook? Thanks
e.g.,
# Load the model
facenet.load_model(args.model)
Simply replacing args.model with ~/models/facenet/20170512-110547
# Load the model
facenet.load_model('~/models/facenet/20170512-110547')
will give the error:
usage: ipykernel_launcher.py [-h] [--lfw_batch_size LFW_BATCH_SIZE]
[--image_size IMAGE_SIZE] [--lfw_pairs LFW_PAIRS]
[--lfw_file_ext {jpg,png}]
[--lfw_nrof_folds LFW_NROF_FOLDS]
lfw_dir model
ipykernel_launcher.py: error: too few arguments
sys.argv
Out[5]:
['/anaconda/envs/tensorflow/lib/python2.7/site-packages/ipykernel_launcher.py',
'-f',
'/Users/my_name/Library/Jupyter/runtime/kernel-770c12c9-8fbe-44f7-91dd-4b0a5c5d7537.json']
OK, simple solution...
Run it in the terminal as the GitHub repo suggests and, in the meantime, print out sys.argv in the terminal; it looks like this:
sys.argv = ['src/validate_on_lfw.py', '/Users/../datasets/lfw/lfw_mtcnnpy_160', '/Users/../models/facenet/20170512-110547']
Then use these values of sys.argv in the Jupyter notebook as the default values in def parse_arguments(argv), and it worked.
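A variant of that idea, sketched as notebook code: instead of editing the defaults, overwrite sys.argv before calling the script's own parser (assuming validate_on_lfw.py exposes parse_arguments and a main(args) entry point, which is how the facenet scripts are typically structured; the paths are the question's placeholders):
import sys
import validate_on_lfw  # assumes src/ is on the import path

# Replace the kernel's own argv (ipykernel_launcher.py ... kernel-....json)
# with the arguments the script expects on the command line.
sys.argv = ['src/validate_on_lfw.py',
            '/Users/../datasets/lfw/lfw_mtcnnpy_160',
            '/Users/../models/facenet/20170512-110547']

args = validate_on_lfw.parse_arguments(sys.argv[1:])
validate_on_lfw.main(args)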