Having problems declaring SUMO_HOME - sumo

I'm trying to run a test Python script that uses the traci library, and it keeps returning "please declare environment variable 'SUMO_HOME'".
I'm on Ubuntu 18.04.2 with SUMO 0.32.0. I solved this problem before by running
export SUMO_HOME=/home/gustavo/Downloads/sumo-0.32.0/tools/
but this time it didn't solve the problem. So I tried adding a line inside the Python file that issues the same command via the os library:
os.system("export SUMO_HOME=/home/gustavo/Downloads/sumo-0.32.0/tool/")
That also didn't work, so I came here to ask for help. Can any of you help me, please?
import os
import sys
import optparse

os.system("export SUMO_HOME=/home/gustavo/Downloads/sumo-0.32.0/tool/")

# we need to import some python modules from the $SUMO_HOME/tools directory
if 'SUMO_HOME' in os.environ:
    tools = os.path.join(os.environ['SUMO_HOME=/home/gustavo/Downloads/sumo-0.32.0/tools/'], 'tools')
    sys.path.append(tools)
else:
    sys.exit("please declare environment variable 'SUMO_HOME'")

from sumolib import checkBinary  # Checks for the binary in environ vars
import traci


def get_options():
    opt_parser = optparse.OptionParser()
    opt_parser.add_option("--nogui", action="store_true",
                          default=False, help="run the commandline version of sumo")
    options, args = opt_parser.parse_args()
    return options


# contains TraCI control loop
def run():
    step = 0
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        print(step)
        step += 1
    traci.close()
    sys.stdout.flush()


# main entry point
if __name__ == "__main__":
    options = get_options()

    # check binary
    if options.nogui:
        sumoBinary = checkBinary('sumo')
    else:
        sumoBinary = checkBinary('sumo-gui')

    # traci starts sumo as a subprocess and then this script connects and runs
    traci.start([sumoBinary, "-c", "demo.sumocfg",
                 "--tripinfo-output", "tripinfo.xml"])
    run()
I expected the steps to appear on the terminal.

The correct location is probably
export SUMO_HOME=/home/gustavo/Downloads/sumo-0.32.0
without the tools or tool suffix. It will not work from inside the Python script with os.system, but you could modify os.environ directly.
Furthermore, you mixed up the call to os.environ in the script. It should read:
tools = os.path.join(os.environ['SUMO_HOME'], 'tools')
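Putting both fixes together, a minimal sketch of setting the variable from inside the script via os.environ (using the install path from the question; adjust it to your setup):
import os
import sys

# Set SUMO_HOME for this process only if it is not already set in the shell.
os.environ.setdefault("SUMO_HOME", "/home/gustavo/Downloads/sumo-0.32.0")
tools = os.path.join(os.environ['SUMO_HOME'], 'tools')
sys.path.append(tools)
from sumolib import checkBinary  # should now resolve without the SUMO_HOME error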

I swapped the if/else part for this code instead:
try:
    sys.path.append("/home/gustavo/Downloads/sumo-0.32.0/tools")
    from sumolib import checkBinary
except ImportError:
    sys.exit("please declare environment variable 'SUMO_HOME' as the root directory of your sumo installation (it should contain folders 'bin', 'tools' and 'docs')")
It solved the problem

Related

Testing a Jupyter Notebook

I am trying to come up with a method to test a number of Jupyter notebooks. A test should run when a new notebook is added in a GitHub branch and submitted for a pull request. The tests are not that complicated; they mostly just check whether the notebook runs end-to-end without errors, plus maybe a few asserts. However:
There are certain calls in some cells that need to be mocked, e.g. a call to download the data from a database.
There may be some magic cells in the notebooks which run a pip command or something else.
I am open to using any testing library, such as pytest or unittest, although pytest is preferred.
I looked at a few libraries for testing notebooks such as nbmake, treon, and testbook, but I was unable to make them work. I also tried to convert the notebook to a python file, but the magic cells were converted to a get_ipython().run_cell_magic(...) call which became an issue, since pytest uses python and not ipython, and get_ipython() is only available in ipython.
So, I am wondering what is a good way to test jupyter notebooks with all of that in mind. Any help is appreciated.
One straightforward approach I've already used is to execute the entire notebook with nbconvert.
A notebook failed.ipynb that raises an exception will result in a failed run, thanks to the --execute option, which tells nbconvert to execute the notebook prior to its conversion.
jupyter nbconvert --to notebook --execute failed.ipynb
# ...
# Exception: FAILED
echo $?
# 1
Another correct notebook passed.ipynb will result in a successful export.
jupyter nbconvert --to notebook --execute passed.ipynb
# [NbConvertApp] Converting notebook passed.ipynb to notebook
# [NbConvertApp] Writing 1172 bytes to passed.nbconvert.ipynb
echo $?
# 0
Cherry on the cake, you can do the same through the API and so wrap it in Pytest!
import nbformat
import pytest
from nbconvert.preprocessors import ExecutePreprocessor


@pytest.mark.parametrize("notebook", ["passed.ipynb", "failed.ipynb"])
def test_notebook_exec(notebook):
    with open(notebook) as f:
        nb = nbformat.read(f, as_version=4)
    ep = ExecutePreprocessor(timeout=600, kernel_name='python3')
    try:
        assert ep.preprocess(nb) is not None, f"Got empty notebook for {notebook}"
    except Exception:
        assert False, f"Failed executing {notebook}"
Running the test gives:
pytest test_nbconv.py
# FAILED test_nbconv.py::test_notebook_exec[failed.ipynb] - AssertionError: Failed executing failed.ipynb
# PASSED test_nbconv.py::test_notebook_exec[passed.ipynb]
Notes
There are several output formats; I've used notebook here.
This doesn't convert the notebook to a different format per se; instead, it allows running the nbconvert preprocessors on the notebook and/or conversion to other notebook formats.
The Python code example is just a quick draft; it can be improved considerably.
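For instance, one possible refinement (a sketch, assuming a recent nbconvert in which a failing cell raises CellExecutionError) is to let that exception propagate, so pytest reports the failing cell's own traceback instead of a generic assertion message:
import nbformat
import pytest
from nbconvert.preprocessors import ExecutePreprocessor


@pytest.mark.parametrize("notebook", ["passed.ipynb", "failed.ipynb"])
def test_notebook_exec(notebook):
    with open(notebook) as f:
        nb = nbformat.read(f, as_version=4)
    ep = ExecutePreprocessor(timeout=600, kernel_name='python3')
    # A failing cell raises nbconvert.preprocessors.CellExecutionError,
    # which pytest reports together with that cell's traceback.
    ep.preprocess(nb, {'metadata': {'path': '.'}})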
Here is my own solution using testbook. Let's say I have a notebook called my_notebook.ipynb that calls bigquery.Client to build a dataframe.
The trick is to inject a cell before my call to bigquery.Client and mock it:
from testbook import testbook


@testbook('./my_notebook.ipynb')
def test_get_details(tb):
    tb.inject(
        """
        import mock
        mock_client = mock.MagicMock()
        mock_df = pd.DataFrame()
        mock_df['week'] = range(10)
        mock_df['count'] = 5
        p1 = mock.patch.object(bigquery, 'Client', return_value=mock_client)
        mock_client.query().result().to_dataframe.return_value = mock_df
        p1.start()
        """,
        before=2,
        run=False
    )
    tb.execute()
    dataframe = tb.get('dataframe')
    assert dataframe.shape == (10, 2)
    x = tb.get('x')
    assert x == 7

Test if notebook is running on Google Colab

How can I test if my notebook is running on Google Colab?
I need this test because obtaining and unzipping my training data works differently on my laptop than on Colab.
Try importing google.colab
try:
    import google.colab
    IN_COLAB = True
except ImportError:
    IN_COLAB = False
Or just check if it's in sys.modules
import sys
IN_COLAB = 'google.colab' in sys.modules
For environments using ipython
If you are sure the script will be run with IPython, which is the most typical usage, you can also check which IPython interpreter is in use. I think it is a little clearer, and you don't have to import any module.
if 'google.colab' in str(get_ipython()):
    print('Running on CoLab')
else:
    print('Not running on CoLab')
If you need to do it multiple times you might want to assign a variable so you don't have to repeat the str(get_ipython()).
RunningInCOLAB = 'google.colab' in str(get_ipython())
RunningInCOLAB is True if run in a Google Colab notebook.
For environments not using ipython
In this case you first have to check whether IPython is being used at all, assuming that Colab will always use IPython.
RunningInCOLAB = 'google.colab' in str(get_ipython()) if hasattr(__builtins__,'__IPYTHON__') else False
You can check an environment variable like this:
import os

if 'COLAB_GPU' in os.environ:
    print("I'm running on Colab")
Actually, you can print out os.environ to check what's associated with Colab and then check for that key.
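A small sketch of that exploration (the exact variable names Colab sets may change between releases, so treat any key you find as an assumption to verify):
import os

# Collect every environment variable whose name mentions COLAB,
# then pick a stable-looking key to test for.
colab_vars = {k: v for k, v in os.environ.items() if 'COLAB' in k}
print(colab_vars)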
Improved Solution for all Python environments
None of the other answers given here worked for me, since I was not using IPython. I checked the environment variables Colab uses, and the following works best for checking the environment:
import os

if os.getenv("COLAB_RELEASE_TAG"):
    print("Running in Colab")
else:
    print("NOT in Colab")
In a %%bash cell, use:
%%bash
[[ ! -e /colabtools ]] && exit # Continue only if running on Google Colab
# Do Colab-only stuff here
Or the Python equivalent:
import os

if os.path.exists('/colabtools'):
    # do Colab-only stuff here
    pass

Running Tensorflow on JupyterNotebook instead of on Terminal commands

I wish to run some TensorFlow code in a Jupyter notebook.
If I run it in a terminal, the project's instructions are like this:
python src/validate_on_lfw.py ~/datasets/lfw/lfw_mtcnnpy_160 ~/models/facenet/20170512-110547
Question: how do I run it in a Jupyter notebook? Thanks.
e.g.,
# Load the model
facenet.load_model(args.model)
Simply replacing args.model with ~/models/facenet/20170512-110547, like this:
# Load the model
facenet.load_model('~/models/facenet/20170512-110547')
will give the error:
usage: ipykernel_launcher.py [-h] [--lfw_batch_size LFW_BATCH_SIZE]
[--image_size IMAGE_SIZE] [--lfw_pairs LFW_PAIRS]
[--lfw_file_ext {jpg,png}]
[--lfw_nrof_folds LFW_NROF_FOLDS]
lfw_dir model
ipykernel_launcher.py: error: too few arguments
sys.argv
Out[5]:
['/anaconda/envs/tensorflow/lib/python2.7/site-packages/ipykernel_launcher.py',
'-f',
'/Users/my_name/Library/Jupyter/runtime/kernel-770c12c9-8fbe-44f7-91dd-4b0a5c5d7537.json']
OK, simple solution...
Simply run it in the terminal as the GitHub instructions suggest, and meanwhile print out sys.argv in the terminal; it gives values like these:
sys.argv = ['src/validate_on_lfw.py', '/Users/../datasets/lfw/lfw_mtcnnpy_160', '/Users/../models/facenet/20170512-110547']
Then use these values of sys.argv in the Jupyter notebook as default values in def parse_arguments(argv), and it worked.
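Equivalently, you can set sys.argv in a notebook cell before the argument parsing runs. A rough sketch, assuming src/ is on sys.path and that validate_on_lfw.py exposes parse_arguments() and main() as in the facenet repository (the shortened paths are the ones from above):
import sys

# Mimic the command-line invocation inside the notebook.
sys.argv = ['src/validate_on_lfw.py',
            '/Users/../datasets/lfw/lfw_mtcnnpy_160',
            '/Users/../models/facenet/20170512-110547']

import validate_on_lfw  # assumed importable once src/ is on sys.path

args = validate_on_lfw.parse_arguments(sys.argv[1:])
validate_on_lfw.main(args)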

automatically run %matplotlib inline in jupyter qtconsole

Is there a way to change the config file to make jupyter qtconsole run the following command on startup?:
%matplotlib inline
Add this line to the ipython_config.py file (not the ipython_qtconsole_config.py file):
c.InteractiveShellApp.matplotlib = 'inline'
In your ipython_config.py file you can specify commands to run at startup (including magic % commands) by setting c.InteractiveShellApp.exec_lines. For example,
c.InteractiveShellApp.exec_lines = """
%matplotlib inline
%autoreload 2
import your_favorite_module
""".split('\n')
Open the file ~/.ipython/profile_default/ipython_config.py and change
c.InteractiveShellApp.code_to_run = ''
to
c.InteractiveShellApp.code_to_run = '%pylab inline'

How to Reload a Python3 C extension module?

I wrote a C extension (mycext.c) for Python 3.2. The extension relies on constant data stored in a C header (myconst.h). The header file is generated by a Python script. In the same script, I make use of the freshly compiled module. The workflow in the Python 3 script (not shown completely) is as follows:
configure_C_header_constants()
write_constants_to_C_header() # write myconst.h
os.system('python3 setup.py install --user') # compile mycext
import mycext
mycext.do_stuff()
This works perfectly fine in a Python session the first time. If I repeat the procedure in the same session (for example, in two different test cases of a unittest), the first compiled version of mycext is always (re)loaded.
How do I effectively reload an extension module with the latest compiled version?
You can reload modules in Python 3.x by using the imp.reload() function. (This function used to be a built-in in Python 2.x. Be sure to read the documentation -- there are a few caveats!)
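A minimal sketch, assuming mycext is already importable (on Python 3.4+, importlib.reload() is the non-deprecated equivalent):
import imp

import mycext

# Caveat: this does not re-dlopen an already-loaded compiled C extension.
imp.reload(mycext)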
Python's import mechanism will never dlclose() a shared library. Once loaded, the library will stay until the process terminates.
Your options (sorted by decreasing usefulness):
Move the module import to a subprocess, and call the subprocess again after recompiling, i.e. you have a Python script do_stuff.py that simply does
import mycext
mycext.do_stuff()
and you call this script using
subprocess.call([sys.executable, "do_stuff.py"])
Turn the compile-time constants in your header into variables that can be changed from Python, eliminating the need to reload the module.
Manually dlclose() the library after deleting all references to the module (a bit fragile since you don't hold all the references yourself).
Roll your own import mechanism.
Here is an example of how this can be done. I wrote a minimal Python C extension mini.so, only exporting an integer called version.
>>> import ctypes
>>> libdl = ctypes.CDLL("libdl.so")
>>> libdl.dlclose.argtypes = [ctypes.c_void_p]
>>> so = ctypes.PyDLL("./mini.so")
>>> so.PyInit_mini.argtypes = []
>>> so.PyInit_mini.restype = ctypes.py_object
>>> mini = so.PyInit_mini()
>>> mini.version
1
>>> del mini
>>> libdl.dlclose(so._handle)
0
>>> del so
At this point, I incremented the version number in mini.c and recompiled.
>>> so = ctypes.PyDLL("./mini.so")
>>> so.PyInit_mini.argtypes = []
>>> so.PyInit_mini.restype = ctypes.py_object
>>> mini = so.PyInit_mini()
>>> mini.version
2
You can see that the new version of the module is used.
For reference and experimenting, here's mini.c:
#include <Python.h>

static struct PyModuleDef minimodule = {
    PyModuleDef_HEAD_INIT, "mini", NULL, -1, NULL
};

PyMODINIT_FUNC
PyInit_mini()
{
    PyObject *m = PyModule_Create(&minimodule);
    PyModule_AddObject(m, "version", PyLong_FromLong(1));
    return m;
}
There is another way: set a new module name, import it, and change references to it.
Update: I have now created a Python library around this approach:
https://github.com/bergkvist/creload
https://pypi.org/project/creload/
Rather than using the subprocess module, you can use multiprocessing. This allows the child process to inherit all of the memory from the parent (on UNIX systems).
For this reason, you also need to be careful not to import the C extension module into the parent.
If you return a value that depends on the C extension, it might also force the C extension to be imported in the parent when it receives the function's return value.
import multiprocessing as mp
import sys


def subprocess_call(fn, *args, **kwargs):
    """Executes a function in a forked subprocess"""
    ctx = mp.get_context('fork')
    q = ctx.Queue(1)
    is_error = ctx.Value('b', False)

    def target():
        try:
            q.put(fn(*args, **kwargs))
        except BaseException as e:
            is_error.value = True
            q.put(e)

    ctx.Process(target=target).start()
    result = q.get()
    if is_error.value:
        raise result
    return result


def my_c_extension_add(x, y):
    assert 'my_c_extension' not in sys.modules.keys()
    # ^ Sanity check, to make sure you didn't import it in the parent process
    import my_c_extension
    return my_c_extension.add(x, y)


print(subprocess_call(my_c_extension_add, 3, 4))
If you want to extract this into a decorator - for a more natural feel, you can do:
class subprocess:
    """Decorate a function to hint that it should be run in a forked subprocess"""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return subprocess_call(self.fn, *args, **kwargs)


@subprocess
def my_c_extension_add(x, y):
    assert 'my_c_extension' not in sys.modules.keys()
    # ^ Sanity check, to make sure you didn't import it in the parent process
    import my_c_extension
    return my_c_extension.add(x, y)


print(my_c_extension_add(3, 4))
This can be useful if you are working in a Jupyter notebook, and you want to rerun some function without rerunning all your existing cells.
Notes
This answer might only be relevant on Linux/macOS where you have a fork() system call:
Python multiprocessing linux windows difference
https://rhodesmill.org/brandon/2010/python-multiprocessing-linux-windows/