Tensorflow Object Detection API - PYTHONPATH unsets itself when using a different terminal

This might be a silly question, but I didn't find anything on the internet. I am trying to run the object_detection API of TensorFlow on Windows 10. I've installed all the packages correctly, and I've also added the "models" directory to the PYTHONPATH using the Windows cmd:
set PYTHONPATH=C:\Program Files\Tensorflow\models
I also verified that PYTHONPATH is set correctly by typing set PYTHONPATH, which gives this output:
PYTHONPATH=C:\Program Files\Tensorflow\models
Running the following test
python research/object_detection/builders/model_builder_tf2_test.py
I get the output:
Ran 20 tests in 37.331s
OK (skipped=1)
Which is great. However, if I open another terminal and type
python research/object_detection/builders/model_builder_tf2_test.py
I get the error: ModuleNotFoundError: No module named 'official'
However, if I type the set PYTHONPATH command to verify that PYTHONPATH is correct, I get:
PYTHONPATH=C:\Users\Kohli\AppData\Local\Programs\Python\Python37\Scripts\Tensorflow\models;C:\Users\Kohli\AppData\Local\Programs\Python\Python37\Scripts\Tensorflow\models\official;C:\Users\Kohli\AppData\Local\Programs\Python\Python37\Scripts\Tensorflow\models\research
which is different from the previous terminal output. Moreover, when I open the "Environment Variables" tab, I can see that PYTHONPATH exists.
If I add the models directory to PYTHONPATH once again, it works, but only for that particular terminal. I don't understand why this is happening. Does the set PYTHONPATH command create a virtual environment for that particular terminal or something? How can I permanently add the models directory to PYTHONPATH, without typing set PYTHONPATH or sys.path.append('path/to/models') every time?
Thanks in advance
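For what it's worth: set changes a variable only for the cmd session it is typed in, which is why every new terminal falls back to whatever is stored in the Environment Variables dialog. One persistent alternative, not from the original thread, is the built-in setx command, which writes the value to the user's registry environment so that newly opened terminals inherit it:
setx PYTHONPATH "C:\Program Files\Tensorflow\models"
Note that setx only affects terminals opened afterwards, not the one it is run in, and it replaces any PYTHONPATH already stored there rather than merging with it.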

Related

Running remote Pycharm interpreter with tensorflow and cuda (with module load)

I am using a remote computer in order to run my program on its GPU. My program contains some code with tensorflow functions, and for easier debugging with PyCharm I would like to connect via SSH with a remote interpreter to the computer with the GPU. This part can be done easily, since PyCharm has this option, so I can connect there. However, tensorflow is not loaded automatically, so I get an import error.
Note that in our institution we run module load cuda/10.0 and module load tensorflow/1.14.0 each time the computer is loaded. Now this part is the tricky one: opening a remote terminal creates another session that is not related to the remote interpreter session, so it does not affect the remote interpreter's modules.
I know that module load generally configures the environment, but I am not sure how to export those environment variables into the environment variables that PyCharm configures before a run.
Any help would be appreciated. Thanks in advance.
The workaround turned out to be relatively simple: first, I installed the EnvFile plugin, as explained here: https://stackoverflow.com/a/42708476/13236698
Then I created a .env file with a quick Python script: it extracted all environment variables and their values from os.environ, wrote them to a file in the format <env_variable>=<variable_value>, and saved it with a .env extension. Then I loaded that file into PyCharm, and voila: all tensorflow modules were loaded fine.
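For reference, a minimal sketch of such a dump script; the output file name remote.env is illustrative, not from the original setup:
import os

# Write every variable of the current session in <env_variable>=<variable_value>
# format, so the resulting file can be loaded by PyCharm's EnvFile plugin.
with open('remote.env', 'w') as f:
    for name, value in os.environ.items():
        f.write('{}={}\n'.format(name, value))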

Redhat with httpd24 connecting to Informix using DBI

I'm at my wits' end on this. I have two RH7 boxes on which I just installed httpd24 (v2.4.34). They were running httpd (v2.4.6) without any connection problems. Now when I try to run Perl scripts from the browser, they fail with...
install_driver(Informix) failed: Can't load '/usr/local/lib64/perl5/auto/DBD/Informix/Informix.so' for module DBD::Informix: libifsql.so: cannot open shared object file: No such file or directory at /usr/lib64/perl5/DynaLoader.pm line 190.
at (eval 5) line 3.
Compilation failed in require at (eval 5) line 3.
Perhaps a required shared library or dll isn't installed where expected
at /var/www/html/app/cgi-bin/test_informix_odbc.cgi line 35.
But when I run the same script from the command line, as 'apache', it runs just fine. All the ENV vars are set correctly.
Anyone run into anything similar before?
Newer versions of httpd have stopped bringing the user environment in when the service is started, so httpd would no longer use the LD_LIBRARY_PATH environment variable I was setting in httpd.conf. I found this little blurb in /opt/rh/httpd24/service-environment:
Services are started in a fresh environment without any influence of user's environment (like environment variable values). As a consequence, information of all enabled collections will be lost during service start up.
grep -r "LD_LIBRARY_PATH" /opt/rh/httpd24/
/opt/rh/httpd24/enable:export LD_LIBRARY_PATH=/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
I prepended the standard Informix paths in /opt/rh/httpd24/enable.
export LD_LIBRARY_PATH=/opt/IBM/informix/lib:/opt/IBM/informix/lib/esql:/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
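After editing the enable file, the service needs a restart to pick up the new environment; assuming the standard Software Collections unit name on RHEL 7 (the thread doesn't show this step):
systemctl restart httpd24-httpd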
And everything is back to normal. Woohoo!

Write outputs from a script run in Singularity

I can't get the output of a script run through Singularity.
I have a python script, at the end of which the output is saved with:
...
with open('saveOut.pkl','wb') as myFile:
    pickle.dump(myTable,myFile)
I want to run this script with Singularity on a distant machine. Since I am learning Singularity, I made a 'sandbox' Debian image (not yet compiled into a single 'img' file) in the directory /tmp/debian; into this image I copied the Python script test.py at /usr/src, and I run it with the command:
sudo singularity exec /tmp/debian python3.5 /usr/src/test.py
The problem:
It works well as long as I only display results. With the pickle example described above, I don't get a saveOut.pkl file anywhere: the file is just not written, but I don't see any error message. I tried writing an explicit path in the Python script, for instance /usr/src/saveOut.pkl, but the result is the same.
How could I write a result ?
What was your expected result, i.e., in which directory did you expect to find the output file?
I expect a file saveOut.pkl anywhere, in the container or not; I don't care about the location. Currently I don't get it at all: neither in the container's current directory, nor in the container's /usr/src, nor on the host, nor anywhere else.
Did you look for it on the host or in the container?
Both; I don't see it anywhere.
What's happening here is that your Python script writes the pickle file to its current location (/usr/src/ in the container). Then, since output from your script is not persistent (the sandbox is not writable on execution), it gets deleted at the end of the run.
I believe you could change your script:
with open('/opt/saveOut.pkl','wb') as myFile:
    pickle.dump(myTable,myFile)
and then bind the local directory and get the output you're looking for:
sudo singularity exec -B ./:/opt /tmp/debian python3.5 /usr/src/test.py
This worked for me, anyway.
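As a quick sanity check (same bind as in the answer), listing the bind target from inside the container should show the contents of the host directory you launched from:
sudo singularity exec -B ./:/opt /tmp/debian ls -l /opt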

PyCharm giving an error on a script that works from the terminal (module: tensorflow)

I was working with the tensorflow (GPU version) module in PyCharm. If I run a script from the terminal, it works as expected. However, when I run the script from PyCharm, it says:
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
How do I resolve this?
The PyCharm interpreter shows tensorflow as a package.
In the terminal, when I check the tensorflow version, it is the same as in PyCharm (0.10.0rc0).
It looks like your CUDA_HOME or LD_LIBRARY_PATH is configured correctly in the console, but not in PyCharm. You can check and compare their values; in the console, do
echo $CUDA_HOME
echo $LD_LIBRARY_PATH
In PyCharm (say, in your main script):
import os
print(os.environ.get('CUDA_HOME'))
print(os.environ.get('LD_LIBRARY_PATH'))
You can configure them for the given Run Configuration in the Environment Variables section.
A better approach would be to configure those environment variables globally, so every process in the system has access to them. To do that, edit the /etc/environment file and add the original values you got from the console.
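For example, /etc/environment could end up containing lines like these; the paths are only illustrative (the libcudart.so.7.5 error suggests a CUDA 7.5 install), so use the exact values echoed in your console:
CUDA_HOME=/usr/local/cuda-7.5
LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64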

Command line to run a program with both xlwt and abaqusConstants modules

Windows Machine, Python 2.4.
I have a program that imports both xlwt/xlrd and the abaqusConstants module.
When I run my program with the command line: abaqus python abc.py, I get "ImportError: No module named xlwt/xlrd"
When I run my program with the command line: c:\python24\python.exe abc.py, I get "ImportError: No module named abaqusConstants".
The program ran perfectly when I ran it on my system, where xlrd/xlwt was present in c:\python24\lib and Abaqus was installed on the C drive. When I tried to access xlrd/xlwt from my organisation's common share, the above problem appeared.
Is it because Abaqus is not present in the common share? How do I rectify this issue? Please tell me what command line to use.
The module abaqusConstants is only available in Abaqus kernel executions of Python, so you need to run the program with abaqus python. Make sure that your PYTHONPATH variable is set to include the directory where xlwt/xlrd exist, for example as sketched below. See Using matplotlib (for python 2.6) with Abaqus 6.12 for a similar issue.
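A minimal sketch of such a run, assuming xlwt/xlrd live in C:\python24\Lib\site-packages; adjust the path to your install or to the common share:
set PYTHONPATH=C:\python24\Lib\site-packages
abaqus python abc.py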