Well I figure eight hours is enough time trying to fix this on my own, so I'll just ask folks:
I am running tensorflow-gpu 1.1.0 just fine in my virtual environment named 'tensorflow' outside of jupyterhub and Jupyter notebook. That is, I can import tensorflow and run scripts with it using the GPU.
When I'm inside my tensorflow virtualenv and using jupyterhub, Jupyter cannot seem to 'see' tensorflow. I get the following error:
ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory
1) This is a commonly seen error message indicating tensorflow install problems, yet my paths and environment variables seem fine. After all, I can use tensorflow-gpu just fine outside of Jupyter.
2) Typing 'which jupyter' shows ~/anaconda3/envs/hub/bin/jupyter, so I believe that I am referencing jupyter inside my virtualenv.
3) Pip freeze shows that I have jupyterhub and tensorflow-gpu. I even did a pip3 freeze and it shows both packages as well.
Any ideas? Can tensorflow-gpu be run from Jupyter notebooks?
I got the solution from here:
https://github.com/jupyter/notebook/issues/1290
Basically, something was 'wrong' with jupyter in that it could not read my LD_LIBRARY_PATH variable. I had put everything correctly in .bashrc, so I don't know why.
Switch to the command line (terminal). Switch into your virtual environment if you have one.
type in: jupyter notebook --generate-config
It will tell you the directory in which your jupyter configuration file is stored. If you want to list it again type: jupyter --config-dir
My jupyter_notebook_config.py file is located here: /home/me/.jupyter/jupyter_notebook_config.py
At the very top of this file, jupyter_notebook_config.py, add in the following code:
import os
c = get_config()

# LD_LIBRARY_PATH entries are directories, not individual .so files
os.environ['LD_LIBRARY_PATH'] = '/usr/local/cuda-8.0/lib64'

# env.update() takes a mapping, not a bare string
c.Spawner.env.update({'LD_LIBRARY_PATH': os.environ['LD_LIBRARY_PATH']})
Then restart jupyterhub or jupyter notebook (type at the command line: jupyter notebook).
Tensorflow-gpu should now work.
The same thing applies even if you are running jupyterhub. Make the changes in jupyter, not jupyterhub. (Each user of jupyterhub gets their own jupyter process, so make the changes not at the 'hub' level, but at the jupyter notebook level.)
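To confirm the variable actually reached the notebook process, here is a quick sanity check you can run in a notebook cell (not part of the fix itself):

import os
# Should print the CUDA lib directory set in jupyter_notebook_config.py
print(os.environ.get('LD_LIBRARY_PATH'))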
I have created a conda environment (say, called 'ds0') and installed some packages (e.g., python, pandas, etc.). I then set the interpreter in vscode to the one I just created. I expected my code to work properly in the conda environment in vscode.
But then I have a problem: when I use the interactive mode in the ipynb file, I cannot import the packages,
e.g.,
import pandas as pd
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-11-f9ebee165770> in <module>
----> 1 import pandas as pd
ModuleNotFoundError: No module named 'pandas'
Other code in the ipynb runs properly (e.g., printing a message).
So it looks like vscode in interactive mode is not using the right environment - but I did set the environment to 'ds0', and if I save the ipynb file as a python script (e.g., as 'test1.py'), I can actually run it and vscode does recognize the pandas package.
So, how can I fix the issue so that I can run the code properly in interactive mode (e.g., in the ipynb file)?
** This is a MacBook M1 laptop; I just have the latest anaconda and vscode installed. I also tried creating additional conda environments and the issue is the same. I am not sure if there is a problem with the ipykernel.
Thank you!
I now have a solution to the problem - though I do not quite know if what I found was the culprit.
What I did and what did NOT solve the problem:
I manually installed ipython and ipykernel in both the vscode terminal and the conda terminal (possibly duplicating work). After that, if I create a new ipynb file, I can select both the interpreter and the kernel properly, and the code runs perfectly - but if I open the previous ipynb file, I still have the problem. It looks like vscode is using different kernels for the two different ipynb files.
I also reinstalled Mac OS Big Sur (this may have been unnecessary) and installed a clean miniconda and vscode - however, this did not solve the problem either. vscode still did not use the right kernel, although it did seem to use the right interpreter/env which I created via conda. The icon in the upper right corner suggested that the local jupyter server was 'disconnected', and it also showed a warning saying 'invalid version -final'.
What I did and what DID solve the problem:
Then I deleted all the existing ipynb files and re-cloned them from GitHub. This solved the problem - but I do not know why or how it did.
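For anyone hitting the same kernel mismatch, explicitly registering the conda environment as a Jupyter kernel may also help (a sketch, not one of the steps above; 'ds0' is the environment name from the question):

conda activate ds0
python -m ipykernel install --user --name ds0 --display-name "Python (ds0)"

After this, vscode should list "Python (ds0)" as a selectable kernel for any notebook.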
In the top right corner, you can see the name of the kernel being used ("Python 3.8.1 64-bit: Idle", for example). Click on that and select the kernel you want to use. That's how I fixed the problem. The kernel is named within the ipynb file, and if there's a mismatch between that and your system, the notebook will not run.
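A quick way to confirm which environment the active kernel is really using (a small check, not part of the original answer):

import sys
# Should point inside the 'ds0' environment if the right kernel is active
print(sys.executable)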
I installed the Universal Sentence Encoder (Tensorflow 2) in two virtual environments with Anaconda. One is on Mac, the other on Ubuntu.
All worked with the following:
import tensorflow_hub as hub

module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model = hub.load(module_url)
Installed with:
conda create -n my-tf2-env python=3.6 tensorflow
conda init bash
conda activate my-tf2-env
conda install -c conda-forge tensorflow-hub
But, for unknown reasons, after 3 weeks the Mac stopped working, with the following error raised at:
model = hub.load(module_url)
Error: SavedModel file does not exist at: /var/folders/99/8rwn_9hx3jj9x3qz6yf0j2f00000gp/T/tfhub_modules/063d866c06683311b44b4992fd46003be952409c/{saved_model.pbtxt|saved_model.pb}
On Mac, I recreated a new env with the same procedure but got the same error.
On Ubuntu, all works well.
I want to know how to fix the Mac. Thank you for your help.
What I attempted on Mac: I tried to download "https://tfhub.dev/google/universal-sentence-encoder/4" to the local drive so that in the future I could load it from there instead of from the web URL. That process was never finished or successful. I don't remember whether anything was downloaded to the Mac during this attempt that might have corrupted Tensorflow-hub for my Mac login account.
This error usually occurs when the saved_model.pb is not present in the path specified in the module_url.
For example, if we consider the Folder structure as shown in the screenshot below,
The code,
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model = hub.load(module_url)
and
import tensorflow_hub as hub
module_url = "/home/mothukuru/Downloads/Hub"
model = hub.load(module_url)
work successfully.
But if saved_model.pb is not present in that Folder as shown below,
Executing the code,
import tensorflow_hub as hub
module_url = "/home/mothukuru/Downloads/Hub"
model = hub.load(module_url)
results in the below error,
OSError: SavedModel file does not exist at: /home/mothukuru/Downloads/Hub/{saved_model.pbtxt|saved_model.pb}
In your specific case, executing the code while the download of the model was still in progress might have resulted in the error.
As stated in the comment, deleting the downloaded file can fix the problem.
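For reference, a minimal way to clear the cached module (a sketch; it assumes the default cache location, i.e. TFHUB_CACHE_DIR is not set):

import os
import shutil
import tempfile

# tensorflow_hub caches downloaded modules under <tmp>/tfhub_modules by default
cache_dir = os.path.join(tempfile.gettempdir(), "tfhub_modules")
shutil.rmtree(cache_dir, ignore_errors=True)  # removes any partial download

The next hub.load() call will then re-download the model from scratch.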
Please let me know if this answer has not resolved your issue and I will be happy to modify it accordingly.
TF published some additional guidelines on caching models, apparently in response to questions about this issue.
In my case, I was running this locally on Mac via a jupyter notebook.
I was not sure how to "delete the downloaded file" as suggested in the other answer, but I found this resolved my issue:
https://www.tensorflow.org/hub/caching#reading_from_remote_storage
Reading from remote storage
Users can instruct the tensorflow_hub library to directly read models from remote storage (GCS) instead of downloading the models locally with
os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "UNCOMPRESSED"
or by setting the command-line flag --tfhub_model_load_format to UNCOMPRESSED. This way, no caching directory is needed, which is especially helpful in environments that provide little disk space but a fast internet connection.
I ran that command in my notebook, and then the error was immediately resolved.
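Putting it together, the notebook cell looked roughly like this (a sketch of the fix; the model URL is the one from the question above):

import os
os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "UNCOMPRESSED"  # must be set before loading

import tensorflow_hub as hub
model = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")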
Note: I assume this is slower, especially if you do not have a fast internet connection, since you are telling the program not to cache (store) a local copy and to download the model on demand instead.
I want to use Pycharm on my own laptop to connect to our linux cluster and use tensorflow-gpu on it.
However, it says:
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
When I use the terminal to connect to the cluster and use tensorflow GPU through the terminal, there's no problem.
However, when I use the python remote interpreter in Pycharm, the error happens when importing tensorflow-gpu:
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
The locations of libcuda.so.1 on the cluster are '/usr/lib64/nvidia/libcuda.so.1' and '/usr/lib64/libcuda.so.1'.
I tried adding them as LD_LIBRARY_PATH to the Environment variables in the Pycharm run configuration:
LD_LIBRARY_PATH=/usr/lib64/libcuda.so.1\;/usr/lib64/nvidia/libcuda.so.1
but it doesn't work.
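(For reference, LD_LIBRARY_PATH is conventionally a colon-separated list of directories rather than individual .so files, so the directory form would presumably be:

LD_LIBRARY_PATH=/usr/lib64:/usr/lib64/nvidia
)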
I can use other packages like numpy and sklearn normally.
What's more, the CUDA version corresponding to my Tensorflow GPU is 9.0. If the error were about CUDA, I would expect something like 'cannot find libcuda.so.9'; however, it shows libcuda.so.1.
I can also use tensorflow-GPU through the terminal and ssh without problems, so I think the problem might come from the Pycharm settings?
What do I need to do about the Pycharm settings, apart from adding LD_LIBRARY_PATH to the Environment variables?
I want to make some nice charts like you can see here or here.
Normal Querying via %%bq -n data works fine.
I installed datalab as described.
If I try to make a chart with the %chart line -d data -f field1,field2 logic, something happens, but no plot appears.
The solution is not mentioned in the "Using in Jupyter" section of the installation readme, but I found it on another wiki page: Jupyter Kernel and Notebook Extensions.
The fix is to install an nbextension:
jupyter nbextension install --py datalab.notebook --sys-prefix
That worked fine for me.
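To check that the extension was registered (an optional sanity check, not from the original answer):

jupyter nbextension list

The datalab extension should appear in the output as installed and enabled.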
I have installed Tensorflow and Keras via Anaconda (Conda Forge packages) on Windows. This installation set Theano as the primary backend, yet when I checked the keras.json file, surprisingly it had Tensorflow set as the main backend. Furthermore, if I try to remove the Theano installation, Keras stops working. I suspect that each time I run Keras, it overrides the json file.
How could I permanently set Tensorflow as primary backend?
In Windows,
try launching the Anaconda prompt from Start -> Anaconda* -> Anaconda Prompt.
(*Anaconda followed by your version; for me it is Anaconda3 (64-bit).)
Check whether you can see the below as the first line
set "KERAS_BACKEND=theano"
In this case, go to the directory below by default (if you set a custom install directory, you will have to navigate there instead)
C:\Users\yourusername\AppData\Local\Continuum\Anaconda3\etc\conda\activate.d
and open the keras_activate batch file using notepad.
Inside the file, change the line
set "KERAS_BACKEND=theano"
to
set "KERAS_BACKEND=tensorflow"
You are now set to use Keras with the tensorflow backend.
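To verify which backend Keras actually picked up (a quick check in a fresh Python session):

import keras
print(keras.backend.backend())  # should print 'tensorflow'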
As the question has aged, for anybody coming across this now: the suggestion is to use tensorflow.keras, which has been available since TensorFlow 1.15.
https://www.tensorflow.org/api_docs/python/tf/keras
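A minimal usage sketch of the bundled Keras (no backend configuration or keras.json involved):

import tensorflow as tf
from tensorflow import keras  # Keras ships inside TensorFlow

print(keras.backend.backend())  # always 'tensorflow'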