VS Code interactive mode: I can open an ipynb file but cannot import modules I have installed (e.g., pandas)

I created a conda environment (say, called 'ds0') and installed some packages in it (e.g., python, pandas, etc.). In VS Code I then set the interpreter to that environment, expecting my code to work properly in the conda environment inside VS Code.
But then I hit a problem: when I use the interactive mode in the ipynb file, I cannot import the packages. For example:
import pandas as pd
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-11-f9ebee165770> in <module>
----> 1 import pandas as pd
ModuleNotFoundError: No module named 'pandas'
Other code in the ipynb runs properly (e.g., printing a message).
So it looks like VS Code in the interactive mode is not using the right environment, even though I did set the environment to 'ds0'. Also, if I save the ipynb file as a Python script (e.g., as 'test1.py'), I can run it and VS Code does recognize the pandas package.
So how can I fix this so that I can run the code properly in the interactive mode (i.e., in the ipynb file)?
** This is a MacBook M1 laptop with the latest Anaconda and VS Code installed. I also tried creating additional conda environments and the issue is the same. I am not sure whether there is a problem with the ipykernel.
Thank you!

I now have a solution to the problem, though I do not quite know whether what I found was the real culprit.
What I did and what did NOT solve the problem:
I manually installed ipython and ipykernel from both the VS Code terminal and the conda terminal (possibly duplicating the installs). After that, if I create a new ipynb file I can select both the interpreter and the kernel properly and the code runs perfectly; but if I open the previous ipynb file I still have the problem. It looks like VS Code is using different kernels for the two ipynb files.
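A quick way to see which interpreter a given notebook's kernel is actually running under is to execute a small check in a cell of the interactive window (a minimal sanity-check sketch; 'ds0' is just the example environment name from above):
import sys
# If the kernel matches the conda environment, this path should contain 'envs/ds0'.
print(sys.executable)
print(sys.prefix)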
I also reinstalled macOS Big Sur (this may have been unnecessary) and installed a clean Miniconda and VS Code; this did not solve the problem either. VS Code still did not use the right kernel, although it appeared to use the right interpreter/environment that I created via conda. The icon in the upper right suggested that the local Jupyter server was 'disconnected', and there was also a warning saying 'invalid version -final'.
What I did and what DID solve the problem:
Then I deleted all the existing ipynb files and re-cloned them from GitHub. This solved the problem, but I do not know why or how.

In the top right corner, you can see the name of the kernel used ("Python 3.8.1 64-bit: Idle", for example). Click on that, and select the kernel you want to use. That's how I fixed the problem. The Kernel is named within the ipynb file, and if there's a mismatch between that and your system, the notebook will not run.
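You can also see which kernel a notebook file asks for by inspecting its metadata, since .ipynb files are plain JSON (a small sketch; 'test1.ipynb' is a hypothetical file name):
import json

# Print the kernelspec recorded inside the notebook file itself.
with open('test1.ipynb') as f:
    nb = json.load(f)
print(nb.get('metadata', {}).get('kernelspec'))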

Related

Running remote Pycharm interpreter with tensorflow and cuda (with module load)

I am using a remote computer in order to run my program on its GPU. My program contains some code using tensorflow functions, and for easier debugging with PyCharm I would like to connect via SSH with a remote interpreter to the computer with the GPU. That part is easy, since PyCharm has this option and I can connect. However, tensorflow is not loaded automatically, so I get an import error.
Note that in our institution we run module load cuda/10.0 and module load tensorflow/1.14.0 each time the machine is started. This is the tricky part: opening a remote terminal creates another session which is not related to the remote interpreter session, so it does not affect the remote interpreter's modules.
I know that module load generally configures environment variables, but I am not sure how I can export those environment variables into the environment variables PyCharm configures before a run.
Any help would be appreciated. Thanks in advance.
The workaround turned out to be relatively simple: first, I installed the EnvFile plugin, as explained here: https://stackoverflow.com/a/42708476/13236698
Then I created an .env file with a quick Python script that extracts all environment variables and their values from os.environ and writes them to a file in the format <env_variable>=<variable_value>, saved with the .env extension. Then I loaded it into PyCharm, and voila: all tensorflow modules were loaded fine.
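A script along the lines described above could look like this (just a sketch of the idea; the output file name is hypothetical):
import os

# Write every environment variable as <env_variable>=<variable_value> into a .env file,
# which the EnvFile plugin can then load for the run configuration.
with open('remote_interpreter.env', 'w') as f:
    for name, value in os.environ.items():
        f.write('{}={}\n'.format(name, value))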

How can I fix the module import error when I run the code on the command line over an SSH connection?

I am now working on a LEGO Mindstorms project using the EV3 brick. I successfully connected my PC and the EV3 brick using PyCharm and transferred the code from my laptop to the EV3 brick.
But whenever I try to run the Python file in the SSH shell from the terminal, I get a module import error: no module named 'libs'. Since I transferred all the files and even set the path using export PYTHONPATH="${PYTHONPATH}:/home/robot/csse120/libs", it should be able to import all the modules. When I run another file without using the terminal, it correctly imports the other modules and packages, but whenever I run it in the shell, it cannot find files in the other packages.
I even tried inserting the following code:
import os
os.environ['PATH'] += ':/home/robot/csse120/libs'
but it didn't work. Also, in another file it says it cannot import paho-mqtt, though that worked fine when I didn't run it in the shell.
Can you help me solve this problem? I would appreciate your help.
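(For reference: PATH only controls where the shell looks for executables; Python resolves imports through sys.path / PYTHONPATH, so a fix along these lines is usually what is needed. A minimal sketch, assuming the helper modules really do live under /home/robot/csse120/libs:)
import sys

# Make the libs folder importable for this script, regardless of how it is launched.
sys.path.append('/home/robot/csse120/libs')
# Imports from that folder should now resolve when the file is run from the SSH shell.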

Pycharm giving error on a script which is working from terminal (Module: Tensorflow)

I was working with the tensorflow (GPU version) module in PyCharm. If I run a script from the terminal, it works as expected. However, when I run the script from PyCharm, it says:
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
How do I resolve this?
The PyCharm interpreter shows tensorflow as a package.
In the terminal, when I check the version of tensorflow, it is the same as in PyCharm (0.10.0rc0).
It looks like your CUDA_HOME or LD_LIBRARY_PATH is configured correctly in the console, but not in PyCharm. You can check and compare their values; in the console do:
echo $CUDA_HOME
echo $LD_LIBRARY_PATH
In PyCharm (say, in your main script):
import os
print(os.environ.get('CUDA_HOME'))
print(os.environ.get('LD_LIBRARY_PATH'))
You can configure them for the given Run Configuration in the Environment Variables section.
A better approach would be to configure those environment variables globally, so every process in the system has access to them. To do that, edit the /etc/environment file and add the original values you got from the console.
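If it helps, a tiny helper run from the working console prints both values in the KEY=VALUE form used by /etc/environment, so you can paste them straight in (just a convenience sketch):
import os

# Run this from the console session where tensorflow works.
for var in ('CUDA_HOME', 'LD_LIBRARY_PATH'):
    print('{}={}'.format(var, os.environ.get(var, '')))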
Here are very similar problems: one, two, three.

tensorflow gpu can not be called from jupyterhub/jupyter notebook, why?

Well I figure eight hours is enough time trying to fix this on my own, so I'll just ask folks:
I am running tensorflow-gpu 1.1.0 just fine in my virtual environment named 'tensorflow' outside of jupyterhub and Jupyter notebook. That is, I can import tensorflow and run scripts with it using the gpu.
When I'm inside my tensorflow virtualenv and using jupyterhub, Jupyter cannot seem to 'see' tensorflow. I get the following error:
ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory
1) This is a commonly seen error message indicating tensorflow install problems, yet my paths and environment variables seem fine. After all, I can use tensorflow-gpu just fine outside of Jupyter.
2) Typing 'which jupyter' shows ~/anaconda3/envs/hub/bin/jupyter, so I believe that I am referencing jupyter inside my virtualenv.
3) Pip freeze shows that I have jupyterhub and tensorflow-gpu. I even did a pip3 freeze and it shows both packages as well.
Any ideas? Can tensorflow-gpu be run from Jupyter notebooks?
I got the solution from here:
https://github.com/jupyter/notebook/issues/1290
Basically, something was 'wrong' with jupyter in that it could not read my LD_LIBRARY_PATH variable. I had put everything correctly in my .bashrc, so I don't know why.
Switch to the command line (terminal). Switch into your virtual environment if you have one.
Type in: jupyter notebook --generate-config
It will tell you the directory in which your Jupyter configuration file is stored. If you want to list it again, type: jupyter --config-dir
My jupyter_notebook_config.py file is located here: /home/me/.jupyter/jupyter_notebook_config.py
At the very top of this file, jupyter_notebook_config.py, add in the following code:
import os
c = get_config()
# Point LD_LIBRARY_PATH at the CUDA library directory (adjust the path to your install):
os.environ['LD_LIBRARY_PATH'] = '/usr/local/cuda-8.0/lib64'
# Pass the variable on to the spawned notebook process (update expects a dict, not a string):
c.Spawner.env.update({'LD_LIBRARY_PATH': os.environ['LD_LIBRARY_PATH']})
Then restart jupyterhub or jupyter notebook (at the command line, type: jupyter notebook).
TensorFlow GPU should then work.
The same thing applies even if you are running jupyterhub: make the changes in jupyter, not jupyterhub. (Each user of jupyterhub has their own jupyter process, so make the changes not at the 'hub' level but at the jupyter notebook level.)
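To confirm the change took effect, you can run something like the following in a notebook cell after the restart (a sketch; device_lib is one way to list devices in TensorFlow 1.x):
import os
print(os.environ.get('LD_LIBRARY_PATH'))  # should now show the CUDA lib64 path

import tensorflow as tf  # this import is where the original error appeared
print(tf.__version__)

from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices()])  # should include a '/gpu:0' entry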

How to automatically reload modules in IPython?

Before I begin, I want to say that I am not a programmer; I am a geek and an engineer. Thus, I love coding and use it academically. Stackoverflow taught me more than 80% of what I know about python.
My problem is that I need to manually reload the modules in my scripts by first importing importlib in my terminal and then using importlib.reload(*modulename*) to reload them. I want my IPython terminal to automatically reload the modules in my Python scripts when I run them through the IPython terminal. This functionality was provided in previous versions by the magic command %autoreload, which does not seem to work for me.
I have looked at the IPython documentation (link 1), tried using the %load_ext autoreload command (link 2), and tried import ipy_autoreload followed by %autoreload 2 (link 3). I found more than four other answers on Stack Overflow telling me to do the things in either link 2 or link 3; none of them worked for me. If anyone knows how to bring back autoreloading, it would make my fingers a bit happier.
Link 1: https://ipython.org/ipython-doc/3/config/extensions/autoreload.html
Link 2: https://stackoverflow.com/a/18216967/5762140
Link 3: https://stackoverflow.com/a/4765191/5762140
I am using a 64-bit installation of Windows 7. I have IPython 4.0.1, which came with my installation of Anaconda3 (3.18.9 64-bit). Screenshots of my error traceback from the IPython terminal when I try to use %load_ext autoreload can be provided on request.
All the links you have above use commands within ipython. You should try editing your config file. Open up your terminal and complete the following steps.
Step 1: Make sure you have the latest ipython version installed
$ ipython --version
Step 2: find out where your config file is
$ ipython profile create
Step 3: Open the config file with an editor based on the location of your config file. I use atom. For example:
$ atom ~/.ipython/profile_default/ipython_config.py
Step 4: Look for the following lines in the config file:
c.InteractiveShellApp.extensions = []
change it to:
c.InteractiveShellApp.extensions = ['autoreload']
and then uncomment that line
find:
c.InteractiveShellApp.exec_lines = []
change it to:
c.InteractiveShellApp.exec_lines = ['%autoreload 2']
and then uncomment that line
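After both edits, the relevant (uncommented) lines in ipython_config.py should read:
c.InteractiveShellApp.extensions = ['autoreload']
c.InteractiveShellApp.exec_lines = ['%autoreload 2']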
Done.