X11 forwarding with PyCharm and Docker Interpreter - matplotlib

I am developing a project in PyCharm using a Docker interpreter, but I am running into issues when doing most "interactive" things. e.g.,
import matplotlib.pyplot as plt
plt.plot([1, 2, 3], [4, 5, 6])
gives
RuntimeError: Invalid DISPLAY variable
I can circumvent this using
import matplotlib
matplotlib.use('agg')
which gets rid of the error, but no plot is produced when I do plt.show(). I also get the same error as in the thread [pycharm remote python console]: "cannot connect to X server" error with import pandas when trying to debug after importing Pandas, but I cannot SSH into my docker container, so the solution proposed there doesn't work.

I have seen the solution of passing "-e DISPLAY=$DISPLAY" into the "docker run" command, but I don't believe PyCharm has any functionality for specifying command-line parameters like this with a Docker interpreter.

Is there any way to set up some kind of permanent, generic X11 forwarding (if that is indeed the root cause) so that the plots will be appropriately passed to the DISPLAY on my local machine? More generally, has anyone used matplotlib with a Docker interpreter in PyCharm successfully?
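(With the Agg backend, figures can only be rendered to files, so a working version of the snippet above would be something like the sketch below; the filename is just an example.)
import matplotlib
matplotlib.use('agg')  # non-interactive backend: plt.show() will not open a window
import matplotlib.pyplot as plt
plt.plot([1, 2, 3], [4, 5, 6])
plt.savefig('plot.png')  # Agg can only write the figure to a file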

Here's the solution I came up with. I hope this helps others. The steps are as follows:
1. Install and run socat:
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\"
2. Install and run XQuartz (probably already installed).
3. Edit the PyCharm run/debug configuration for your project, setting the DISPLAY environment variable to the appropriate address (in my case 192.168.0.6:0).
Running/debugging the project then results in a new XQuartz popup displaying the plotted graph, with no need to save to an image file, etc.
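With the DISPLAY variable set in the run/debug configuration, a quick sanity check inside the container might look like the sketch below (192.168.0.6:0 is just the example address from step 3; substitute your own host IP):
import os
import matplotlib.pyplot as plt
print(os.environ.get('DISPLAY'))  # expect the address from the run configuration, e.g. 192.168.0.6:0
plt.plot([1, 2, 3], [4, 5, 6])
plt.show()  # should open an XQuartz window on the host if forwarding works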

Run xhost + on the host and add these options to your docker run command: -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
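To confirm from inside the container that both options took effect, a quick check along these lines can help (a generic sketch, not specific to PyCharm):
import os
print(os.environ.get('DISPLAY'))        # inherited from the host via -e DISPLAY
print(os.path.isdir('/tmp/.X11-unix'))  # True if the X11 socket directory was mounted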

Related

vscode, the interactive mode, I can open ipynb but cannot import the modules which I have installed

I have created a conda environment (say, called 'ds0') and installed some packages (e.g., python, pandas, etc.). In VSCode I then set the interpreter to the one I just created, expecting my code to work properly in that conda environment.
But then I have a problem: when I use the interactive mode in an ipynb file, I cannot import the packages, e.g.,
import pandas as pd
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-11-f9ebee165770> in <module>
----> 1 import pandas as pd
ModuleNotFoundError: No module named 'pandas'
Other code in the ipynb runs properly (e.g., printing a message). So it looks like VSCode in interactive mode is not using the right environment, even though I set the environment to 'ds0'. Also, if I save the ipynb file as a Python script (e.g., as 'test1.py'), I can run it and VSCode does recognize the pandas package.
So, how can I fix this so that I can run the code properly in interactive mode (i.e., in the ipynb file)?
This is a MacBook M1 laptop with the latest Anaconda and VSCode installed. I also tried creating additional conda environments and the issue is the same. I am not sure whether there is a problem with the ipykernel.
Thank you!
I now have a solution to the problem, though I do not quite know whether what I found was the culprit.
What I did that did NOT solve the problem:
I manually installed ipython and ipykernel from both the VSCode terminal and the conda terminal (possibly duplicating the install). After that, if I create a new ipynb file, I can select both the interpreter and the kernel properly and the code runs perfectly; but if I open the previous ipynb file, I still have the problem. It looks like VSCode uses different kernels for the two ipynb files.
I also reinstalled macOS Big Sur (this may have been unnecessary) and installed a clean Miniconda and VSCode; this did not solve the problem either. VSCode still does not use the right kernel, although it appears to use the right interpreter/environment created via conda. The icon in the upper right-hand corner suggests that the local Jupyter server is 'disconnected', and there is also a warning saying 'invalid version -final'.
What DID solve the problem:
I deleted all the existing ipynb files and re-cloned them from GitHub. This solved the problem, although I do not know why or how.
In the top right corner, you can see the name of the kernel being used ("Python 3.8.1 64-bit: Idle", for example). Click on it and select the kernel you want to use; that is how I fixed the problem. The kernel is named within the ipynb file, and if there is a mismatch between that and your system, the notebook will not run.
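To see which environment the interactive window is actually running in, a one-line check like this can help (a generic diagnostic, not part of the answers above):
import sys
print(sys.executable)  # should point inside the 'ds0' conda environment, e.g. .../envs/ds0/bin/python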

Pycharm giving error on a script which is working from terminal (Module: Tensorflow)

I was working with the tensorflow (GPU version) module in PyCharm. If I run a script from the terminal, it works as expected. However, when I run the same script from PyCharm, it says:
ImportError: libcudart.so.7.5: cannot open shared object file: No
such file or directory
How do I resolve this?
The PyCharm interpreter shows tensorflow as a package.
In the terminal, when I check the version of tensorflow, it is the same as in PyCharm (0.10.0rc0).
It looks like your CUDA_HOME or LD_LIBRARY_PATH is configured correctly in the console, but not in PyCharm. You can check and compare their values; in the console, run:
echo $CUDA_HOME
echo $LD_LIBRARY_PATH
In PyCharm (say, in your main script):
import os
print(os.environ.get('CUDA_HOME'))
print(os.environ.get('LD_LIBRARY_PATH'))
You can configure them for the given Run Configuration in the Environment Variables section.
A better approach would be to configure those environment variables globally, so every process on the system has access to them. To do that, edit the /etc/environment file and add the original values you got from the console.
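For example, /etc/environment might end up containing lines like these (the paths are only illustrative; use the exact values printed by the echo commands above):
CUDA_HOME=/usr/local/cuda-7.5
LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64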

How to automatically reload modules in IPython?

Before I begin, I want to say that I am not a programmer; I am a geek and an engineer. Thus, I love coding and use it academically. Stack Overflow taught me more than 80% of what I know about Python.
My problem is that I need to manually reload the modules in my scripts by first importing importlib in my terminal and then using importlib.reload(modulename) to reload them. I want my IPython terminal to automatically reload the modules in my Python scripts when I run them through the IPython terminal. This functionality was provided in previous versions by the magic command %autoreload, which does not seem to work for me.
I have looked at the IPython documentation (link 1), tried the %load_ext autoreload command (link 2), and tried import ipy_autoreload followed by %autoreload 2 (link 3). I found more than four other answers on Stack Overflow telling me to do the things in either link 2 or link 3; none of them worked for me. If anyone knows how to bring back autoreloading, it would make my fingers a bit happier.
Link 1: https://ipython.org/ipython-doc/3/config/extensions/autoreload.html
Link 2: https://stackoverflow.com/a/18216967/5762140
Link 3: https://stackoverflow.com/a/4765191/5762140
I am using a 64-bit installation of Windows 7. I have IPython 4.0.1, which came with my installation of Anaconda3 (3.18.9, 64-bit). Screenshots of the error traceback from the IPython terminal when I try to use %load_ext autoreload can be provided on request.
All the links above use commands within ipython. You should try editing your config file instead. Open up your terminal and complete the following steps.
Step 1: Make sure you have the latest ipython version installed
$ ipython --version
Step 2: find out where your config file is
$ ipython profile create
Step 3: Open the config file with an editor based on the location of your config file. I use atom. For example:
$ atom ~/.ipython/profile_default/ipython_config.py
Step 4: Look for the following line in the config file:
c.InteractiveShellApp.extensions = []
Change it to:
c.InteractiveShellApp.extensions = ['autoreload']
and make sure the line is uncommented.
Then find:
c.InteractiveShellApp.exec_lines = []
Change it to:
c.InteractiveShellApp.exec_lines = ['%autoreload 2']
and make sure that line is uncommented as well.
Done.
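Once both lines are active, a new IPython session should behave roughly like this sketch (mymodule is a hypothetical module on your path):
In [1]: import mymodule
In [2]: mymodule.some_function()   # runs the currently saved version
# ...edit and save mymodule.py in your editor...
In [3]: mymodule.some_function()   # the edited version runs, with no importlib.reload needed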

What is the application launched by canopy when running a python file with matplotlib?

What is the application used by Canopy when running a Python file?
This application opens in a new window when using matplotlib.
Is it possible to use this application directly, without Canopy?
When you call show, Matplotlib opens and displays a figure that has been rendered by the selected backend. You can find out what backend is in use with:
matplotlib.get_backend()
and set the backend by updating the matplotlibrc file or with:
matplotlib.use('PS')
matplotlib.use() only has an effect if it is called before pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time.
Running the same Python program with the same backend in an environment other than Canopy will display the same figure.
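For example, to reproduce the Canopy behaviour outside Canopy, the backend has to be selected before pyplot is imported; a minimal sketch (the 'Qt4Agg' name assumes a Qt4-era matplotlib with PyQt4/PySide installed, as Canopy shipped at the time):
import matplotlib
matplotlib.use('Qt4Agg')  # must run before the first import of matplotlib.pyplot
import matplotlib.pyplot as plt
plt.plot([1, 2, 3], [4, 5, 6])
plt.show()  # opens a Qt window, as Canopy does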
The application that displays the plot is Python (specifically Canopy User Python), using the Matplotlib library with a Qt backend. To run this from outside Canopy:
1) Ensure that Canopy User Python is your default Python (the simplest way is to open a "Canopy Command Prompt Window" from the Start Menu), or see https://support.enthought.com/entries/23646538-Make-Canopy-User-Python-be-your-default-Python.
2) Run the following commands:
set ets_toolkit=qt4
python my_scripty.py

ipython kernel with remote display [duplicate]

This question already has an answer here: ipython notebook on linux VM running matplotlib interactive with nbagg
I use an ipython kernel on a remote machine via:
user#remote_machine$ ipython kernel
[IPKernelApp] To connect another client to this kernel, use:
[IPKernelApp] --existing kernel-24970.json
and then through manual ssh tunneling (see here) connect a qtconsole on my local machine to it:
user#local_machine$ for port in $(cat kernel-24970.json | grep '_port' | grep -o '[0-9]\+'); do ssh remote_machine -Y -f -N -L $port:127.0.0.1:$port; done
user#local_machine$ ipython qtconsole --existing kernel-24970.json
This works fine. However, to visualize my data while debugging, I want to use matplotlib.pyplot. Although I have enabled X11 forwarding on my ssh tunnel (through -Y), when I try plotting something I get the following error:
TclError: no display name and no $DISPLAY environment variable
as if X11 forwarding does not have any effect.
Furthermore, once when I had access to the remote machine, I started the remote kernel with:
user#remote_machine$ ipython qtconsole
and repeated the same process from my local machine. This time I wasn't getting any errors, but the figures were being plotted on the remote machine instead of my local machine.
So, does anyone know if it's possible to connect to a remote ipython kernel and display plots locally? (Please note that inline mode works and shows the plots in the local qtconsole, but that's not useful for me as I frequently need to zoom in.)
A simpler and more robust approach is to run ipython remotely as you did, but instead of trying to display the figures remotely, save them to files on the remote machine. At the same time, mount the remote directory using sftp and open it in your local file browser.
Make sure to refresh your directory view in case the images saved remotely are not yet visible (it can otherwise take some time for them to appear). One simple way of refreshing the remote directory view is noted here.
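A minimal sketch of that workflow on the remote kernel (the output path is hypothetical; point it at the directory you mount over sftp):
import matplotlib
matplotlib.use('Agg')  # render without a display on the remote machine
import matplotlib.pyplot as plt
plt.plot([1, 2, 3], [4, 5, 6])
plt.savefig('/home/user/plots/fig1.png')  # appears in the sftp-mounted folder on the local machine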