'Upload' function not working for Jupyter Notebook over SSH on an Ubuntu 18.04 machine

I am new to using Jupyter, but am well versed in R. My new role requires me to use the R kernel inside a Jupyter notebook over SSH, to share common data and save space. However, I am unable to upload any files from my local machine, even though the permissions check out. There is no error - the entire computer just hangs the moment I click 'Upload'! Has anybody ever faced this issue?
I am using Jupyter 3.1.18 via ssh on Ubuntu 18.04.
I don't have jupyter installed on my local machine.

Related

SSH Jupyter notebook using non-base Conda environment?

My problem is the following: I want to run a Jupyter notebook on my remote desktop and access it via my laptop elsewhere. I have accomplished this, but I can't use my GPU for TensorFlow because the GPU-supported version is only installed in my custom, non-base environment. Even though all of my installed Jupyter kernels are available, things don't work unless I run 'jupyter notebook' from within the correct activated conda environment (it says "no GPU" even though I select the kernel from the environment where tensorflow-gpu is installed).
Is there a simple way of running jupyter notebook from within that environment by a batch script? I also need it to run the notebook on a secondary drive.
I could of course just start up the server while at home and then access it using the token, but that's a little clumsy.
I've found a solution. On Windows, in %AppData%\Roaming\Microsoft\Windows\Start Menu\Programs\Anaconda3, there are shortcuts for various Anaconda-related programs, including Jupyter Notebook for each environment.
The shortcut target for Jupyter Notebook for my given env is:
E:\Software\Anaconda3\python.exe E:\Software\Anaconda3\cwp.py E:\Software\Anaconda3\envs\tf E:\Software\Anaconda3\envs\tf\python.exe E:\Software\Anaconda3\envs\tf\Scripts\jupyter-notebook-script.py "%USERPROFILE%"
I modified this so that it ends in "E:" --no-browser instead of the "%USERPROFILE%" part, and made that into a script. Now when I SSH into the computer and run this script, the notebook is within the correct environment and I have access to my GPU, all on the correct drive, E.
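For reference, a minimal sketch of what that one-line script could look like, assuming the same Anaconda install and "tf" environment paths as in the shortcut above (adjust them to your setup):

E:\Software\Anaconda3\python.exe E:\Software\Anaconda3\cwp.py E:\Software\Anaconda3\envs\tf E:\Software\Anaconda3\envs\tf\python.exe E:\Software\Anaconda3\envs\tf\Scripts\jupyter-notebook-script.py "E:" --no-browser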

Keep jupyter lab notebook running when SSH is terminated on local machine?

I would like to be able to turn off my local machine while my code keeps running in JupyterLab, and come back to it later. However, as soon as the SSH session is terminated, the JupyterLab kernel is stopped. My code also stops executing when I close the JupyterLab browser tab.
From the Google Cloud Platform marketplace I'm using a 'Deep Learning VM'. From there, I SSH to it through the suggested gcloud command (Cloud SDK): gcloud compute ssh --project projectname --zone zonename vmname -- -L 8080:localhost:8080. It then opens a PuTTY connection to the VM that already has JupyterLab running, which I can then access on localhost.
What can I do to be able to run my code with my local machine off in this case?
I usually use "nohup" when using a Jupyter notebook through SSH!
nohup jupyter notebook --ip=0.0.0.0 --port=xxxx --no-browser &
You can read more about it here.
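For example, a minimal sketch (8888 is just a placeholder port; output is redirected to a log file so you can recover the access token later):

nohup jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser > jupyter.log 2>&1 &
jupyter notebook list    # prints the running server's URL together with its token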
Hope it helps!
You can use Notebook remote execution.
Basically, your Notebook code will run on a remote machine and the results will be stored there, or in GCS, for later viewing.
You have the following options:
nbconvert-based options:
nbconvert: Provides a convenient way to execute the input cells of an .ipynb notebook file and save the results, both input and output cells, as a .ipynb file.
papermill: a Python package for parameterizing and executing Jupyter Notebooks. (Uses nbconvert --execute under the hood; see the sketch after this list.)
notebook executor: a tool that can be used to schedule the execution of Jupyter notebooks from anywhere (local, GCE, GCP Notebooks) to a Cloud AI Deep Learning VM. You can read more about the usage of this tool here. (Uses the gcloud SDK and papermill under the hood.)
Notebook training tool
A Python package that allows users to run a Jupyter notebook on Google Cloud AI Platform Training Jobs.
AI Platform Notebook Scheduler
This is in Alpha (Beta soon) with AI Platform Notebooks and is the recommended option. It allows you to schedule a Notebook for recurring runs; it follows the exact same sequence of steps but requires a crontab-formatted schedule option.
There are other options which allow you to execute Notebooks remotely:
tensorflow_cloud (Keras for GCP): provides APIs that allow you to easily go from debugging and training your Keras and TensorFlow code in a local environment to distributed training in the cloud.
GCP runner: allows running any Jupyter notebook function on Google Cloud Platform.
Unlike all the other solutions listed above, it allows you to run training for the whole project, not a single Python file or Jupyter notebook.
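As a rough illustration of the nbconvert and papermill options mentioned above (the notebook names and the parameter are placeholders):

jupyter nbconvert --to notebook --execute input.ipynb --output executed.ipynb    # runs the input cells and saves inputs plus outputs
papermill input.ipynb output.ipynb -p learning_rate 0.01    # same idea, injecting a parameter into the notebook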

How to use a remote machine's GPU in jupyter notebook

I am trying to run TensorFlow on a remote machine's GPU through a Jupyter notebook. However, if I print the available devices using tf, I only get CPUs. I have never used a GPU before and am relatively new to using conda / Jupyter notebooks remotely as well, so I am not sure how to set up using the GPU in a Jupyter notebook.
I am using an environment set up by someone else who already executed the same code on the same GPU, but they did it via python script, not in a jupyter notebook.
This is the only code in the other person's file that had to do with the GPU:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session  # assuming Keras on a TF 1.x backend, where set_session usually comes from
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand rather than all at once
set_session(tf.Session(config=config))
I think the problem was that I had tensorflow in my environment instead of tensorflow-gpu. But now I get the message "cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version" and I don't know how to update the driver through the terminal.
How is your environment set up? Specifically, what is your remote environment, and what is your local environment? Sounds like your CUDA drivers are out of date, but it could be more than just that. If you are just getting started, I would recommend finding an environment that requires little to no configuration work on your part, so you can get started more easily/quickly.
For example, you can run GPUs on the cloud and connect to them via a local terminal. You can also have your "local" frontend be Colab by connecting it to a local runtime. (This video explains that particular setup, but there are lots of other options.)
You may also want to try running nvidia-smi on the remote machine to see if the GPUs are visible.
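For instance, a quick check from the remote shell (assuming a TF 1.x environment, to match the ConfigProto code above):

nvidia-smi    # should list the GPU and the installed driver version
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"    # should include a GPU device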
Here is another solution that describes how to set up a GPU JupyterLab instance with Docker.
To update your drivers via terminal, run:
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
sudo reboot
Are your CUDA paths set appropriately? Like this?
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
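As a quick sanity check after reloading your shell (assuming the toolkit lives under /usr/local/cuda as above):

which nvcc && nvcc --version    # confirms the CUDA toolkit is on PATH and shows its version
echo $LD_LIBRARY_PATH           # should include /usr/local/cuda/lib64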

Windows PyCharm with remote environment not displaying figures

I have installed PyCharm Professional 2017.3.2 on my Windows 10 laptop and configured it to use a Vagrant Ubuntu 16.04 Server (VirtualBox) VM running a conda environment as the remote interpreter. I am able to execute Python scripts using this environment, but figures do not get displayed. For instance, the example in https://www.jetbrains.com/help/pycharm/scientific-mode-tutorial.html returns with exit code 0 despite no figure being rendered by the plt.show() command. No errors are reported.
The backend given by matplotlib.get_backend() is module://backend_interagg. I have seen mention of setting DISPLAY or installing Xorg on the VM, but this seems to be from older posts when QT was used in the backend. Can anyone advise on how to get plots to show with a recent setup?

Can Vagrant satisfy my requirements?

I have been looking for ways to set up an automation environment, and I found an application named Vagrant. I read the docs on the site; however, I wanted to know from the experts out there whether Vagrant with Oracle VirtualBox would meet my needs.
I need to have a script that will call Vagrant to initialize a VM [the VM image is always the same: Windows Server 2008 R2].
I need to copy some of my project-related files from a shared location onto the VM.
Call a batch file that will take care of test runs for me inside the VM.
Once my test run is complete, the VM needs to destroy itself.
Also, I would like to know if the image can be a custom .ISO file.
Sounds like Vagrant and VirtualBox will work for that scenario. Also, you might find that running commands in the VM using WinRM or SSH may be the easiest way to launch tests.
If you haven't already seen it, the blog post about Windows support in Vagrant 1.6 is informative: https://www.vagrantup.com/blog/feature-preview-vagrant-1-6-windows.html
Creating a VirtualBox/Vagrant base VM from an .iso should work, and you can then do all of your work using the VM from that point onward.
To get started, you might try these steps:
Create a VirtualBox VM from your Windows .iso, using the VirtualBox GUI or cmdline tools.
Once you have the VM in the state you want, shut it down and package it as a Vagrant box - for example, on a Mac that step looks like this (where Win7x64 is the dir containing the VirtualBox VM):
cd ~/VirtualBox\ VMs
vagrant package --base Win7x64 --output win7x64_base.box
Once that finishes, tell vagrant about the new base box:
vagrant box add win7x64 /path/to/win7x64_base.box
Then you can vagrant init/vagrant up the VM:
mkdir win7 && cd win7
vagrant init win7x64
vagrant up
To enable SSH access, I installed Cygwin in the VM and configured sshd. So, after launching, you can SSH in by running vagrant ssh.
Note that if there's no Windows user in the VM named 'vagrant', you can specify the SSH username to use with vagrant ssh by placing this in your Vagrantfile:
config.ssh.username = 'user1'
As mentioned above, WinRM is also an option for remotely running commands.
And Vagrant apparently has some convenience features to make it easy to RDP into the VM, but I haven't looked at that.
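Putting the pieces together, a rough sketch of the wrapper script the question asks for might look like this (the box comes from the steps above; the test script path inside the VM is a placeholder):

vagrant up                                   # boot the Windows box
vagrant ssh -c 'cmd /c C:\run_tests.bat'     # run the test batch file inside the VM over SSH
vagrant destroy -f                           # tear the VM down once the run is complete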