Running accelerate launch shell command from jupyterlab does not print live output - google-colaboratory

I had a notebook running on Google Colab, which would use several shell commands such as:
!accelerate launch train_dreambooth.py
Now that I am running this notebook in a GCP Managed Notebook (JupyterLab), I only get the command's output once it has finished executing, and the output also seems less verbose.
Does anybody know what is going on, and what I can do? It's a long-running command and I need to see live progress. Thanks!
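One hedged workaround (the helper name is made up; the command is the one from the question) is to launch the process yourself with subprocess and echo its output line by line, instead of relying on the ! magic, which buffers until completion on some Jupyter setups:

```python
import shlex
import subprocess

def run_live(cmd):
    """Run a shell command and print its output as it arrives, line by line."""
    proc = subprocess.Popen(
        shlex.split(cmd),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr so progress messages are not lost
        text=True,
        bufsize=1,  # line-buffered
    )
    for line in proc.stdout:
        print(line, end="")
    return proc.wait()

# In the notebook you would then call, e.g.:
# run_live("accelerate launch train_dreambooth.py")
```

If the child process still buffers because it detects a non-interactive terminal, setting PYTHONUNBUFFERED=1 in its environment can help.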

Related

Keep jupyter lab notebook running when SSH is terminated on local machine?

I would like to be able to turn off my local machine while my code keeps running in JupyterLab, and come back to it later. However, as soon as the SSH connection is terminated, the JupyterLab kernel is stopped. My code also stops executing when I close the JupyterLab browser tab.
From the Google Cloud Platform marketplace I'm using a 'Deep Learning VM'. I SSH into it with the suggested gcloud command (Cloud SDK): gcloud compute ssh --project projectname --zone zonename vmname -- -L 8080:localhost:8080. This opens a PuTTY connection to the VM, which already has JupyterLab running, and I can then access it on localhost.
What can I do to be able to run my code with my local machine off in this case?
I usually use nohup when running Jupyter Notebook through SSH:
:~$ nohup jupyter notebook --ip=0.0.0.0 --port=xxxx --no-browser &
You can read more about it here.
Hope it helps!
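The nohup idea can also be sketched generically from Python (the helper name and log path are illustrative): start_new_session detaches the child from the controlling terminal, so it keeps running after the SSH session ends.

```python
import subprocess

def launch_detached(cmd, logfile):
    """Start cmd detached from the terminal (nohup-style), logging to logfile."""
    log = open(logfile, "ab")
    return subprocess.Popen(
        cmd,
        stdout=log,
        stderr=subprocess.STDOUT,
        start_new_session=True,  # new session: the terminal's SIGHUP is not delivered
    )

# e.g. launch_detached(
#     ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8080", "--no-browser"],
#     "jupyter.log",
# )
```

This is only a sketch of the same technique; on a server, nohup (or tmux/screen) is the simpler tool for the job.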
You can use Notebook remote execution.
Basically, your notebook code will run on a remote machine, and the results will be stored there or in GCS for later viewing.
You have the following options:
nbconvert based options:
nbconvert: Provides a convenient way to execute the input cells of an .ipynb notebook file and save the results, both input and output cells, as a .ipynb file.
papermill: A Python package for parameterizing and executing Jupyter Notebooks. (Uses nbconvert --execute under the hood.)
notebook executor: A tool that can be used to schedule the execution of Jupyter notebooks from anywhere (local, GCE, GCP Notebooks) on a Cloud AI Deep Learning VM. You can read more about its usage here. (Uses the gcloud SDK and papermill under the hood.)
Notebook training tool
A Python package that allows users to run a Jupyter notebook on Google Cloud AI Platform Training Jobs.
AI Platform Notebook Scheduler
This is in Alpha (Beta soon) with AI Platform Notebooks and is the recommended option. Scheduling a notebook for recurring runs follows the exact same sequence of steps, but requires a crontab-formatted schedule option.
There are other options which allow you to execute Notebooks remotely:
tensorflow_cloud (Keras for GCP): Provides APIs that make it easy to go from debugging and training your Keras and TensorFlow code in a local environment to distributed training in the cloud.
GCP runner: Allows running any Jupyter notebook function on Google Cloud Platform.
Unlike all the other solutions listed above, it can run training for a whole project, not just a single Python file or Jupyter notebook.

Is there a way to detach running remote ssh script in PyCharm?

I regularly use a remote SSH interpreter in PyCharm, with a configured deployment. I often run remote programs from the PyCharm GUI (using the F5 key) that take hours to complete (e.g. training a deep net). Unfortunately, this means that any network outage causes the running script to exit, and I have to run it all over again. Is there a way to detach the running script so it keeps running, in the sense of what screen or nohup do? I know I can run it in screen manually via SSH, but that is a bit inconvenient.
OK, I found out that this feature is not implemented yet. There is, however, a suggested workaround.

cygwin and BQ CLI not working... "-bash: bq: command not found"

I'm trying to script the loading of some data into GCP using the command-line interface, and I'm having an issue with Cygwin and what I believe is an incomplete install of the BQ CLI.
From a DOS prompt I'm able to run commands and load data successfully, so I believe I have it installed correctly on my desktop (Windows 10, 64-bit).
Is there some additional installation required to get Cygwin to work correctly with the BQ CLI that isn't installed by default with Cygwin?
Appreciate any assistance.
I found the problem I was having: I was attempting to execute BQ from the CLI using 'bq' instead of 'bq.cmd'.
It's working fine now.

Can't run Tensorflow on GPU within jupyter-notebook

I can't run TensorFlow code on the GPU from a Jupyter notebook. The same code runs with no problem when I run it as a Python script.
I followed the main installation link:
https://www.tensorflow.org/install/install_windows
Also tried:
http://bailiwick.io/2017/11/05/tensorflow-gpu-windows-and-jupyter/
There were no problems outside the notebook when I ran it as a Python script file.
Most likely the problem is similar to this:
Tensorflow not running on GPU in jupyter notebook
More specifically, my test:
I can see both devices, CPU and GPU, via a Python script.
I can see only the CPU via the notebook.
Thanks a lot for any help in advance!
Very late, but a short answer:
Here is a tutorial on how to set up a GPU-based JupyterLab instance with Docker (which makes the installation faster).
I hope this helps!
I removed all existing environments and created a new one, which resolved the issue.
(Also, I had to apply the following to get around an issue caused by removed environments:
https://github.com/jupyter/notebook/issues/2301
)
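A frequent cause of this symptom is that the notebook kernel is running a different Python environment than the shell (which would also explain why recreating the environments fixed it). A quick, generic check, nothing here is specific to this setup:

```python
import sys

# Which interpreter is this kernel actually using? Compare it with `which python`
# in the shell where the GPU-enabled TensorFlow was installed.
print(sys.executable)
print(sys.prefix)

# From a terminal, `jupyter kernelspec list` shows which environments the
# registered kernels point at.
```

If the two interpreters differ, the notebook is importing a CPU-only TensorFlow from another environment.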

How to run Jupyter notebook and Tensorboard at the same time inside virtualenv?

I already found a workaround for this inside Docker. But in my case I am running TensorFlow inside a virtualenv, so that I can use my Jupyter notebook to write and run code.
But I also need to run TensorBoard. How can I run two web applications inside a virtualenv? I have never run two things at the same time, and I don't know how to run one in the background.
I found a simple solution: just open another shell, activate the virtualenv, and run the Jupyter notebook there while TensorBoard is already running in the other shell.
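The "run it in the background" part can also be sketched generically (the helper is hypothetical; the port and log directory are illustrative): Popen returns immediately, so a server started this way keeps running while another program occupies the foreground.

```python
import subprocess

def start_background(cmd):
    """Start a long-running server as a background child process."""
    return subprocess.Popen(cmd)

# Inside the activated virtualenv, e.g.:
# tb = start_background(["tensorboard", "--logdir", "logs", "--port", "6006"])
# ...then launch `jupyter notebook` in the foreground as usual.
# tb.terminate() stops TensorBoard when you are done.
```

This is the programmatic equivalent of appending & to the command in a shell.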