Is it possible to have an IPython notebook open in your own local browser while it is running on a remote machine?
How does one actually access an IPython notebook running remotely using ssh?
Quoth the extensive Jupyter Documentation for Running a Notebook Server:
The Jupyter notebook web application is based on a server-client structure. The notebook server uses a two-process kernel architecture based on ZeroMQ, as well as Tornado for serving HTTP requests.
This document describes how you can secure a notebook server and how to run it on a public interface.
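In practice, the usual pattern (sketched here with a placeholder user, host and port rather than anything taken from the docs above) is to start the notebook server on the remote machine without a browser and then forward its port over ssh from your local machine:

jupyter notebook --no-browser --port=8888        # on the remote machine
ssh -N -L 8888:localhost:8888 user@remote-host   # on your local machine

Opening http://localhost:8888 in your local browser then reaches the remote notebook server through the tunnel.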
If you store your IPython notebook on GitHub, GitHub Gist, or any file service (Dropbox), then you can point http://nbviewer.jupyter.org/ to your file and view it online.
Or you can export your notebook to HTML: https://ipython.org/ipython-doc/1/interactive/nbconvert.html
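For example, assuming a notebook named mynotebook.ipynb (the file name here is just a placeholder), the export is a one-liner on current Jupyter installs:

jupyter nbconvert --to html mynotebook.ipynb

(the IPython 1.x docs linked above spell it ipython nbconvert rather than jupyter nbconvert).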
Exporting may be less necessary now, though, as GitHub displays IPython notebooks directly (try https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/02.1-Machine-Learning-Intro.ipynb)
The code for nbViewer is also on GitHub https://github.com/jupyter/nbviewer
Let me know if you need to modify notebooks remotely or just view them.
Related
I use Jupyter Book for creating books from JupyterLab notebooks. dtale tables do not show up in the Jupyter Book output (they do show up in the Jupyter notebook). I read that Jupyter Book expects interactive outputs to "work under the assumption that the outputs they produce have self-contained HTML that works without requiring any external dependencies to load."
dtale needs a server on port 40000, so it does not fit the bill. However, I control the server that serves the Jupyter Book HTML pages. Is it possible to configure this HTML server to support the dtale requests? I would appreciate any pointers.
I have a model (based on Mask_RCNN) which I have exported as a servable. I can run it with TensorFlow Serving in a Docker container locally on my MacBook Pro, and using the JSON API it responds in 15-20 seconds, which is not fast, but I didn't really expect it to be.
I've tried to serve it on various AWS machines based on the DLAMI, and also tried some Ubuntu AMIs, specifically using a p2.xlarge with a GPU, 4 vCPUs and 61 GB of RAM. When I do this, the same model responds in about 90 seconds. The configurations are identical, since I've built a Docker image with the model inside it.
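For reference, serving a SavedModel with the official TensorFlow Serving Docker image typically looks roughly like this (the path and model name below are placeholders, not my exact setup; the mounted directory is expected to contain a numeric version subdirectory such as 1/):

docker run -p 8501:8501 \
  -v /path/to/servable:/models/maskrcnn \
  -e MODEL_NAME=maskrcnn \
  tensorflow/serving

The JSON REST API is then reachable at http://localhost:8501/v1/models/maskrcnn:predict.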
I also get a timeout using the AWS example here: https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-tfserving.html
Has anyone else experienced anything similar to this or have any ideas on how I can fix or isolate the problem?
I would like to understand how feasible it would be to spin up my own instance of a Colaboratory server that I could run within a closed network. Using the public version is unfortunately not yet an option in my company. I would really like to have something equivalent that I could use internally, which has all of the nice features such as collaborative editing.
Has anyone tried doing this? Is it even possible?
There's no way to spin up a full instance of the Colab service; i.e., the bits that integrate with GSuite / Docs / GCP / TPUs.
But, you can run local backends using the instructions here:
http://research.google.com/colaboratory/local-runtimes.html
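In outline (the exact steps may have changed, so defer to the page above), the local-runtime setup amounts to installing and enabling the jupyter_http_over_ws extension and starting a notebook server that trusts the Colab origin:

pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0

Then pick "Connect to local runtime" in Colab and paste the URL (with token) that the server prints.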
I am using a Google Colab Jupyter notebook for algorithm training and have been struggling with an annoying problem. Since Colab is running in a VM environment, all my variables become undefined if my session is idle for a few hours. I come back from lunch and the training dataframe that takes a while to load becomes undefined and I have to read_csv again to load my dataframes.
Does anyone know how to rectify this?
If the notebook is idle for some time, it might get recycled: "Virtual machines are recycled when idle for a while" (see the Colaboratory FAQ).
There is also a hard limit on how long a virtual machine can run (up to about 12 hours).
What could also happen is that your notebook gets disconnected from the internet / Google Colab. This could be an issue with your network. Read more about this here or here.
There is no way to "rectify" this, but if you have processed some data, you could add a step to save it to Google Drive before the notebook goes idle.
You can use a local runtime with Google Colab. That way the Colab notebook uses your own machine's resources, and you won't hit these limits. More on this: https://research.google.com/colaboratory/local-runtimes.html
There are various ways to save your data in the process:
you can save to the notebook VM's filesystem, e.g. df.to_csv("my_data.csv")
you can import sqlite3, the Python interface to the popular SQLite database. The difference between SQLite and other SQL databases is that the DBMS runs inside your application, and the data is saved to a file on that application's filesystem. Info: https://docs.python.org/2/library/sqlite3.html
you can save to your Google Drive, download to your local filesystem through your browser, upload to GCP, and so on; more info here: https://colab.research.google.com/notebooks/io.ipynb#scrollTo=eikfzi8ZT_rW
I am trying to run a Jupyter Notebook in the background. I found this question that included the command
jupyter notebook &> /dev/null &
which worked on my local machine. However, I have two problems:
I need a token in order to access my notebooks in a browser window. However, with the above command there is no output in the terminal window except for the process ID, so I cannot access my notebooks.
I also need to run the notebook in the background on a remote machine. I ssh into the remote machine and then run jupyter notebook --no-browser. However, once I close my laptop, the ssh connection is killed, and the notebook process along with it.
I was able to crudely circumvent the above problems by running the normal
jupyter notebook --no-browser
on the remote server and then killing the ssh connection to it. My question boils down to the following two sub-questions:
Is there any way of doing this besides killing the ssh connection? This isn't really the biggest problem, but simply killing the connection seems very hacky compared to a more elegant or effective solution.
How would I achieve the same thing on my local machine? I need to run the Jupyter Notebook in the background while also somehow getting the output. Can I direct the output into another file or read it somewhere else?
Generate a password for your Jupyter Notebook server so that you don't need to log in with a token (which changes each time you restart the server).
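On reasonably recent versions of the notebook server this is a built-in command:

jupyter notebook password

It prompts for a password and stores a hashed copy in ~/.jupyter/jupyter_notebook_config.json; after that the server asks for the password instead of a token.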
Run your Jupyter Notebook server in a screen or tmux session; that way, whenever you close the connection to the remote server, you just detach from the session and the server keeps running remotely. The next time you want to access it, ssh to the remote server and type screen -r to reattach.
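A minimal screen workflow (session and host names below are placeholders) looks like:

ssh user@remote-host
screen -S jupyter                          # start a named screen session
jupyter notebook --no-browser --port=8888
# detach with Ctrl-a d; the server keeps running on the remote machine
# later, after ssh-ing back in:
screen -r jupyter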
Run the Jupyter notebook in tmux with the --no-browser option and attach a browser whenever you want. To keep the notebook's outputs up to date across sessions, you can make use of nbconvert: the command jupyter nbconvert --to notebook --execute --inplace mynotebook.ipynb re-executes the notebook in place and saves its outputs, so they are there when you open it in the browser after detaching several times.
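For the second sub-question, running in the background locally while still getting at the output, one common approach (assuming default paths) is to redirect the server log to a file instead of /dev/null and ask the running server for its URL:

nohup jupyter notebook --no-browser > ~/jupyter.log 2>&1 &
jupyter notebook list    # prints the URLs, including tokens, of running servers

The URL with the token also appears in the log file itself.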