How to check if the GPU is currently being used by someone while connected to a remote server

I'm working remotely and from time to time I need to use the GPU for model training. I connect to the company network using ssh. Is there a way to see if someone is currently using the GPU for training?
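Assuming the server has an NVIDIA GPU with the standard drivers installed, running nvidia-smi over ssh is the usual way to check this; a minimal sketch (the <pid> is a placeholder for a PID taken from nvidia-smi's output):
# Show current utilization, memory usage, and the processes using each GPU
nvidia-smi
# Refresh the view every second for live monitoring
watch -n 1 nvidia-smi
# Compact, script-friendly view of utilization and memory
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv
# See which user owns a GPU process
ps -o user=,cmd= -p <pid>
If utilization is at 0% and no processes are listed, the GPU is free.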

Related

Why can't I connect to GPU backend even with a Colab Pro account?

For the past week, I have been failing to connect to a GPU, even though I have no active sessions whatsoever.
The message that keeps popping up is the following:
Cannot connect to GPU backend
You cannot currently connect to a GPU due to usage limits in Colab. Learn more
As a Colab Pro subscriber you have higher usage limits than non-subscribers, but availability is not unlimited. To get the most out of Colab Pro, avoid using GPUs when they are not necessary for your work.
Note that I have a Colab Pro account.
If you use GPUs excessively, you will go over the Colab Pro quota of 24h; you will then be restricted from usage for at least 12h.
Colab Pro is better and more flexible than the free version, but it still has its limitations.

Deploy my own tensorflow model on a virtual machine with AWS

I have a TensorFlow model which is working perfectly fine on my laptop (TF 1.8 on macOS High Sierra). However, I want to scale my operations up and use Amazon's virtual machines to run predictions faster. What is the best way to use my saved model to classify images in JPEG format which are stored locally? Thank you!
You have two options:
1) Start a virtual machine on AWS (known as an Amazon EC2 instance). You can pick from many different instance types, including GPU instances. You'll have full administrative access on this machine, meaning that you can copy your TF model to it and predict just like you would on your own machine (a sketch of this workflow appears after these options).
More details on getting started with EC2 here: https://aws.amazon.com/ec2/getting-started/
I would also recommend using the Deep Learning Amazon Machine Image, which bundles all the popular ML/DL tools as well as the NVIDIA environment for GPU training/prediction: https://aws.amazon.com/machine-learning/amis/
2) If you don't want to manage virtual machines, I'd recommend looking at Amazon SageMaker. You'll be able to import your TF model and to deploy it on fully-managed infrastructure for prediction.
Here's a sample notebook showing you how to bring your own TF model to SageMaker: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/tensorflow_iris_byom/tensorflow_BYOM_iris.ipynb
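As a rough sketch of option 1, assuming a running EC2 instance reachable at ec2-host (a hypothetical hostname) with the Deep Learning AMI; predict.py, model_dir, and images/ are placeholder names:
# Copy your saved model, images, and a prediction script to the instance
scp -i my-key.pem -r model_dir/ images/ predict.py ubuntu@ec2-host:~/
# Log in and run predictions inside the AMI's TensorFlow environment
ssh -i my-key.pem ubuntu@ec2-host
source activate tensorflow_p36   # conda env name varies by AMI version
python predict.py --model_dir model_dir --images images/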
Hope this helps.

Can I scale devices on Google Cloud ML for predictions?

Does Google Cloud ML predictions run on multiple devices, or a single device?
My Google Cloud ML predictions take ~7 sec, but running my model locally behind a Flask server on a 4-core machine takes ~1.8-2.1 sec.
Is there a way to increase the number of devices/resources I am using on Google Cloud ML?
Yes, you can use more resources to serve your predictions. However, the feature is still in alpha and is only available to a select list of accounts that opted in as "Trusted Testers". Please contact cloudml-feedback@google.com if you need help setting up a prediction service using multiple cores.

Which Google Cloud Platform service is the easiest for running Tensorflow?

While working on the Udacity Deep Learning assignments, I encountered a memory problem. I need to switch to a cloud platform. I worked with AWS EC2 before, but now I would like to try Google Cloud Platform (GCP). I will need at least 8GB of memory. I know how to use Docker locally but have never tried it on the cloud.
Is there any ready-made solution for running Tensorflow on GCP?
If not, which service (Compute Engine or Container Engine) would make it easier to get started?
Any other tip is also appreciated!
Summing up the answers:
AI Platform Notebooks - One-click Jupyter Lab environment
Deep Learning VM images - Raw VMs with ML libraries pre-installed
Deep Learning Container Images - Containerized versions of the DLVM images
Cloud ML
Manual installation on Compute Engine. See instructions below.
Instructions to manually run TensorFlow on Compute Engine:
Create a project
Open the Cloud Shell (a button at the top)
List machine types: gcloud compute machine-types list. You can change the machine type I used in the next command.
Create an instance:
gcloud compute instances create tf \
--image container-vm \
--zone europe-west1-c \
--machine-type n1-standard-2
Run sudo docker run -d -p 8888:8888 --name tf b.gcr.io/tensorflow-udacity/assignments:0.5.0 (change the image name to the desired one)
Find your instance in the dashboard and edit default network.
Add a firewall rule to allow your IP as well as protocol and port tcp:8888 (a gcloud sketch of this step follows these instructions).
Find the External IP of the instance from the dashboard. Open IP:8888 on your browser. Done!
When you are finished, delete the created instance to avoid charges.
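As an alternative to the dashboard, a minimal gcloud sketch of the firewall step; the rule name allow-tf-8888 and the source IP 203.0.113.5 are placeholders:
# Allow TCP 8888 from your own IP only (replace the IP with yours)
gcloud compute firewall-rules create allow-tf-8888 \
    --allow tcp:8888 \
    --source-ranges 203.0.113.5/32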
This is how I did it and it worked. I am sure there is an easier way to do it.
More Resources
You might be interested to learn more about:
Google Cloud Shell
Container-Optimized Google Compute Engine Images
Google Cloud SDK for a more responsive shell and more.
Good to know
"The contents of your Cloud Shell home directory persist across projects between all Cloud Shell sessions, even after the virtual machine terminates and is restarted"
To list all available image versions: gcloud compute images list --project google-containers
Thanks to @user728291, @MattW, @CJCullen, and @zain-rizvi.
Google Cloud Machine Learning is open to the world in Beta form today. It provides TensorFlow as a Service so you don't have to manage machines and other raw resources. As part of the Beta release, Datalab has been updated to provide commands and utilities for machine learning. Check it out at: http://cloud.google.com/ml.
Google has a Cloud ML platform in a limited Alpha.
Here is a blog post and a tutorial about running TensorFlow on Kubernetes/Google Container Engine.
If those aren't what you want, the TensorFlow tutorials should all be able to run on either AWS EC2 or Google Compute Engine.
You can now also use pre-configured Deep Learning images. They have everything required to run TensorFlow.
This is an old question, but there are new, even easier options now:
If you want to run TensorFlow with Jupyter Lab
GCP AI Platform Notebooks, which gives you one-click access to a Jupyter Lab notebook with TensorFlow pre-installed (you can also use PyTorch, R, or a few other libraries instead if you prefer).
If you just want to use a raw VM
If you don't care about Jupyter Lab and just want a raw VM with TensorFlow pre-installed, you can instead create a VM using GCP's Deep Learning VM Images. These DLVM images give you a VM with TensorFlow pre-installed and are all set up to use GPUs if you want. (The AI Platform Notebooks use these DLVM images under the hood.)
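As a sketch, creating such a VM from the command line might look like this; the instance name, zone, and accelerator are placeholders, and the image family names come from GCP's deeplearning-platform-release project (check gcloud compute images list --project deeplearning-platform-release for current ones):
# Create a VM from a TensorFlow Deep Learning VM image family
gcloud compute instances create my-tf-vm \
    --zone us-west1-b \
    --image-family tf-latest-gpu \
    --image-project deeplearning-platform-release \
    --maintenance-policy TERMINATE \
    --accelerator type=nvidia-tesla-t4,count=1 \
    --metadata install-nvidia-driver=True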
If you'd like to run it on both your laptop and the cloud
Finally, if you want to be able to run TensorFlow both on your personal laptop and in the cloud and are comfortable using Docker, you can use GCP's Deep Learning Container Images. They contain the exact same setup as the DLVM images, but packaged as containers instead, so you can launch them anywhere you like.
Extra benefit: If you're running this container image on your laptop, it's 100% free :D
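A minimal sketch of running one of these container images locally, assuming Docker is installed; the tf2-cpu image name is an example and current tags may differ:
# Run a TensorFlow Deep Learning Container, exposing Jupyter Lab on port 8080
docker run -d -p 8080:8080 gcr.io/deeplearning-platform-release/tf2-cpu
# Then open http://localhost:8080 in your browser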
I'm not sure there is a need for you to stay on the Google Cloud Platform. If you are able to use other products, you might save a lot of time, and some money.
If you are using TensorFlow, I would recommend a platform called TensorPort. It is exclusively for TensorFlow and is the easiest platform I am aware of. Code and data are loaded with git, and they provide a Python module for automatically toggling paths between the remote and your local machine. They also provide some boilerplate code for setting up distributed computing if you need it. Hope this helps.

Is it possible to run CUDA C remotely?

I am new to CUDA C and I would like an answer to a question. I'm using Ubuntu 14.04 and I have code with a pretty high computational cost: my GPU is a Quadro K600 and it takes about 15h to complete the calculations I need to do. I was wondering if there is a way to connect remotely to somebody else's computer in order to borrow greater computational capacity and speed up the calculations; does NVIDIA or some university provide that kind of service? Should that be done using ssh?
There is the NVIDIA Test Drive program that you could sign up for to try a high-end Tesla GPU.
http://www.nvidia.com/object/gpu-test-drive.html
If you are familiar with Linux (a command-line sketch of these steps follows the list):
You can log into another person's computer with ssh (a login and password are provided with the test drive program)
Copy over your code with scp or rsync
Compile your code on their machine (this is likely necessary for compatibility reasons).
Likewise, copy over your data with scp or rsync
Run on their machine
Copy back output data with scp/rsync
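A minimal sketch of that workflow, assuming a remote host remote-host and placeholder file names:
# Copy source and input data to the remote machine
scp my_kernel.cu input.dat user@remote-host:~/work/
# Log in, compile with nvcc, and run
ssh user@remote-host
cd ~/work
nvcc -O2 -o my_kernel my_kernel.cu
./my_kernel input.dat > output.dat
exit
# Copy the results back
scp user@remote-host:~/work/output.dat .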
I frequently use ssh to log into computing clusters to run larger jobs than what my local machine would be able to use. If you do end up using a cluster, they should provide you with some documentation on how to submit jobs, as it is not quite as simple as using a workstation.
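For example, on a cluster running a scheduler such as Slurm (an assumption; your cluster's scheduler, partition names, and limits will differ), a submission script might look like this:
#!/bin/bash
# job.sh - submit with: sbatch job.sh
#SBATCH --job-name=cuda-run
#SBATCH --gres=gpu:1          # request one GPU
#SBATCH --time=16:00:00       # wall-clock limit; the run takes ~15h
#SBATCH --output=job_%j.log   # %j expands to the job ID
./my_kernel input.dat > output.dat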
As previously mentioned, Amazon also lets you buy computing time on CUDA-enabled clusters. https://aws.amazon.com/articles/7249489223918169 may help if you're interested.
rCUDA might be what you are looking for.