I am running Edge TPU models on Edge TPU devices. Is there any way I can read how much memory the TPU is consuming during the run?
Related
I'm working remotely, and from time to time I need to use the GPU for model training. I connect to the company network using SSH. Is there a way to see whether someone is currently using the GPU for training?
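Not a definitive answer, but a minimal sketch of one way to check this over SSH, assuming the NVIDIA driver and the nvidia-ml-py (pynvml) package are installed on the machine; running nvidia-smi in the shell shows the same information.

    import pynvml

    # Query each GPU for its current utilization and running compute processes.
    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        name = name.decode() if isinstance(name, bytes) else name
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu and .memory are percentages
        procs = pynvml.nvmlDeviceGetComputeRunningProcesses(handle)
        print(f"GPU {i} ({name}): {util.gpu}% busy, {len(procs)} compute process(es)")
    pynvml.nvmlShutdown()

If the process list is non-empty, someone is currently running a compute job on that GPU.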
For the past week, I have been failing to connect to a GPU, even though I have no active sessions whatsoever.
The message that keeps popping up is the following:
Cannot connect to GPU backend
You cannot currently connect to a GPU due to usage limits in Colab. Learn more
As a Colab Pro subscriber you have higher usage limits than non-subscribers, but availability is not unlimited. To get the most out of Colab Pro, avoid using GPUs when they are not necessary for your work.
Note that I have a Colab Pro account.
If you use GPUs excessively, you will go over the Colab Pro quota of 24 hours. You will then be restricted from usage for at least 12 hours.
Colab Pro is better and more flexible than the free version, but it still has its limitations.
I've been enjoying the free Colab TPUs and am looking to upgrade to the GCP ones, but I am a little concerned about the time limits for TPU Colabs; I've heard Colab only allows a certain number of hours per user.
So I am wondering if I could just use a CPU or GPU instance and connect to the TPU from my GCP project.
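If that is the setup you have in mind, here is a rough sketch of how an ordinary CPU/GPU VM can attach to a Cloud TPU over the network with TensorFlow 2.x. Note that "my-tpu" is a hypothetical TPU node name assumed to be in the same project and zone as the VM (on Colab, passing an empty string picks up the runtime's TPU instead).

    import tensorflow as tf

    # Resolve and connect to the (hypothetical) Cloud TPU node "my-tpu".
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)

    # Computation placed under this strategy's scope runs on the TPU cores.
    strategy = tf.distribute.TPUStrategy(resolver)
    print("TPU replicas:", strategy.num_replicas_in_sync)

The instance itself only feeds data and coordinates; the heavy computation happens on the TPU node.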
I have a Kinect 2 that runs at a frame rate of around 5-7 FPS (sometimes it peaks at 15 FPS).
The system is an HP laptop (G3, i7-6820HQ, 8 GB memory) with Intel HD Graphics 530 and an NVIDIA Quadro M1000M, running Windows 10 Enterprise. As far as I can tell, the system should be powerful enough to run the Kinect at a better frame rate. I've run the Kinect on another machine equipped with just an Intel GPU, and it runs at a similar frame rate, so I suspect it isn't utilizing the NVIDIA GPU.
I've followed the steps for multi-GPU systems outlined here:
https://social.msdn.microsoft.com/Forums/en-US/20dbadae-dcee-406a-b66f-a182d76cea3b/troubleshooting-and-common-issues-guide?forum=kinectv2sdk
but without any effect
Any ideas?
EDIT:
It seems to me that the Kinect is indeed using the NVIDIA card.
Any other ideas?
I also had a problem with my frame rate. I was plugged into a low-speed USB port on my computer. Simply switching to a high-speed USB port solved the problem.
I'm serving a model using TensorFlow Serving. After load-testing the system at 10 requests per second, the status of my server is:
It shows that all CPUs are busy while my GPU is idle. I found that about 50% of my requests take longer than 30 seconds.
Why doesn't TensorFlow Serving leverage my GPU?
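One common cause is that the serving binary is a CPU-only build; for Docker deployments, TensorFlow Serving ships a separate GPU image. As a quick sanity check, assuming the tensorflow Python package is installed on the same host, you can first verify that CUDA and the GPU are visible at all:

    import tensorflow as tf

    # If this prints False or an empty list, the CUDA/driver setup is the
    # problem rather than TensorFlow Serving itself.
    print("Built with CUDA:", tf.test.is_built_with_cuda())
    print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

If the GPU is visible here but stays idle under load, check that the serving process you launched actually has GPU support before digging into request batching.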