Specs of TPU on Colab

How do I see the specs of the TPU on Colab? For a GPU I can use commands like
nvidia-smi
but it does not work for a TPU. How do I see the specs of the TPU?

I cannot find the source, but it is said that the Colab TPU is a TPU v2-8. See more details about the TPU v2-8 (and other TPU versions) here:
https://cloud.google.com/tpu/docs/tpus
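If you want to inspect the TPU from inside a notebook, a minimal sketch (assuming the classic Colab TPU runtime and TensorFlow 2.x; none of this code is from the original answer) is to connect to the TPU cluster and list its logical devices. A v2-8 shows up as eight TPU cores:

import tensorflow as tf

# On Colab, TPUClusterResolver() auto-detects the notebook's TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# A v2-8 prints eight logical TPU devices (TPU:0 ... TPU:7).
print(tf.config.list_logical_devices('TPU'))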

Related

Google Colab and Kaggle

I have tried using Google Colab and Kaggle to run some AI code of mine, but it uses up all the RAM and the code crashes. Yes, I have the GPU enabled in both, but still to no avail. I even tried the TPU on Colab, but it's still not working. What is the remedy? Should I pay for Colab, or should I reduce my dataset?

Can you use GPU in Google Colab without any library?

I've coded a neural network from scratch in Python and I am using Google Colaboratory to train it. However, enabling GPU or TPU acceleration does not make the training any faster.
When you search for examples online, all of them use TensorFlow or other libraries, and their training times are shorter with a GPU than without one.
Am I doing it correctly, or am I missing something so that the GPU is not being used?
Just enabling the GPU or TPU won't help here: if you are not using any framework or library, you need to explicitly write your code to run on the GPU.
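For illustration only (Numba is my assumption here, not something the thread mentions), one way to run hand-written Python on the GPU without a deep-learning framework is a CUDA kernel compiled with Numba:

import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    # Each CUDA thread handles one element.
    i = cuda.grid(1)
    if i < x.size:
        out[i] = x[i] + y[i]

x = np.arange(1_000_000, dtype=np.float32)
y = 2 * x
out = np.empty_like(x)

threads_per_block = 256
blocks = (x.size + threads_per_block - 1) // threads_per_block
# Numba copies the NumPy arrays to and from the device around the launch.
add_kernel[blocks, threads_per_block](x, y, out)

Without something like this (or CuPy, PyCUDA, and similar), pure-Python loops run on the CPU no matter which accelerator the runtime has attached.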

Dump HLO IR for TPU while using TPUClusterResolver

I'm using a TPU through Google Colab and GCP, and I want to dump the XLA IR. But I have looked at the XLA docs on GitHub (the xla index page), and they only cover the case where the backend is CPU or GPU.
I have tried running a CPU-targeted program with XLA_FLAGS="--xla_dump_hlo_as_text --xla_dump_to=/content/iir/" TF_XLA_FLAGS=--tf_xla_cpu_global_jit and got the dumped HLO files. I have also tried capture_tpu_profile, but it only gives the IR for each operator on the 'op_profile' page. So, is there a way to dump the XLA IR for the whole program when the backend is a TPU?
Thank you!
Jay
Unfortunately there isn't a way to dump or access the XLA IR on Cloud TPUs at the moment, since the XLA_FLAGS would need to be set on the TPU server, which you don't control.
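For reference, the CPU-backend dump described in the question looks roughly like this (a sketch assuming recent TF 2.x; the dump path is taken from the question, and the flags must be set before TensorFlow initializes XLA):

import os
# XLA reads these flags at startup, so set them before importing TensorFlow.
os.environ["XLA_FLAGS"] = "--xla_dump_hlo_as_text --xla_dump_to=/content/iir/"
os.environ["TF_XLA_FLAGS"] = "--tf_xla_cpu_global_jit"

import tensorflow as tf

@tf.function(jit_compile=True)  # force XLA compilation of this function
def f(x):
    return x * x + 1.0

f(tf.ones([8]))
# The HLO text dumps for the compiled module now appear under /content/iir/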

How to use local Coral USB TPU with Google Colab (instead of Cloud TPU)

I have a USB TPU and would like to use it as a local runtime in Google Colab.
I was not able to find any resources on this topic.
You can use a local runtime (local Jupyter), as explained here:
https://research.google.com/colaboratory/local-runtimes.html
Do I need to install all the TPU libraries in my local Jupyter environment and then connect to it as a local runtime in order to start using my USB TPU in Colab?
I'm not familiar with Google Colab, but it looks like it lets you run your model on your own hardware. You'll then need to locate your model in order to run inference with it. There are multiple ways to run it, which are all listed here:
https://coral.withgoogle.com/docs/edgetpu/inference/
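Once the local runtime is connected, inference on the Coral device uses the ordinary Edge TPU Python APIs. A minimal sketch with tflite_runtime (the model filename below is a placeholder, not from the thread):

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load a model compiled for the Edge TPU and attach the Edge TPU delegate.
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder filename
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

# Feed a dummy input just to exercise the device.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]))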

tf.test.is_gpu_available() returns False on GCP

I am training a CNN on a GCP notebook instance using a Tesla V100. I trained a simple YOLO on my own custom data and it was pretty fast but not very accurate, so I decided to write my own code from scratch to tackle the specific aspects of the problem I care about.
I tried running my code on Google Colab before GCP, and it went well: TensorFlow detected the GPU and was able to use it, whether it was a Tesla K80 or a T4.
import tensorflow as tf
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())
tf.test.is_gpu_available() #>>> True
My problem is that this same function returns False in the GCP notebook, as if TensorFlow could not use the GPU it detected on the GCP VM. I don't know of any command that forces TensorFlow to use the GPU over the CPU, since it does that automatically.
I have already tried uninstalling and reinstalling several versions of tensorflow, tensorflow-gpu and tf-nightly-gpu (1.13 and 2.0dev, for instance), but it yielded nothing.
[screenshot: output of nvidia-smi]
Have you tried using GCP's AI Platform Notebooks instead? They offer VMs that come pre-configured with TensorFlow and with all the required GPU drivers installed.
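If you want to narrow the problem down first, a few quick checks (a sketch assuming TF 2.x; these commands are not from the original thread) can tell a CPU-only wheel apart from a driver problem:

import tensorflow as tf

print(tf.__version__)
# False here means a CPU-only TensorFlow wheel is installed.
print(tf.test.is_built_with_cuda())
# An empty list even though nvidia-smi sees the card points to a
# CUDA/cuDNN-versus-driver mismatch.
print(tf.config.list_physical_devices('GPU'))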