I don't use Keras, and I want to use TPUs on Google Colab. Questions:
Can tf.Session automatically use TPUs?
What do tf.contrib.tpu.TPUDistributionStrategy, tf.contrib.tpu.rewrite, and tf.contrib.cluster_resolver.TPUClusterResolver do in TPU computing? Are they all necessary?
Which TensorFlow version are you running? Currently the firmware for TPUs on Google Colab only supports 1.14 (I may be wrong about the exact version, but it's definitely 1.x). However, if you are using TF 2.0, there is TPU support for the nightly 2.x builds on GCP, so perhaps you can give that a try!
Note that in 2.0 you would want to get rid of any "sessions", because sessions are no longer a thing. Check out the TPUStrategy docs for more information: https://www.tensorflow.org/guide/distributed_training#tpustrategy
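For reference, here is a minimal sketch of the TPUStrategy setup on Colab under TF 2.x (API locations vary slightly across 2.x releases; TPUStrategy later moved out of experimental, so adjust to your version):

import tensorflow as tf

# Resolve and initialize the Colab TPU; tpu='' picks up the TPU
# address that the Colab runtime provides.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Anything created here (variables, models) is replicated across TPU cores.
    weights = tf.Variable(tf.zeros([10, 10]))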
I want to run inference on the CPU, even though my machine has a GPU.
Is it possible to force TensorFlow to use the CPU rather than the GPU?
By default, TensorFlow automatically uses the GPU for inference, but since my GPU is weak (it OOMs), I wonder if there's a setting to force TensorFlow to use the CPU for inference?
For inference, I used:
tf.contrib.predictor.from_saved_model("PATH")
Assuming you're using TensorFlow 2.0, please check out this issue on GitHub:
[TF 2.0] How to globally force CPU?
The solution seems to be to hide the GPU devices from TensorFlow. You can do that using one of the methods described below:
TensorFlow 2.0:
my_devices = tf.config.experimental.list_physical_devices(device_type='CPU')
tf.config.experimental.set_visible_devices(devices=my_devices, device_type='CPU')
TensorFlow 2.1:
tf.config.set_visible_devices([], 'GPU')
(Credit to #ymodak and #henrysky, who answered the question on the GitHub issue.)
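Since tf.contrib.predictor is a TF 1.x API, here is an alternative sketch that also works there: hide the GPUs from CUDA itself, before TensorFlow is imported (CUDA_VISIBLE_DEVICES is a standard CUDA environment variable, not TensorFlow-specific):

import os

# Must be set before TensorFlow is imported; -1 hides every CUDA device.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf

predictor = tf.contrib.predictor.from_saved_model("PATH")  # now runs on the CPU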
These are the instructions for the assignment:
Convert your TensorFlow model to UFF.
Use TensorRT's C++ API to parse your model and convert it to a CUDA engine.
The TensorRT engine will automatically optimize your model, performing steps
like fusing layers, converting the weights to FP16 (or INT8 if you prefer),
optimizing to run on Tensor Cores, and so on.
Can anyone tell me how to proceed with this assignment, given that I don't have a GPU in my laptop? Is it possible to do this with Google Colab or an AWS free account?
Also, what packages do I have to install to run TensorRT on my laptop or on Google Colab?
I haven't used .uff, but I have used .onnx, and from what I've seen the process is similar.
According to the documentation, with TensorFlow you can do something like:
from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=['logits', 'classes'])
frozen_graph = converter.convert()
In TensorFlow 1.x they have it pretty straightforward: TrtGraphConverter has an option to serialize for FP16, like:
converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,
    max_workspace_size_bytes=(1 << 32),  # 4 GiB workspace
    precision_mode="FP16",
    maximum_cached_engines=100)
See the precision_mode part; once you have serialized, you can load the network easily in TensorRT. Some good examples using C++ are here.
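TensorRT also has a Python API if you prefer it over C++; as a rough sketch (the file name model.engine is just a placeholder for your serialized engine), deserializing looks like this:

import tensorrt as trt

# Deserialize a serialized CUDA engine from disk using the TensorRT runtime.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open('model.engine', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())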
Unfortunately, you'll need an NVIDIA GPU with FP16 support; check this support matrix.
If I'm correct, Google Colab offers a Tesla K80 GPU, which does not have FP16 support. I'm not sure about AWS, but I'm certain the free tier does not have GPUs.
Your cheapest option could be buying a Jetson Nano, which is around $90; it's a very capable board and I'm sure you'll use it in the future. Or you could rent an AWS GPU server, but that is a bit expensive and the setup process is a pain.
Best of luck!
Export and convert your TensorFlow model into a .onnx file.
Then use the onnx-tensorrt tool to convert it into a CUDA engine file.
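For the export step, the tf2onnx package provides a converter; here is a minimal sketch (assuming tf2onnx is installed, and with saved_model_dir and model.onnx as placeholder paths) that drives its CLI from Python:

import subprocess

# Convert a TensorFlow SavedModel to ONNX using tf2onnx's command-line entry point.
subprocess.run(
    ['python', '-m', 'tf2onnx.convert',
     '--saved-model', 'saved_model_dir',
     '--output', 'model.onnx'],
    check=True)

The resulting model.onnx is what onnx-tensorrt consumes to build the engine.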
Knowing that you can cap the amount of GPU memory TensorFlow uses,
I wonder how I can prevent TensorFlow.js from using all my GPU RAM. I didn't find anything in the API documentation.
Not yet. Currently you can only select which backend to register by using tf.setBackend, and the tensorflow backend will use all the available GPU memory.
Use the tensorflow backend by using:
tf.setBackend('tensorflow')
Use the cpu or webgl backend in the browser by using:
tf.setBackend('cpu')
tf.setBackend('webgl') // if using tfjs in the browser
I am new to TensorFlow.
I am looking for some help in understanding the minimum I would need to set up and work with a TensorFlow system.
Do I really need to read through the entire TensorFlow website documentation to understand the whole workflow?
The basics of TensorFlow are that first we create a model, which is called a computational graph, out of TensorFlow objects, and then we create a TensorFlow session in which we start running all the computation.
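As a minimal sketch of that graph-then-session workflow (TF 1.x style, since sessions were removed in 2.0):

import tensorflow as tf

# Build the computational graph; nothing is computed yet.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

# The session actually executes the graph.
with tf.Session() as sess:
    print(sess.run(c))  # 6.0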
To install it on Windows, I found this webpage: Installation of tensorflow in windows.
To learn more about TensorFlow, you can also see the tensorflow guide.
I hope this helps.
YES YOU SHOULD!
Here is an easier version of the tutorial: https://pythonprogramming.net/tensorflow-introduction-machine-learning-tutorial/
An even easier and more fun version: How to Make a Tensorflow Neural Network (LIVE)
Which type of hardware is used as part of Google Cloud ML when using TensorFlow? Only CPUs, or are Tensor Processing Units (custom cards) also available?
cf. this article:
Cloud ML currently focuses on CPUs. GPUs and TPUs will be available in the future.
Cloud TPUs are available to the public as of 2018-06-27: https://cloud.google.com/tpu/docs/release-notes
This was announced at Google Next '18:
https://www.blog.google/products/google-cloud/empowering-businesses-and-developers-to-do-more-with-ai/
At the time of writing (December 2017), GPUs are available, see https://cloud.google.com/ml-engine/docs/training-overview
If you use the gcloud command-line utility, you can, for example, add the --scale-tier BASIC_GPU option when submitting jobs to ML Engine. This currently runs your TensorFlow code on a Tesla K80.
There is also a CUSTOM scale tier, which allows more complex configurations and gives access to P100 GPUs (see https://cloud.google.com/ml-engine/docs/using-gpus).
The TPU service is in 'alpha' status according to https://cloud.google.com/tpu/ and one needs to sign up to learn more.