Since tf.data augmentations are executed only on the CPU, I need a way to run certain augmentations on the TPU for an audio project.
For example,
CPU: TFRecord read -> audio crop -> noise addition.
TPU: spectrogram -> MixUp augmentation.
Most augmentations can be done as a Keras layer on top of the model, but MixUp requires changes to both the input and the label.
Is there a way to do it using the tf.keras APIs?
And if there is any way to move part of the tf.data pipeline onto the TPU, that would also be helpful.
As you rightly mentioned, and as per the TensorFlow documentation, tf.data preprocessing is done on the CPU only.
However, you can work around this and preprocess your data on the TPU/GPU by applying the transformation function directly inside your model, with something like the code below.
inputs = tf.keras.layers.Input((512, 512, 3))
x = tf.keras.layers.Lambda(transform)(inputs)  # transform now runs on the accelerator
You can follow this Kaggle post for detailed discussion on this topic.
See the TensorFlow guide that discusses preprocessing data before the model versus inside the model. By including preprocessing inside the model, the GPU is leveraged instead of the CPU; it also makes the model portable and helps reduce training/serving skew. The guide has multiple recipes to get you started. It doesn't explicitly state that this works on a TPU, but it is worth trying.
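Since MixUp needs to modify both inputs and labels, a Lambda layer alone isn't enough. One option is to override train_step in a subclassed model so the mixing happens on the accelerator rather than in tf.data. This is only a minimal sketch: the beta-distribution sampling via two gamma draws and the rank-3 spectrogram shape are illustrative assumptions, not something from the original post.

```python
import tensorflow as tf

def mixup(x, y, alpha=0.2):
    """Mix each example with a shuffled copy of the batch.

    lam ~ Beta(alpha, alpha), sampled via two Gamma draws since
    tf.random has no direct Beta sampler.
    """
    batch = tf.shape(x)[:1]                    # [batch_size] as a 1-D tensor
    g1 = tf.random.gamma(batch, alpha)
    g2 = tf.random.gamma(batch, alpha)
    lam = g1 / (g1 + g2)                       # Beta(alpha, alpha) samples
    idx = tf.random.shuffle(tf.range(tf.shape(x)[0]))
    lam_x = tf.reshape(lam, [-1, 1, 1])        # broadcast over (time, freq)
    x_mix = lam_x * x + (1.0 - lam_x) * tf.gather(x, idx)
    lam_y = tf.reshape(lam, [-1, 1])
    y_mix = lam_y * y + (1.0 - lam_y) * tf.gather(y, idx)
    return x_mix, y_mix

class MixupModel(tf.keras.Model):
    """Subclassed model that applies MixUp inside the training step,
    so it executes on the TPU/GPU instead of the tf.data CPU pipeline."""
    def train_step(self, data):
        x, y = data
        x, y = mixup(x, y)
        return super().train_step((x, y))
```

Because the mixing is part of train_step, predict() and evaluate() are unaffected, and the tf.data pipeline only has to deliver unmixed batches.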
To replicate Multimodal Few-Shot Learning with Frozen Language Models, I am trying to train a ~7B parameter subclassed TF2 model on a TPUv3-32. Out of the 7B parameters, roughly 6B parameters are frozen.
I want to use model and data parallelism to train it as efficiently as possible. As far as I know, Mesh TensorFlow can only be used for models written in TF1.
I tried using experimental_device_assignment from TPUStrategy, but it was placing all the variables on only the first (0th) core of the TPU, which quickly ran out of memory.
On a TPUv3-8, I tried to keep computation_shape = [2, 2, 1, 2] and [1, 1, 1, 2] and num_replicas = 1 but it didn't work.
I am also open to using GPUs to train it.
According to the Cloud TPU documentation, there is no official support:
Does Cloud TPU support model parallelism?
Model parallelism (or executing non-identical programs on the multiple cores within a single Cloud TPU device) is not currently supported.
https://cloud.google.com/tpu/docs/faq
The underlying issue may be that there is no automatic sharding of the computation graph in TPUStrategy, so the graph is all placed on one device, unless (in the model code) you manually assign device placements for weights and operations to the logical devices created by DeviceAssignment.build, and handle communication across the devices carefully.
That said, there is another TF2-compatible library (also from Google) that could help if you are building a big Transformer and want layers that are friendly to graph sharding: Lingvo. In their GitHub repo, there is an example of sharding a model on a TPU v3-512 node. The library includes Google's open-sourced GPipe, which can help speed up model-parallel training loops. Lingvo should also work with GPUs.
I am using the TensorFlow centernet_resnet50_v2_512x512_kpts_coco17_tpu-8 object detection model on an Nvidia Tesla P100 to extract bounding boxes and keypoints for detecting people in a video. Using the pre-trained model from tensorflow.org, I am able to process about 16 frames per second. Is there any way I can improve the evaluation speed for this model? Here are some ideas I have been looking into:
Pruning the model graph since I am only detecting 1 type of object (people)
Have not been successful in doing this. Changing the label_map when building the model does not seem to improve performance.
Hard coding the input size
Have not found a good way to do this.
Compiling the model to an optimized form using something like TensorRT
Initial attempts to convert to TensorRT did not have any performance improvements.
Batching predictions
It looks like the pre-trained model has the batch size hard coded to 1, and so far when I try to change this using the model_builder I see a drop in performance.
My GPU utilization is about ~75% so I don't know if there is much to gain here.
TensorRT should in most cases give a large increase in frames per second compared to plain TensorFlow.
centernet_resnet50_v2_512x512_kpts_coco17_tpu-8 can be found in the TensorFlow Model Zoo.
Nvidia has released a blog post describing how to optimize models from the TensorFlow Model Zoo using DeepStream and TensorRT:
https://developer.nvidia.com/blog/deploying-models-from-tensorflow-model-zoo-using-deepstream-and-triton-inference-server/
Now regarding your suggestions:
Pruning the model graph: this can be done by converting your TensorFlow model to a TF-TRT model.
Hardcoding the input size: use static mode in TF-TRT. This is the default mode, enabled by is_dynamic_op=False.
Compiling the model: my advice would be to convert your model to TF-TRT, or first to ONNX and then to TensorRT.
Batching: specifying the batch size is also covered in the NVIDIA blog post.
Lastly, for my model a big increase in performance came from using FP16 (mixed precision) in my inference engine. You could even try INT8, although then you first have to calibrate.
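On the batching point, a quick way to see the effect is to compare per-frame predict calls against a single batched call. The toy model below is a stand-in for illustration, not the CenterNet detector; with the real model the gains come from amortizing per-call overhead over many frames.

```python
import time
import numpy as np
import tensorflow as tf

# Stand-in model; with the detector you would batch its image input instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input((64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])

frames = np.random.rand(32, 64, 64, 3).astype("float32")

# One predict call per frame (batch size 1), as in a naive video loop.
t0 = time.perf_counter()
singles = np.concatenate([model.predict(f[None], verbose=0) for f in frames])
t_single = time.perf_counter() - t0

# One batched call over all frames.
t0 = time.perf_counter()
batched = model.predict(frames, batch_size=32, verbose=0)
t_batch = time.perf_counter() - t0

print(f"per-frame: {t_single:.3f}s  batched: {t_batch:.3f}s")
```

The outputs are numerically the same either way; only the throughput differs, and on a GPU the batched call typically wins by a wide margin.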
Let's begin with the premise that I'm newly approaching TensorFlow and deep learning in general.
I have a TF 2.0 Keras-style model trained using tf.Model.train(), two available GPUs, and I'm looking to scale down inference times.
I trained the model distributed across the GPUs using the extremely handy tf.distribute.MirroredStrategy().scope() context manager:
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model.compile(...)
    model.train(...)
Both GPUs get effectively used (even if I'm not quite happy with the accuracy of the results).
I can't seem to find a similar strategy for distributing inference between GPUs with the tf.Model.predict() method: when I run model.predict() I get (obviously) usage from only one of the two GPUs.
Is it possible to instantiate the same model on both GPUs and feed them different chunks of data in parallel?
There are posts that suggest how to do it in TF 1.x, but I can't seem to replicate the results in TF 2.0:
https://medium.com/#sbp3624/tensorflow-multi-gpu-for-inferencing-test-time-58e952a2ed95
Tensorflow: simultaneous prediction on GPU and CPU
My mental struggles with the question are mainly:
TF 1.x is tf.Session()-based, while sessions are implicit in TF 2.0. If I understand correctly, the solutions I read use a separate session for each GPU, and I don't really know how to replicate that in TF 2.0.
I don't know how to use the model.predict() method with a specific session.
I know that the question is probably not well-formulated but I summarize it as:
Does anybody have a clue on how to run Keras-style model.predict() on multiple GPUs (inferencing on a different batch of data on each GPU in a parallel way) in TF2.0?
Thanks in advance for any help.
Try loading the model inside the tf.distribute.MirroredStrategy scope and use a greater batch_size:
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = tf.keras.models.load_model(saved_model_path)

result = model.predict(data, batch_size=greater_batch_size)  # data: your input array/dataset
There still does not seem to be an official example for distributed inference. There is a potential solution using tf.distribute.MirroredStrategy here: https://github.com/tensorflow/tensorflow/issues/37686. However, it does not seem to fully utilize multiple GPUs.
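For explicitly distributed inference, one pattern (adapted from the approach discussed in that issue; the toy model and random data below are placeholders) is to wrap the forward pass in strategy.run over a distributed dataset, then collect the per-replica results on the host:

```python
import numpy as np
import tensorflow as tf

# Uses all visible GPUs; falls back to a single device if only one is visible.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input((4,)),
        tf.keras.layers.Dense(2),
    ])

data = np.random.rand(16, 4).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices(data).batch(8)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def predict_step(batch):
    return model(batch, training=False)

outputs = []
for batch in dist_dataset:
    per_replica = strategy.run(predict_step, args=(batch,))
    # experimental_local_results returns one tensor per replica.
    outputs.append(tf.concat(strategy.experimental_local_results(per_replica), axis=0))

result = tf.concat(outputs, axis=0)
print(result.shape)  # (16, 2)
```

With two GPUs, each global batch of 8 is split into two per-replica batches of 4 that run in parallel; the concat restores the original order within each step.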
Has anyone implemented Faster R-CNN for TensorFlow?
I found some related repos, as follows:
Implements the ROI pooling layer
Implements Fast R-CNN based on the py-faster-rcnn repo
But for 1: assuming the ROI pooling layer works (I haven't tried it), there are still some things that need to be implemented:
The ROI data layer, e.g. roidb.
The bounding-box regression loss, e.g. SmoothL1Loss.
ROI pooling layer post-processing for end-to-end training, which should convert the ROI pooling layer's results into input for the CNN classifier.
For 2: it seems to be based on py-faster-rcnn, which in turn is based on Caffe, preparing the pre-processing (e.g. roidb) in Caffe and then feeding the data into TensorFlow to train the model. That seems odd, so I haven't tried it.
So what I want to know is: will TensorFlow support Faster R-CNN in the future? If not, is there anything I have misunderstood above, or is there any repo or anyone that supports this?
TensorFlow has just released an official Object Detection API here, which can be used, for instance, with their various slim models.
This API contains implementations of various object-detection pipelines, including the popular Faster R-CNN, along with pre-trained models.
I am wondering why buckets are introduced in the Seq2Seq TensorFlow tutorial. I understand the efficiency gain from not processing the padding symbols, but you can avoid processing the padding if you use rnn and specify the sequence_length parameter, or if you use dynamic_rnn.
Is it because it helps distribute the training across multiple devices / machines?
One reason is that seq2seq was created before dynamic_rnn was available. The other is that, even with dynamic_rnn, it still helps speed if your batches are organized by bucket.
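In modern TF2 code, the same bucketing idea is available directly in tf.data, so batches of similar-length sequences carry little padding. A small sketch (the toy sequences, bucket boundary, and batch sizes are arbitrary choices for illustration):

```python
import tensorflow as tf

# Toy variable-length integer sequences.
sequences = [[1, 2], [3, 4, 5], [6], [7, 8, 9, 10], [11, 12, 13]]
dataset = tf.data.Dataset.from_generator(
    lambda: iter(sequences),
    output_signature=tf.TensorSpec([None], tf.int32),
)

# Group sequences of similar length so per-batch padding stays small.
bucketed = dataset.bucket_by_sequence_length(
    element_length_func=lambda seq: tf.shape(seq)[0],
    bucket_boundaries=[3],       # bucket 0: length < 3, bucket 1: length >= 3
    bucket_batch_sizes=[2, 2],   # one batch size per bucket (len(boundaries) + 1)
)

for batch in bucketed:
    # Each batch is padded only to the longest sequence in that batch.
    print(batch.shape)
```

Each emitted batch is padded only to the longest element it actually contains, which is exactly the speed benefit the bucketed seq2seq tutorial was after.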