Tensorflow Serving with XLA

Is it possible to enable XLA compilation when doing inference with Tensorflow Serving?
(I am hoping it's just a matter of undocumented configs and that I can avoid implementing a custom Servable).

#njs,
It is actually not recommended to do compilations during inference. Compiling at inference time can cause the HBM to run out of memory, leaving the chips unable to serve requests.
The recommended solution is to:
Use the batch function with a fixed set of allowed batch sizes to restrict the number of compilations at run time.
Do all compilations for these allowed batch sizes at model load time instead of at inference time. This way your model is ready for inference right after loading, rather than going through high-latency compilations on the first requests.
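As a sketch of the second point: TensorFlow Serving can replay warmup requests at load time if you ship them with the SavedModel under assets.extra/tf_serving_warmup_requests. Something along these lines should trigger the compilations for each allowed batch size at load time (the model name, signature, input name, path and shapes below are hypothetical placeholders):
import os
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_log_pb2

MODEL_NAME = "my_model"            # hypothetical
INPUT_NAME = "input"               # hypothetical
ALLOWED_BATCH_SIZES = [1, 8, 16, 32]

os.makedirs("saved_model/1/assets.extra", exist_ok=True)
with tf.io.TFRecordWriter("saved_model/1/assets.extra/tf_serving_warmup_requests") as writer:
    for batch_size in ALLOWED_BATCH_SIZES:
        request = predict_pb2.PredictRequest()
        request.model_spec.name = MODEL_NAME
        request.model_spec.signature_name = "serving_default"
        # Dummy input with a hypothetical feature size of 128.
        request.inputs[INPUT_NAME].CopyFrom(
            tf.make_tensor_proto([[0.0] * 128] * batch_size))
        log = prediction_log_pb2.PredictionLog(
            predict_log=prediction_log_pb2.PredictLog(request=request))
        writer.write(log.SerializeToString())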

Related

Does Tensorflow automatically use multiple CPUs?

I have written some code that does inference with Tensorflow's C API (CPU only). It runs on a cluster node where I have access to 24 CPUs and 1 GPU. I do not make use of the GPU, as I will need to run the task CPU-only later on.
Somehow, every time I call the Tensorflow code from the other program (OpenFOAM), Tensorflow seems to run parallelized across all CPUs, even though I have not done anything to cause this behavior. Now I would like to know whether Tensorflow does this parallelization by default?
Greetings and thanks in advance!
I am not sure how you are using tensorflow, but a typical TensorFlow training job has an input pipeline which can be thought of as an ETL process. The main activities involved are:
Extract: Read data from persistent storage
Transform: Use CPU cores to parse and perform preprocessing operations on the data such as image decompression, data augmentation transformations (such as random crop, flips, and color distortions), shuffling, and batching.
Load: Load the transformed data onto the accelerator device(s) (for example, GPU(s) or TPU(s)) that execute the machine learning model.
CPUs are generally used during the data transformation: the input elements are preprocessed there, and to improve performance this preprocessing is parallelized across multiple CPU cores by default.
Tensorflow provides the tf.data API, which offers the tf.data.Dataset.map transformation; to control the parallelism, map provides the num_parallel_calls argument.
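For example, a minimal sketch of a parallelized map (the file pattern and preprocessing function are placeholders; on older TF versions use tf.data.experimental.AUTOTUNE instead of tf.data.AUTOTUNE):
import tensorflow as tf

def preprocess(path):
    # Decode and resize an image; stand-in for your real preprocessing.
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.image.resize(image, [224, 224])

dataset = (tf.data.Dataset.list_files("images/*.jpg")            # hypothetical input files
           .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel transform on CPU cores
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))                           # overlap preprocessing with model execution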
Read more on this from here:
https://www.tensorflow.org/guide/performance/datasets

CUDA-like optimization on Tensorflow-GPU

I am trying to implement a neural network architecture (Self Organizing Maps) for execution on GPUs. I am exploring TensorFlow for this task.
In TensorFlow, I noticed that you just have to specify gpu as the device to execute something on the GPU, like in this post. It seems that the way the operations are parallelized is decided by TF, and the user does not have the option to make optimization decisions. The "Optimizing for GPU" section of the TensorFlow Performance Guide also does not talk about explicit control over parallelizing operations.
My question is, can I do CUDA-like optimization in TensorFlow? More elaborately, is it possible to define which operation will be parallelized (like defining CUDA kernels for parallel operations)?
Yes, but you probably don't want to.
At the most extreme you can define your own op (as described here: https://www.tensorflow.org/extend/adding_an_op).
You can implement it as a GPU Kernel and write whatever you want.
You probably don't want to. The default operations are likely well optimized; I doubt you would be able to squeeze anything significant out of them.
You can decide the device placement for each individual operation (by using tf.device), but you will incur data-transfer overhead every time you switch devices. This should cover the cases where some operation is slow to execute on the GPU.
If you want to process part of the data on the CPU and part on the GPU, you can slice your data and do 2 operations (one on the CPU and one on the GPU), as in the sketch below.
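A minimal sketch of explicit device placement (TF 2.x eager style; the device names assume one GPU is visible):
import tensorflow as tf

x = tf.random.uniform([4096, 4096])

with tf.device("/GPU:0"):   # this matmul runs on the GPU
    y_gpu = tf.matmul(x, x)

with tf.device("/CPU:0"):   # this one runs on the CPU; the input is copied between devices
    y_cpu = tf.matmul(x, x)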
By default in TF graph mode (not in eager mode), all TF ops that do not depend on each other can run in parallel. There is a thread pool for that, and its size is controlled via inter_op_parallelism_threads. (See also.)
That does not necessarily mean that e.g. multiple matmuls will really run in parallel if they are internally synchronized. That is the case for most CUDA ops, as there is only a single CUDA stream. See here.
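If you want to control those thread pools explicitly, a sketch with the TF 2.x API (it must run before any ops execute; in TF 1.x the equivalent settings live in tf.ConfigProto):
import tensorflow as tf

# Number of independent ops that may run concurrently.
tf.config.threading.set_inter_op_parallelism_threads(4)
# Number of threads used inside a single op (e.g. one matmul).
tf.config.threading.set_intra_op_parallelism_threads(4)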

Debugging batching in Tensorflow Serving (no effect observed)

I have a small web server that gets input in terms of sentences and needs to return a model prediction using Tensorflow Serving. It's working all fine and well using our single GPU, but now I'd like to enable batching such that Tensorflow Serving waits a bit to group incoming sentences before processing them together in one batch on the GPU.
I'm using the predesigned server framework with the predesigned batching framework using the initial release of Tensorflow Serving. I'm enabling batching using the --batching flag and have set batch_timeout_micros = 10000 and max_batch_size = 1000. The logging does confirm that batching is enabled and that the GPU is being used.
However, when sending requests to the serving server, batching has minimal effect: the time taken scales almost linearly with the number of requests, so sending 50 requests at the same time takes nearly ten times as long as sending 5. Interestingly, the predict() function of the server is run once for each request (see here), which suggests to me that the batching is not being handled properly.
Am I missing something? How do I check what's wrong with the batching?
Note that this is different from How to do batching in Tensorflow Serving? as that question only examines how to send multiple requests from a single client, but not how to enable Tensorflow Serving's behind-the-scenes batching for multiple separate requests.
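For reference, this is roughly the kind of client I use to fire concurrent requests (written against the current tensorflow-serving-api gRPC client; the host, port, model name and input name are placeholders, and the initial release used a slightly different gRPC API):
from concurrent import futures
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")   # hypothetical host/port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

def send_one(sentence):
    request = predict_pb2.PredictRequest()
    request.model_spec.name = "my_model"             # hypothetical model name
    request.model_spec.signature_name = "serving_default"
    request.inputs["sentences"].CopyFrom(tf.make_tensor_proto([sentence]))
    return stub.Predict(request, 10.0)               # 10 second timeout

# Fire 50 requests at once; with server-side batching enabled they should be grouped.
with futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(send_one, ["an example sentence"] * 50))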
(I am not familiar with the server framework, but I'm quite familiar with HPC and with cuBLAS and cuDNN, the libraries TF uses to do its dot products and convolutions on GPU)
There are several issues that could cause disappointing performance scaling with the batch size.
I/O overhead, by which I mean network transfers, disk access (for large data), serialization, deserialization and similar cruft. These things tend to be linear in the size of the data.
To look into this overhead, I suggest you deploy 2 models: one that you actually need, and one that is trivial but uses the same I/O, then subtract the time needed by one from the other.
This time difference should be similar to the time the complex model takes when you run it directly, without the I/O overhead.
If the bottleneck is in the I/O, speeding up the GPU work is inconsequential.
Note that even if increasing the batch size makes the GPU faster, it might make the whole thing slower, because the GPU now has to wait for the I/O of the whole batch to finish to even start working.
cuDNN scaling: things like matmul need large batch sizes to achieve their optimal throughput, but convolutions using cuDNN might not (at least that hasn't been my experience, though this might depend on the cuDNN version and the GPU architecture).
RAM, GPU RAM, or PCIe bandwidth-limited models: If your model's bottleneck is in any of these, it probably won't benefit from bigger batch sizes.
The way to check this is to run your model directly (perhaps with mock input), compare the timing to the aforementioned time difference and plot it as a function of the batch size.
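A rough sketch of that direct measurement (the model path and input shape are placeholders):
import time
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model")             # hypothetical model path

for batch_size in (1, 5, 10, 50):
    x = np.random.rand(batch_size, 128).astype(np.float32)  # mock input, hypothetical shape
    model(x)                                                 # warm-up run
    start = time.perf_counter()
    for _ in range(20):
        model(x)
    per_batch = (time.perf_counter() - start) / 20
    print(f"batch={batch_size:3d}  {per_batch * 1e3:8.2f} ms/batch  "
          f"{per_batch / batch_size * 1e3:6.3f} ms/sample")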
By the way, as per the performance guide, one thing you could try is using the NCHW layout, if you are not already. There are other tips there.
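For the layout, in Keras layers that means data_format="channels_first" (NCHW in the low-level ops). A toy example, assuming you run on a GPU build, since channels_first convolutions are not implemented for all CPU kernels:
import tensorflow as tf

conv = tf.keras.layers.Conv2D(64, 3, padding="same", data_format="channels_first")
x = tf.random.uniform([8, 3, 224, 224])   # batch, channels, height, width
y = conv(x)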

htop cpu almost red when running tensorflow, predict is very slow

I'm using tensorflow to train a model and predict, and I use htop on Ubuntu to monitor CPU usage. Prediction is very slow, I just can't bear it. htop shows the CPU bars almost entirely red, which means almost all CPU time is spent in system/kernel threads, yet CPU usage is 0% before tensorflow starts.
I have not changed the thread num; I'm using tensorflow v0.11 on Ubuntu 14.04.
The problem is that the default glibc malloc is not efficient for small allocations. Also, because Google develops/tests tensorflow with tcmalloc internally, bad interactions with the regular malloc don't get ironed out. The solution is to run TensorFlow with tcmalloc:
sudo apt-get install google-perftools
export LD_PRELOAD="/usr/lib/libtcmalloc.so.4"
python ...
If you're looking for something to improve the inference performance, I could recommend trying OpenVINO. It improves your model's performance by converting it to Intermediate Representation (IR), conducting graph pruning, and fusing certain operations into others. Then, in runtime, it uses vectorization. OpenVINO is optimized for Intel hardware, although it should work with any CPU.
It's rather straightforward to convert the Tensorflow model to OpenVINO unless you have fancy custom layers. The full tutorial on how to do it can be found here. Some snippets are below.
Install OpenVINO
The easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case.
pip install openvino-dev[tensorflow]
Use Model Optimizer to convert the SavedModel
The Model Optimizer is a command-line tool that comes with the OpenVINO Development Package. It converts the Tensorflow model to IR, which is the default format for OpenVINO. You can also try FP16 precision, which should give you better performance without a significant accuracy drop (just change data_type). Run in the command line:
mo --saved_model_dir "model" --data_type FP32 --output_dir "model_ir"
Run the inference
The converted model can be loaded by the runtime and compiled for a specific device, e.g., CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what the best choice for you is, use AUTO. If you care about latency, I suggest adding a performance hint (as shown below) to use the device that fulfills your requirement. If you care about throughput, change the value to THROUGHPUT or CUMULATIVE_THROUGHPUT.
# Load the network (Core comes from the OpenVINO runtime)
from openvino.runtime import Core
ie = Core()
model_ir = ie.read_model(model="model_ir/model.xml")
compiled_model_ir = ie.compile_model(model=model_ir, device_name="AUTO", config={"PERFORMANCE_HINT":"LATENCY"})
# Get output layer
output_layer_ir = compiled_model_ir.output(0)
# Run inference on the input image (a preprocessed array matching the model's input shape)
result = compiled_model_ir([input_image])[output_layer_ir]
Disclaimer: I work on OpenVINO.

TensorFlow - GPU Acceleration only for training?

Will utilizing GPU acceleration with TensorFlow increase the speed of only the training of models, or will it also help improve speed while using the model on data?
Most guides only talk about utilizing GPU acceleration for training purposes.
Also, will it work with any of the TensorFlow models, even those run via shell scripts?
In addition, would it run on the shell scripts by default, or does it require explicit coding to make it work?
It will work for both, and yes, it should make using the models faster even when not training (unless the model is really simple and the overhead of placing it on the GPU outweighs the performance gain). That said, I think a GPU is less necessary for just evaluating the model. During training the data is often batched together, so that each training step runs the model on many examples at once; the gradients also need to be calculated, which takes a lot of compute time and memory, and the weights need to be updated. A simple forward pass on its own is therefore a lot faster, and I really think you would see a benefit if you needed to make a whole bunch of forward passes at once.
As for running tensorflow models through shell scripts, I would assume that if they train on the GPU they will also run on the GPU.
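If you want to confirm where the ops actually run, a small sketch that logs device placement during a forward pass (the model path and input shape are placeholders):
import tensorflow as tf

tf.debugging.set_log_device_placement(True)        # prints the device chosen for each op
print(tf.config.list_physical_devices("GPU"))      # shows whether a GPU is visible at all

model = tf.keras.models.load_model("my_model")     # hypothetical model path
model(tf.random.uniform([1, 128]))                 # single forward pass with a dummy input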