TensorFlow GPU error: Dst tensor is not initialized

This is my first time training a model on a GPU. I am using TensorFlow, and I am getting the following error: InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run AssignVariableOp: Dst tensor is not initialized. [Op:AssignVariableOp]
I have tried solutions like reducing the batch size and using tf-nightly, but to no avail. I am using an Nvidia GeForce GTX 1080 with 8 GB of memory. I am trying to train an image classification model using a Keras Application (Xception).
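This error usually means the GPU ran out of memory while copying the input data from host to device. Below is a minimal sketch of two commonly suggested mitigations, neither of which is from the original post: enable memory growth so TensorFlow does not grab all 8 GB up front, and stream the data through a batched tf.data pipeline instead of handing Keras one huge array (the array shapes and class count are assumptions for illustration).

import numpy as np
import tensorflow as tf

# Allow GPU memory to grow on demand instead of pre-allocating the whole card.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Dummy stand-ins for the real images and labels (assumed shapes).
train_images = np.random.rand(256, 299, 299, 3).astype("float32")
train_labels = np.random.randint(0, 10, size=(256,))

# Stream small batches to the GPU rather than copying the full dataset at once.
dataset = (tf.data.Dataset.from_tensor_slices((train_images, train_labels))
           .shuffle(256)
           .batch(16)
           .prefetch(tf.data.AUTOTUNE))

# weights=None avoids the ImageNet download; the real code would use pretrained weights.
base = tf.keras.applications.Xception(weights=None, include_top=False,
                                      pooling="avg", input_shape=(299, 299, 3))
model = tf.keras.Sequential([base, tf.keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(dataset, epochs=1)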

Related

CUDNN_STATUS_EXECUTION_FAILED error in tensorflow model on GPU

I am trying to compile a TensorFlow model using the UNET architecture (OS: Rocky Linux 8.6, GPU: Quadro P620, TensorFlow 2.11.0, CUDA 11.6). The model works fine on the CPU and on Google Colab, but when I try to run it on the GPU, the following error occurs during model.fit:
CUDNN_STATUS_EXECUTION_FAILED
in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5750): 'cudnnBatchNormalizationForwardTrainingEx( cudnn.handle(), mode,
bn_ops, &one, &zero, x_descriptor.handle(), x.opaque(),
x_descriptor.handle(), side_input.opaque(), x_descriptor.handle(),
y->opaque(), scale_offset_descriptor.handle(), scale.opaque(),
offset.opaque(), exponential_average_factor, batch_mean_opaque,
batch_var_opaque, epsilon, saved_mean->opaque(),
saved_inv_var->opaque(), activation_desc.handle(), workspace.opaque(),
workspace.size(), reserve_space.opaque(), reserve_space.size())'
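Not part of the original post, but a common first check for CUDNN_STATUS_EXECUTION_FAILED is whether the CUDA/cuDNN versions this TensorFlow 2.11 binary was built against match what the system actually provides (CUDA 11.6 here), and whether the small Quadro P620 is simply running out of memory during the cuDNN batch-norm call. A minimal sketch:

import tensorflow as tf

# Versions this TensorFlow binary was compiled against (TF 2.x only).
build = tf.sysconfig.get_build_info()
print("built for CUDA:", build.get("cuda_version"),
      "cuDNN:", build.get("cudnn_version"))

# Let the GPU allocate memory on demand instead of reserving it all at startup.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)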

Tensorflow not using multiple GPUs - getting OOM

I'm running into OOM on a multi-GPU machine because TF 2.3 seems to be allocating a tensor on only one GPU.
tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at conv_ops.cc:539 :
Resource exhausted: OOM when allocating tensor with shape[20532,64,48,32]
and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc.
But tensorflow does recognize multiple GPUs when I run my code:
Adding visible gpu devices: 0, 1, 2
Is there anything else I need to do to have TF use all GPUs?
The direct answer is yes, you do need to do more to get TF to use multiple GPUs. You should refer to this guide, but the tl;dr is:
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    ...
https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_tfkerasmodelfit
But in your case, something else is happening. While this one tensor may be triggering the OOM, it's likely because a few previous large tensors were allocated.
The first dimension, your batch size, is 20532, which is really big. Since the factorization of that is 2**2 × 3 × 29 × 59, I'm going to guess you are working with CHW format and your source image was 3x64x128 which got trimmed after a few convolutions. I'd suspect an inadvertent broadcast. Print a model.summary() and then review the sizes of the tensors coming out of each layer. You may also need to look at your batching.
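For completeness, here is a minimal sketch of how the MirroredStrategy scope from the guide combines with a Keras model and the model.summary() check suggested above (the toy model and shapes are illustrative assumptions, not the asker's code):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # picks up GPUs 0, 1, 2 by default
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and compile inside the scope so the variables are mirrored on every GPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, activation="relu", input_shape=(64, 48, 32)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

model.summary()  # inspect per-layer output shapes to spot an unexpected broadcast
# model.fit(dataset, ...)  # each global batch is then split across the replicas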

OOM - cannot run StyleGAN2 despite reducing batch size

I am trying to run StyleGAN2 using a cluster equipped with eight GPUs (NVIDIA GeForce RTX 2080). At present, I am using the following configuration in training_loop.py:
minibatch_size_dict = {4: 512, 8: 256, 16: 128, 32: 64, 64: 32}, # Resolution-specific overrides.
minibatch_gpu_base = 8, # Number of samples processed at a time by one GPU.
minibatch_gpu_dict = {}, # Resolution-specific overrides.
G_lrate_base = 0.001, # Learning rate for the generator.
G_lrate_dict = {}, # Resolution-specific overrides.
D_lrate_base = 0.001, # Learning rate for the discriminator.
D_lrate_dict = {}, # Resolution-specific overrides.
lrate_rampup_kimg = 0, # Duration of learning rate ramp-up.
tick_kimg_base = 4, # Default interval of progress snapshots.
tick_kimg_dict = {4:10, 8:10, 16:10, 32:10, 64:10, 128:8, 256:6, 512:4}): # Resolution-specific overrides.
I am training with a set of 512x512 pixel images. After a couple of iterations I get the error message reported below, and it looks like the script stops running (watching nvidia-smi, both the GPU temperature and fan activity decrease). I have already reduced the batch size, but it looks like the problem is somewhere else. Do you have any tips on how to fix this?
I was able to run StyleGAN with the same dataset. The paper says StyleGAN2 should be lighter, so I am a bit surprised.
Here is the error message I get:
2019-12-16 18:22:54.909009: E tensorflow/stream_executor/cuda/cuda_driver.cc:828] failed to allocate 334.11M (350338048 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-12-16 18:22:54.909087: W tensorflow/core/common_runtime/bfc_allocator.cc:314] Allocator (GPU_0_bfc) ran out of memory trying to allocate 129.00MiB (rounded to 135268352). Current allocation summary follows.
2019-12-16 18:22:54.918750: W tensorflow/core/common_runtime/bfc_allocator.cc:319] **_***************************_*****x****x******xx***_******************************_***************
2019-12-16 18:22:54.918808: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at conv_grad_input_ops.cc:903 : Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
The config-f model for StyleGAN2 is actually bigger than StyleGAN1's. Try a configuration that consumes less VRAM, such as config-e. You can change the model configuration by passing a flag on your python command line, as shown below: https://github.com/NVlabs/stylegan2/blob/master/run_training.py#L144
In my case, I am able to train StyleGAN2 with config-e on two RTX 2080 Ti cards.
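For illustration, such an invocation might look like the line below (the dataset name and data directory are placeholders, and the flag names assume the standard StyleGAN2 run_training.py):

python run_training.py --num-gpus=8 --data-dir=~/datasets --dataset=my_dataset --config=config-e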
One or more high-end NVIDIA GPUs, NVIDIA drivers, CUDA 10.0 toolkit
and cuDNN 7.5. To reproduce the results reported in the paper, you
need an NVIDIA GPU with at least 16 GB of DRAM.
Your NVIDIA GeForce RTX 2080 card has 8 GB of VRAM (the 2080 Ti has 11 GB), but I guess you're saying you have eight of them? I don't think TensorFlow is set up for that kind of parallelism out of the box.

tflite_convert ValueError Unknown layer BatchNorm

We are using
Tensorflow 1.14
Keras 2.1.2
GPU: GeForce GTX 1660 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.86
for custom object detection using Mask-RCNN from this repo https://github.com/matterport/Mask_RCNN.
We have trained a model successfully, and it is detecting objects on our desktop. We now want to generate a tflite model for mobile use, but we are facing the error below:
ValueError: Unknown layer BatchNorm
Please note that we created the weights and the Keras .h5 model with different scripts.
We have tried the following code to convert the Keras model to tflite:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file('Save-Model8.h5')
tfmodel = converter.convert()
open("model.tflite", "wb").write(tfmodel)

Tensorflow GAN estimator hang while evaluating

I implemented a GAN in the TensorFlow Estimator format. Here's the complete code in a gist.
The model can be trained normally. However, it seems to hang at model.evaluate forever. The log after training looks like this:
INFO:tensorflow:Starting evaluation at 2018-12-03-02:19:06
INFO:tensorflow:Graph was finalized.
2018-12-03 02:19:06.956750: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-12-03 02:19:06.956781: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-03 02:19:06.956786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2018-12-03 02:19:06.956790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2018-12-03 02:19:06.956912: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10464 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from /tensorlog/wad/acgan/a51fbd6/model.ckpt-10002
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
If I use tf.estimator.train_and_evaluate, the evaluated accuracy will always be 0.5.
I've already checked my tfrecords file: it's not empty, and the images and labels can be read without problems. I also tried using the same tfrecords file for both training and evaluation, but still got the same result.
It seems to me that the TensorFlow model may have a problem loading the GAN's weights from the checkpoint. If that's true, how do I solve it?
It turns out it's because the training parameter of dropout and batch_normalization prevents the weights from being restored.
Fixing the value of training to either True or False solves the problem.
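A minimal sketch of what that fix can look like with the TF 1.x layer APIs an Estimator model_fn typically uses (the layer sizes and names are illustrative, not taken from the gist):

import tensorflow as tf

def discriminator(x, mode):
    # Derive `training` from the Estimator mode as a plain Python bool so that
    # batch norm's moving statistics and dropout behave consistently and the
    # checkpoint restores cleanly during evaluation.
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)
    net = tf.layers.dense(x, 128)
    net = tf.layers.batch_normalization(net, training=is_training)
    net = tf.nn.leaky_relu(net)
    net = tf.layers.dropout(net, rate=0.3, training=is_training)
    return tf.layers.dense(net, 1)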