FailedPreconditionError: Error while reading resource variable _AnonymousVar54 from Container: localhost - google-colaboratory

I am running this code on Google Colab with my own dataset, X_train (which I uploaded). The image size is 15x15, which I changed in the code where required, and the save location for the output images is changed as well. The other parameters I changed are:
epochs=40000, batch_size=10, sample_interval=2000
I am getting the following error, and only one image is saved:
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py:297: UserWarning: Discrepancy between trainable weights and collected trainable weights, did you set `model.trainable` without calling `model.compile` after ?
'Discrepancy between trainable weights and collected trainable'
0 [D loss: 0.999931] [G loss: 1.000150]
---------------------------------------------------------------------------
FailedPreconditionError Traceback (most recent call last)
<ipython-input-4-caa940a5773c> in <module>()
2 if __name__ == '__main__':
3 wgan = WGAN()
----> 4 wgan.train(epochs=40000, batch_size=10, sample_interval=2000)
7 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
FailedPreconditionError: Error while reading resource variable _AnonymousVar54 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar54/N10tensorflow3VarE does not exist.
[[node mul_12/ReadVariableOp (defined at /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_4664]
Function call stack:
keras_scratch_graph
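The UserWarning at the top is often the clue in Keras GAN training loops: `trainable` was toggled without recompiling, so Keras's collected weight lists go stale. A minimal sketch of the compile-after-toggling pattern, assuming a standard critic/generator WGAN setup (`build_critic`, `build_generator`, and `wasserstein_loss` are hypothetical placeholders, not the asker's code):

import tensorflow as tf

# Hedged sketch, not the asker's actual code: recompile after every change
# to `trainable` so Keras rebuilds its collected trainable-weight lists.
critic = build_critic()        # hypothetical builder
generator = build_generator()  # hypothetical builder
critic.compile(loss=wasserstein_loss, optimizer='rmsprop')

critic.trainable = False       # freeze the critic inside the combined model
combined = tf.keras.Sequential([generator, critic])
combined.compile(loss=wasserstein_loss, optimizer='rmsprop')  # compile AFTER toggling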

Related

Data generator for image classification

I'm trying to create a data generator for my CNN project (using a sequential model in Keras). Due to the large amount of data, I need to keep streaming data into model training so I don't get an OOM error on RAM. However, I'm having some trouble creating the generator. The generator should take in batch_size images and create X augmented images per original. I then want to pool the augmented images with the originals, e.g. 30 original pictures with 5 augmented images per picture = 30 original + 150 augmented = 180 total pictures in one pool. From these 180 pictures I then take batch_size-sized slices, say 30, which gives 6 steps with 30 images per step. Then I want to generate a new pool of images and repeat these steps for X epochs.
Code:
class customDataGen(tf.keras.utils.Sequence):

    data_holder_x = []
    data_holder_y = []

    ## leave out img_gen, it does not do anything right now
    def __init__(self, X, y, img_gen, batch_size, shuffle=True):
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.img_gen = img_gen
        nr1 = 5 * self.batch_size  ## the augmentation generates 5 images per image, so 5 is just hard-coded for now
        nr2 = self.batch_size      ## this is the original pictures
        self.n = nr1 + nr2
        self.indices = list(range(0, self.n))
        self.__get_data(index=1)   ## just generating an instance of get_data

    def on_epoch_end(self):
        self.index = np.arange(len(self.indices))
        if self.shuffle == True:
            np.random.shuffle(self.index)

    def __get_data(self, index):
        print("get_data startad")
        aug_img = img_aug(self.X[index*self.batch_size:(index+1)*self.batch_size],
                          self.y[index*self.batch_size:(index+1)*self.batch_size])
        X = list(self.X[index*self.batch_size:(index+1)*self.batch_size])
        y = list(self.y[index*self.batch_size:(index+1)*self.batch_size])
        X.extend(aug_img[0])
        y.extend(aug_img[1])
        customDataGen.data_holder_x.append(X)
        customDataGen.data_holder_y.append(y)

    def __data_holder(self, index):
        container_x = []
        container_y = []
        print("__data_holder startad")
        if len(customDataGen.data_holder_x[0]) == 0:
            self.__get_data(index)
            container_x.append(customDataGen.data_holder_x[0][:self.batch_size])
            container_y.append(customDataGen.data_holder_y[0][:self.batch_size])
            del customDataGen.data_holder_x[0][:self.batch_size], customDataGen.data_holder_y[0][:self.batch_size]
        else:
            container_x.append(customDataGen.data_holder_x[0][:self.batch_size])
            container_y.append(customDataGen.data_holder_y[0][:self.batch_size])
            del customDataGen.data_holder_x[0][:self.batch_size], customDataGen.data_holder_y[0][:self.batch_size]
        # X = np.array(container_x[0][0])
        # y = np.array(container_y[0][0])
        print("remaining data of data_holder_x", len(customDataGen.data_holder_x[0]))
        return container_x[0], container_y[0]

    def __getitem__(self, index):
        container_x, container_y = self.__data_holder(index)
        print("get_item startad")
        X = tf.convert_to_tensor(container_x)
        y = tf.convert_to_tensor(container_y)
        return X, y

    def __len__(self):
        return (self.n) // self.batch_size
My problem now is that `__getitem__` seems to be called by model.fit() three times before the epoch starts:
__data_holder startad
remaining data of data_holder_x 160
get_item startad
Epoch 1/2
__data_holder startad
remaining data of data_holder_x 128
get_item startad
__data_holder startad
remaining data of data_holder_x 96
get_item startad
1/6 [====>.........................] - ETA: 15s - loss: 1.7893 - accuracy: 0.1562__data_holder startad
remaining data of data_holder_x 64
get_item startad
2/6 [=========>....................] - ETA: 6s - loss: 1.7821 - accuracy: 0.2344 __data_holder startad
remaining data of data_holder_x 32
get_item startad
3/6 [==============>...............] - ETA: 4s - loss: 1.7879 - accuracy: 0.1562__data_holder startad
remaining data of data_holder_x 0
get_item startad
4/6 [===================>..........] - ETA: 3s - loss: 1.7878 - accuracy: 0.1953__data_holder startad
get_data startad
remaining data of data_holder_x 0
get_item startad
5/6 [========================>.....] - ETA: 1s - loss: 1.7888 - accuracy: 0.1875
Then the error occurs
2022-09-30 17:44:31.255235: W tensorflow/core/framework/op_kernel.cc:1733] INVALID_ARGUMENT: TypeError: `generator` yielded an element of shape (0,) where an element of shape (None, None, None, None) was expected.
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/ops/script_ops.py", line 270, in __call__
ret = func(*args)
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1073, in generator_py_func
raise TypeError(
TypeError: `generator` yielded an element of shape (0,) where an element of shape (None, None, None, None) was expected.
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
Input In [298], in <cell line: 1>()
----> 1 model.fit(training,
2 validation_data=validation,
3 epochs=2, callbacks = [checkpoint])
File /usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
File /usr/local/lib/python3.9/dist-packages/tensorflow/python/eager/execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
InvalidArgumentError: Graph execution error:
2 root error(s) found.
(0) INVALID_ARGUMENT: TypeError: `generator` yielded an element of shape (0,) where an element of shape (None, None, None, None) was expected.
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/ops/script_ops.py", line 270, in __call__
ret = func(*args)
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1073, in generator_py_func
raise TypeError(
TypeError: `generator` yielded an element of shape (0,) where an element of shape (None, None, None, None) was expected.
[[{{node PyFunc}}]]
[[IteratorGetNext]]
[[IteratorGetNext/_2]]
(1) INVALID_ARGUMENT: TypeError: `generator` yielded an element of shape (0,) where an element of shape (None, None, None, None) was expected.
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/ops/script_ops.py", line 270, in __call__
ret = func(*args)
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1073, in generator_py_func
raise TypeError(
TypeError: `generator` yielded an element of shape (0,) where an element of shape (None, None, None, None) was expected.
[[{{node PyFunc}}]]
[[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_5083]
I'm new to both Python and TensorFlow, so any help is appreciated.
Thanks,
Pythonnorra
With data augmentation, the Sequence class is not really simple to use. It seems to call `__getitem__` while building the graph, before the first epoch.
A good and interesting alternative to the Sequence class is TensorFlow's tf.data.Dataset.
Here is some dummy code that does the job:
import tensorflow as tf

mydataset = tf.data.Dataset.from_tensor_slices((X, y))

def aug_data(x, y):
    # dummy data augmentation: five augmented copies per image
    x_aug = tf.repeat(x, 5, axis=0)
    y_aug = tf.repeat(y, 5, axis=0)
    # concatenate with the original data
    x_aug = tf.concat([x, x_aug], axis=0)
    y_aug = tf.concat([y, y_aug], axis=0)
    return x_aug, y_aug

# batch first, then apply the augmentation to each batch
# (mapping before batching would repeat along each image's first axis instead)
mydataset = mydataset.batch(BATCH_SIZE).map(aug_data)
It applies the data augmentation to each new batch individually.
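A possible follow-up usage (hedged; `model`, `validation`, and `checkpoint` are taken from the question's own fit call): the dataset plugs straight into fit in place of the Sequence subclass.

# The tf.data pipeline replaces the Sequence instance directly.
model.fit(mydataset,
          validation_data=validation,
          epochs=2, callbacks=[checkpoint])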

DirectML InvalidArgumentError: Graph execution error: No OpKernel was registered to support Op 'CudnnRNN'

I was trying to use DirectML so that TensorFlow can use my AMD RX 580 graphics card, but I'm having a really hard time pulling this off.
I'm getting this error:
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
Input In [73], in <cell line: 24>()
23 # fit network
24 for i in range(n_epoch):
---> 25 model_LSTM_peso.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
26 model_LSTM_peso.reset_states()
File ~\anaconda3\envs\tfdml_plugin\lib\site-packages\keras\utils\traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
File ~\anaconda3\envs\tfdml_plugin\lib\site-packages\tensorflow\python\eager\execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
InvalidArgumentError: Graph execution error:
No OpKernel was registered to support Op 'CudnnRNN' used by {{node CudnnRNN}} with these attrs: [seed=0, dropout=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", is_training=true, seed2=0]
Registered devices: [CPU, GPU]
Registered kernels:
<no registered kernels>
[[CudnnRNN]]
[[sequential_11/lstm_3/PartitionedCall]] [Op:__inference_train_function_491977]
I have other code that fits a model for embeddings, and the rest of the code runs fine, so the problem seems to occur only when using the LSTM:
from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

values = input_emb_peso
X, y = values, input_peso
X = X.reshape(X.shape[0], X.shape[1], 1)

# configure network
n_batch = 1
n_epoch = 1  # X.shape[0] # len(grupo_carrera_train) # 19485 training races
n_neurons = 80

# design network
model_LSTM_peso = Sequential()
model_LSTM_peso.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model_LSTM_peso.add(Dense(80))
model_LSTM_peso.add(Dense(80))
model_LSTM_peso.add(Dense(1))
model_LSTM_peso.compile(loss='mean_squared_error', optimizer='adam')

# fit network
for i in range(n_epoch):
    model_LSTM_peso.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model_LSTM_peso.reset_states()
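A possible workaround, sketched as an assumption (it is not from the original thread): Keras only lowers an LSTM to the fused 'CudnnRNN' op when all cuDNN conditions hold (tanh activation, sigmoid recurrent activation, no recurrent dropout, unroll=False), so breaking one condition, for example with `unroll=True`, should make Keras fall back to the generic kernel that non-CUDA devices can run.

# Hedged sketch (assumption): force the generic LSTM kernel instead of the
# fused CudnnRNN op by breaking one of the cuDNN conditions. unroll=True
# needs a fixed timestep count, which batch_input_shape already supplies,
# and uses more memory on long sequences.
model_LSTM_peso.add(LSTM(n_neurons,
                         batch_input_shape=(n_batch, X.shape[1], X.shape[2]),
                         stateful=True,
                         unroll=True))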

How to fix "pop from empty list" error while using Keras tuner search method with TPU in google colab?

I was previously able to run the Keras Tuner search method on my model with the GPU runtime of Google Colab, but when I switched to the TPU runtime I got the error below. I haven't worked out how to give the TPU runtime access to Google Cloud Storage so that the checkpoint folder Keras Tuner writes its model checkpoints to can live there. Please help me resolve this issue.
My code:
def post_se(hp):
    ip = Input(shape=(6, 128))
    x = Masking()(ip)
    x = LSTM(units=hp.Choice('lstm_1', values=[8, 16, 32, 64, 128, 256, 512]), return_sequences=True)(x)
    x = Dropout(hp.Choice(name='Dropout', values=[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]))(x)
    x = LSTM(units=hp.Choice('lstm_2', values=[8, 16, 32, 64, 128, 256, 512]))(x)
    x = Dropout(hp.Choice(name='Dropout_2', values=[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]))(x)

    y = Permute((2, 1))(ip)
    y = Conv1D(hp.Choice('conv_1_filter', values=[32, 64, 128, 256, 512]),
               hp.Choice(name='conv_1_filter_size', values=[3, 5, 7, 8, 9]),
               padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = squeeze_excite_block(y)
    y = Conv1D(hp.Choice('conv_2_filter', values=[32, 64, 128, 256, 512]),
               hp.Choice(name='conv_2_filter_size', values=[3, 5, 7, 8, 9]),
               padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = squeeze_excite_block(y)
    y = Conv1D(hp.Choice('conv_3_filter', values=[32, 64, 128, 256, 512]),
               hp.Choice(name='conv_3_filter_size', values=[3, 5, 7, 8, 9]),
               padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = GlobalAveragePooling1D()(y)

    x = concatenate([x, y])
    # batch_size = hp.Choice('batch_size', values=[32, 64, 128, 256, 512, 1024, 2048, 4096])
    out = Dense(num_classes, activation='softmax')(x)
    model = Model(ip, out)
    if gpu:
        opt = keras.optimizers.Adam(learning_rate=0.001)
    if tpu:
        opt = keras.optimizers.Adam(learning_rate=8 * 0.001)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    # model.summary()
    return model

if gpu:
    tuner = kt.tuners.BayesianOptimization(post_se,
                                           objective='val_accuracy',
                                           max_trials=30,
                                           seed=42,
                                           project_name='Model_gpu')
    # Will stop training if the "val_loss" hasn't improved in 30 epochs.
    tuner.search(X_train, train_label, epochs=200, validation_split=0.1, shuffle=True,
                 callbacks=[tensorflow.keras.callbacks.EarlyStopping('val_loss', patience=30)])

if tpu:
    print("TPU")
    with strategy.scope():
        tuner = kt.tuners.BayesianOptimization(post_se,
                                               objective='val_accuracy',
                                               max_trials=30,
                                               seed=42,
                                               project_name='Model_tpu')
        # Will stop training if the "val_loss" hasn't improved in 30 epochs.
        tuner.search(X_train, train_label, epochs=200, validation_split=0.1, shuffle=True,
                     callbacks=[tensorflow.keras.callbacks.EarlyStopping('val_loss', patience=30)])
The error log
---------------------------------------------------------------------------
UnimplementedError Traceback (most recent call last)
/usr/lib/python3.7/contextlib.py in __exit__(self, type, value, traceback)
129 try:
--> 130 self.gen.throw(type, value, traceback)
131 except StopIteration as exc:
10 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in resource_creator_scope(resource_type, resource_creator)
2957 resource_creator):
-> 2958 yield
2959
<ipython-input-15-24c1e1bb603d> in <module>()
17 # Will stop training if the "val_loss" hasn't improved in 30 epochs.
---> 18 tuner.search(X_train, train_label, epochs=200, validation_split=0.1, shuffle=True, callbacks=[tensorflow.keras.callbacks.EarlyStopping('val_loss', patience=30)])
/usr/local/lib/python3.7/dist-packages/keras_tuner/engine/base_tuner.py in search(self, *fit_args, **fit_kwargs)
178 self.on_trial_begin(trial)
--> 179 results = self.run_trial(trial, *fit_args, **fit_kwargs)
180 # `results` is None indicates user updated oracle in `run_trial()`.
/usr/local/lib/python3.7/dist-packages/keras_tuner/engine/tuner.py in run_trial(self, trial, *args, **kwargs)
303 copied_kwargs["callbacks"] = callbacks
--> 304 obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
305
/usr/local/lib/python3.7/dist-packages/keras_tuner/engine/tuner.py in _build_and_fit_model(self, trial, *args, **kwargs)
233 model = self._try_build(hp)
--> 234 return self.hypermodel.fit(hp, model, *args, **kwargs)
235
/usr/local/lib/python3.7/dist-packages/keras_tuner/engine/hypermodel.py in fit(self, hp, model, *args, **kwargs)
136 """
--> 137 return model.fit(*args, **kwargs)
138
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
1116 except core._NotOkStatusException as e: # pylint: disable=protected-access
-> 1117 raise core._status_to_exception(e) from None # pylint: disable=protected-access
1118
UnimplementedError: File system scheme '[local]' not implemented (file: './untitled_project/trial_78ed6883514d67dc6222064095c134cb/checkpoints/epoch_0/checkpoint_temp/part-00000-of-00001')
Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.
During handling of the above exception, another exception occurred:
IndexError Traceback (most recent call last)
<ipython-input-15-24c1e1bb603d> in <module>()
16 seed=42)
17 # Will stop training if the "val_loss" hasn't improved in 30 epochs.
---> 18 tuner.search(X_train, train_label, epochs=200, validation_split=0.1, shuffle=True, callbacks=[tensorflow.keras.callbacks.EarlyStopping('val_loss', patience=30)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py in __exit__(self, exception_type, exception_value, traceback)
454 "tf.distribute.set_strategy() out of `with` scope."),
455 e)
--> 456 _pop_per_thread_mode()
457
458
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribution_strategy_context.py in _pop_per_thread_mode()
64
65 def _pop_per_thread_mode():
---> 66 ops.get_default_graph()._distribution_strategy_stack.pop(-1) # pylint: disable=protected-access
67
68
IndexError: pop from empty list
For some extra info, I am attaching my code in this post.
This is your error:
UnimplementedError: File system scheme '[local]' not implemented (file: './untitled_project/trial_78ed6883514d67dc6222064095c134cb/checkpoints/epoch_0/checkpoint_temp/part-00000-of-00001')
See https://stackoverflow.com/a/62881833/14043558 for a solution.
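In short (a hedged sketch of the linked fix; the bucket name is hypothetical): a Cloud TPU can only read and write checkpoints through Google Cloud Storage, so point the tuner's working directory at a gs:// bucket instead of the local ./untitled_project folder.

tuner = kt.tuners.BayesianOptimization(
    post_se,
    objective='val_accuracy',
    max_trials=30,
    seed=42,
    directory='gs://your-bucket/keras-tuner',  # hypothetical GCS bucket the TPU can reach
    project_name='Model_tpu')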

'Error While Encoding with Hub.KerasLayer' while using TFF

An error is generated while training a federated model that uses hub.KerasLayer. The details of the error and the stack trace are given below. The complete code is available as a gist: https://gist.github.com/aksingh2411/60796ee58c88e0c3f074c8909b17b5a1. Help and suggestions in this regard would be appreciated. Thanks.
from tensorflow import keras

def create_keras_model():
    encoder = hub.load("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1")
    return tf.keras.models.Sequential([
        hub.KerasLayer(encoder, input_shape=[], dtype=tf.string, trainable=True),
        keras.layers.Dense(32, activation='relu'),
        keras.layers.Dense(16, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])

def model_fn():
    # We _must_ create a new model here, and _not_ capture it from an external
    # scope. TFF will call this within different graph contexts.
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=preprocessed_example_dataset.element_spec,
        loss=tf.keras.losses.BinaryCrossentropy(),
        metrics=[tf.keras.metrics.Accuracy()])

# Building the Federated Averaging process
iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))

str(iterative_process.initialize.type_signature)
state = iterative_process.initialize()
state, metrics = iterative_process.next(state, federated_train_data)
print('round 1, metrics={}'.format(metrics))
UnimplementedError Traceback (most recent call last)
<ipython-input-80-39d62fa827ea> in <module>()
----> 1 state, metrics = iterative_process.next(state, federated_train_data)
2 print('round 1, metrics={}'.format(metrics))
119 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in
quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
UnimplementedError: Cast string to float is not supported
[[{{node StatefulPartitionedCall_1/StatefulPartitionedCall/Cast_1}}]]
[[StatefulPartitionedCall_1]]
[[import/StatefulPartitionedCall_3/ReduceDataset]] [Op:__inference_wrapped_function_65986]
Function call stack:
wrapped_function -> wrapped_function -> wrapped_function
The issue has now been resolved. The error was thrown because 'label' was being passed as tf.string instead of tf.int32. Explicit casting resolved this.
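A minimal sketch of that kind of cast in the client preprocessing (hedged; the field names and the exact conversion depend on how the dataset is built, which lives only in the linked gist):

def preprocess(example):
    text = example['text']  # stays tf.string for the hub encoder
    # if the label arrives as a numeric string such as b'1',
    # convert it before it reaches the loss/metrics:
    label = tf.cast(tf.strings.to_number(example['label']), tf.int32)
    return text, label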

sampling from MultivariateNormalDiag with placeholders

I'm running TensorFlow 2.1 and tensorflow_probability 0.9. I'd like to parameterize a MultivariateNormalDiag distribution with placeholders and sample from it. Here's what I've tried:
import tensorflow as tf
import tensorflow_probability as tfp

@tf.function()
def sample_vae(dist):
    return dist.sample()

vae_mu = tf.keras.layers.Input(shape=(5), dtype=tf.float16)
vae_logvar = tf.keras.layers.Input(shape=(5), dtype=tf.float16)
dist = tfp.distributions.MultivariateNormalDiag(loc=vae_mu, scale_diag=tf.exp(vae_logvar))
z = sample_vae(dist)
The above gives me the following error
TypeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
60 op_name, inputs, attrs,
---> 61 num_outputs)
62 except core._NotOkStatusException as e:
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
        added = my_constant * 2
The graph tensor has name: BroadcastArgs_3:0
During handling of the above exception, another exception occurred:
_SymbolicException Traceback (most recent call last)
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
73 raise core._SymbolicException(
74 "Inputs to eager execution function cannot be Keras symbolic "
---> 75 "tensors, but found {}".format(keras_symbolic_tensors))
76 raise e
77 # pylint: enable=protected-access
_SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'BroadcastArgs_3:0' shape=(1,) dtype=int32>, <tf.Tensor 'Exp_3:0' shape=(None, 5) dtype=float16>, <tf.Tensor 'input_7:0' shape=(None, 5) dtype=float16>]
I was able to get around this using DistributionLambda
vae_mu = tf.keras.layers.Input(shape=(1, 5), dtype=tf.float16)
vae_logvar = tf.keras.layers.Input(shape=(1, 5), dtype=tf.float16)
mu_logvar = tf.concat([vae_mu, vae_logvar], axis=1)
l = vae_mu.shape[1]
dist = tfp.layers.DistributionLambda(
    lambda theta: tfp.distributions.MultivariateNormalDiag(loc=theta[:, :l],
                                                           scale_diag=theta[:, l:]))
z = dist(mu_logvar)
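A hedged usage sketch under the same setup (the wrapper model and the dummy arrays are hypothetical): wrapping the DistributionLambda output in a Model lets concrete arrays flow through, so sampling happens on real tensors rather than symbolic ones.

import numpy as np

# Hypothetical wrapper: calling the model with concrete arrays draws samples eagerly.
sampler = tf.keras.Model(inputs=[vae_mu, vae_logvar], outputs=z)
mu = np.zeros((2, 1, 5), dtype=np.float16)
logvar = np.zeros((2, 1, 5), dtype=np.float16)
samples = sampler([mu, logvar])  # one draw per batch element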