ValueError: Unknown initializer with clone_model and custom initialization - tensorflow

I created a minimal working example. You can reproduce it here.
I created my own Initializer called ComplexGlorotUniform(Initializer).
Then I defined a dispatcher: init_dispatcher = {"complex_glorot_uniform": ComplexGlorotUniform()}
Finally, I did:
from tensorflow.keras.utils import get_custom_objects
get_custom_objects().update(init_dispatcher)
I generated a sequential model of Dense layers using kernel_initializer="complex_glorot_uniform".
Now when using tf.keras.models.clone_model I get the error:
ValueError: Unknown initializer: ComplexGlorotUniform. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object
I do think the custom-object registration is working, because the error mentions ComplexGlorotUniform rather than the string I registered. Also, the layer itself is created correctly; it's only when calling the clone_model method that things break.

I am still not sure why, but I solved it by changing the dispatcher to init_dispatcher = {"ComplexGlorotUniform": ComplexGlorotUniform}, so that the key matches the class name. It works for me, but I am not sure whether this is how it is supposed to work.
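A minimal sketch of the registration that ended up working (assuming ComplexGlorotUniform subclasses tf.keras.initializers.Initializer). Registering the class under its own class name is what lets clone_model re-create the initializer from the serialized layer config; keeping a lowercase alias preserves the kernel_initializer="complex_glorot_uniform" shortcut:
from tensorflow.keras.utils import get_custom_objects

init_dispatcher = {
    "ComplexGlorotUniform": ComplexGlorotUniform,    # key matches the class name used during deserialization
    "complex_glorot_uniform": ComplexGlorotUniform,  # optional alias so kernel_initializer="..." still resolves
}
get_custom_objects().update(init_dispatcher)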

How to use tf.data.Dataset.ignore_errors to ignore errors in a Tensorflow Dataset?

When loading images from a directory in Tensorflow, you use something like:
dataset = tf.keras.utils.image_dataset_from_directory(
    "S:\\Images",
    batch_size=32,
    image_size=(128, 128),
    label_mode=None,
    validation_split=0.20,  # reserve 20% of images for validation
    subset='training',      # if we specify a validation_split, we *must* specify subset
    seed=619                # if using validation_split we *must* specify a seed to ensure there is no overlap between training and validation data
)
But of course some of the images (.jpg, .png, .gif, .bmp) will be invalid. So we want to ignore those errors; just skip them (and ideally log the filenames so they can be repaired, removed, or deleted).
There have been some ideas along the way of how to ignore invalid images:
Method 1: tf.contrib.data.ignore_errors (Tensorflow 1.x only)
Warning: The tf.contrib module will not be included in TensorFlow 2.0.
Sample usage:
dataset = dataset.apply(tf.contrib.data.ignore_errors())
The only downside of this method is that it was only available in TensorFlow 1. Trying to use it today simply won't work, as the tf.contrib namespace no longer exists. That led to a built-in method:
Method 2: tf.data.experimental.ignore_errors(log_warning=False) (deprecated)
From the documentation:
Creates a Dataset from another Dataset and silently ignores any errors. (deprecated)
Deprecated: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.ignore_errors instead.
Sample usage:
dataset = dataset.apply(tf.data.experimental.ignore_errors(log_warning=True))
And this method works. It works great. And it has the advantage of working.
But it's apparently deprecated, and the documentation says we should use method 3:
Method 3: tf.data.Dataset.ignore_errors(log_warning=False, name=None)
Drops elements that cause errors.
Sample usage:
dataset = dataset.ignore_errors(log_warning=True, name="Loading images from directory")
Except it doesn't work.
The dataset.ignore_errors attribute doesn't exist, and gives the error:
AttributeError: 'BatchDataset' object has no attribute 'ignore_errors'
Which means:
the thing that works is deprecated
they tell us to use this other thing instead
and "provide the instructions for updating"
but the other thing doesn't work
So we ask Stack Overflow:
How do I use tf.data.Dataset.ignore_errors to ignore errors?
Bonus Reading
TensorFlow Dataset `.map` - Is it possible to ignore errors?
TensorFlow: How to skip broken data
Untested Workaround
Not only is it not what I was asking, but people are not allowed to read this:
It looks like the tf.data.Dataset.ignore_errors() method is not available in the BatchDataset object, which is what you are using in your code. You can try using tf.data.Dataset.filter() to filter out elements that cause errors when loading the images. You can use a try-except block inside the function passed to filter() to catch the errors and return False for elements that cause errors, which will filter them out. Here's an example of how you can use filter() to achieve this:
def filter_fn(x):
    try:
        # Load the image and do some processing
        # Return True if the image is valid, False otherwise
        return True
    except:
        return False

dataset = dataset.filter(filter_fn)
Alternatively, you can use the tf.data.experimental.ignore_errors() method, which is currently available in TensorFlow 2.x. This method will silently ignore any errors that occur while processing the elements of the dataset. However, keep in mind that this method is experimental and may be removed or changed in a future version.
tf.data.Dataset.ignore_errors was introduced in TensorFlow 2.11. You can use tf.data.experimental.ignore_errors for older versions like so:
dataset.apply(tf.data.experimental.ignore_errors())
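For completeness, a minimal sketch of branching on the TensorFlow version (the hasattr check is just one way to do it; dataset is the tf.data.Dataset built above):
import tensorflow as tf

if hasattr(dataset, "ignore_errors"):   # TF 2.11+
    dataset = dataset.ignore_errors(log_warning=True)
else:                                   # earlier TF 2.x
    dataset = dataset.apply(tf.data.experimental.ignore_errors(log_warning=True))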

TFX custom config argument in trainer not working

This question is based on the TFX recommender tutorial. Please note that the code is being orchestrated by LocalDagRunner rather than run interactively in a notebook.
In the Trainer, we pass in a custom_config with the transformed ratings and movies:
trainer = tfx.components.Trainer(
    module_file=os.path.abspath(_trainer_module_file),
    examples=ratings_transform.outputs['transformed_examples'],
    transform_graph=ratings_transform.outputs['transform_graph'],
    schema=ratings_transform.outputs['post_transform_schema'],
    train_args=tfx.proto.TrainArgs(num_steps=500),
    eval_args=tfx.proto.EvalArgs(num_steps=10),
    custom_config={
        'epochs': 5,
        'movies': movies_transform.outputs['transformed_examples'],
        'movie_schema': movies_transform.outputs['post_transform_schema'],
        'ratings': ratings_transform.outputs['transformed_examples'],
        'ratings_schema': ratings_transform.outputs['post_transform_schema']
    })
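On the consuming side, the trainer module reads these values back through the fn_args passed to run_fn; a minimal sketch (run_fn and fn_args are the standard TFX Trainer hooks, and the keys are the ones from custom_config above):
def run_fn(fn_args):
    custom = fn_args.custom_config or {}
    epochs = custom['epochs']
    movies = custom['movies']    # expected: the artifact(s) from movies_transform
    ratings = custom['ratings']  # expected: the artifact(s) from ratings_transform
    # ... build and train the model using these inputs ...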
The problem is that all of the outputs passed into custom_config seem to be empty. This results in errors, for example
class MovielensModel(tfrs.Model):

    def __init__(self, user_model, movie_model, tf_transform_output, movies_uri):
        super().__init__()
        self.movie_model: tf.keras.Model = movie_model
        self.user_model: tf.keras.Model = user_model
        movies_artifact = movies_uri.get()[0]
complains that movies_uri.get() is empty. The same is true for ratings. However, the ratings passed in through the examples parameter are not empty (the artifact URI is available), so it seems as though custom_config is 'breaking things'.
I have tried debugging it but to no avail. I did notice that arguments in custom_config are serialised and deserialised, but this didn't seem to be the cause of the problem. Does anyone know why this happens and how to resolve this?

How to use Huggingface Data Collator

I was following this tutorial which comes with this notebook.
I plan to use Tensorflow for my project, so I followed this tutorial and added the line
tokenized_datasets = tokenized_datasets["train"].to_tf_dataset(columns=["input_ids"], shuffle=True, batch_size=16, collate_fn=data_collator)
to the end of the notebook.
However, when I ran it, I got the following error:
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
Why didn't this work? How can I use the collator?
The issue is not your code but how the collator is set up: by default it does not return TensorFlow tensors.
If you look at this, you'll see that their collator uses the return_tensors="tf" argument. If you add this to your collator, your code for using the collator will work.
In short, your collator creation should look like
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15, return_tensors="tf")
This will fix the issue.
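Putting the pieces together, a minimal sketch (the checkpoint name is a placeholder from the tutorial, not a requirement):
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")  # placeholder checkpoint

# return_tensors="tf" makes the collator emit TensorFlow tensors; the default
# (PyTorch) return type is what triggered the Float/Long dtype mismatch above.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf"
)

tf_train_dataset = tokenized_datasets["train"].to_tf_dataset(
    columns=["input_ids"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)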

Keras model.get_config() returns list instead of dictionary

I am using tensorflow-gpu==1.10.0 and keras from tensorflow as tf.keras.
I am trying to adapt source code written by someone else to my network.
I saved my network using save_model and load it using load_model. When I use model.get_config(), I expect a dictionary, but I'm getting a list. The Keras documentation also says that get_config returns a dictionary (https://keras.io/models/about-keras-models/).
I tried to check whether the saving method (save_model vs. model.save) makes a difference in how the model is saved, but both give me this error:
TypeError: list indices must be integers or slices, not str
My code block:
model_config = self.keras_model.get_config()
for layer in model_config['layers']:
    name = layer['name']
    if name in update_layers:
        layer['config']['filters'] = update_layers[name]['filters']
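As a hedged workaround (not from the original post): the tf.keras bundled with TF 1.10 returns a plain list of layer configs from Sequential.get_config(), while newer versions return a dict with a 'layers' key, so normalizing first keeps the loop working either way:
model_config = self.keras_model.get_config()

# dict with a 'layers' key in newer versions, plain list in older ones
layer_configs = model_config['layers'] if isinstance(model_config, dict) else model_config

for layer in layer_configs:
    # the layer name may sit at the top level or inside 'config', depending on the version
    name = layer.get('name') or layer['config'].get('name')
    if name in update_layers:
        layer['config']['filters'] = update_layers[name]['filters']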
My pip freeze:
absl-py==0.6.1
astor==0.7.1
bitstring==3.1.5
coverage==4.5.1
cycler==0.10.0
decorator==4.3.0
Django==2.1.3
easydict==1.7
enum34==1.1.6
futures==3.1.1
gast==0.2.0
geopy==1.11.0
grpcio==1.16.1
h5py==2.7.1
image==1.5.15
ImageHash==3.7
imageio==2.5.0
imgaug==0.2.5
Keras==2.1.3
kiwisolver==1.1.0
lxml==4.1.1
Markdown==3.0.1
matplotlib==2.1.0
networkx==2.2
nose==1.3.7
numpy==1.14.1
olefile==0.46
opencv-python==3.3.0.10
pandas==0.20.3
Pillow==4.2.1
prometheus-client==0.4.2
protobuf==3.6.1
pyparsing==2.3.0
pyquaternion==0.9.2
python-dateutil==2.7.5
pytz==2018.7
PyWavelets==1.0.1
PyYAML==3.12
Rtree==0.8.3
scikit-image==0.13.1
scikit-learn==0.19.1
scipy==0.19.1
Shapely==1.6.4.post1
six==1.11.0
sk-video==1.1.8
sklearn-porter==0.6.2
tensorboard==1.10.0
tensorflow-gpu==1.10.0
termcolor==1.1.0
tqdm==4.19.4
utm==0.4.2
vtk==8.1.0
Werkzeug==0.14.1
xlrd==1.1.0
xmltodict==0.11.0

Error evaluating a TensorArray in a while loop

I've built the following TensorArray:
ta = tf.TensorArray(
    dtype=tf.float32,
    size=0,
    dynamic_size=True,
    element_shape=tf.TensorShape([None, None])
)
and called ta = ta.write(idx, my_tensor) inside a while_loop.
When evaluating the output = ta.stack() tensor in a session, I receive this error message:
ValueError: Cannot use '.../TensorArrayWrite/TensorArrayWriteV3' as
input to '.../TensorArrayStack_1/TensorArraySizeV3' because
'.../TensorArrayWrite/TensorArrayWriteV3' is in a while loop. See info
log for more details.
I don't understand this error message. Could you please help me?
Update: A minimal example might be difficult to come up with, but this is what I am doing: I am using the reference to the ta TensorArray inside the cell_input_fn of AttentionWrapper. This callback is used in AttentionWrapper's call method, where another TensorArray named alignment_history is being written. So the while_loop code is not designed by me; it's part of the TF dynamic RNN computation, tf.nn.dynamic_rnn.
Not sure if this is what's biting you, but you have to make sure your while_loop body function takes the TensorArray as input and emits an updated one as output, and you have to use the final version of the TensorArray returned by the while_loop:
def fn(ta_old):
    return ta_old.write(...)

ta_final = tf.while_loop(..., body=fn, loop_vars=[tf.TensorArray(...)])
values = ta_final.stack()
Specifically, you should never access ta_old outside of fn().
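A minimal, self-contained sketch of that pattern (TF 1.x graph-mode style, to match the question; the shapes and loop bound are arbitrary):
import tensorflow as tf

ta = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)

def cond(i, ta):
    return i < 5

def body(i, ta):
    # write to the array we were handed and return the updated handle
    value = tf.fill([2, 3], tf.cast(i, tf.float32))
    return i + 1, ta.write(i, value)

# the TensorArray is threaded through the loop as a loop variable
_, ta_final = tf.while_loop(cond, body, loop_vars=[tf.constant(0), ta])
output = ta_final.stack()  # shape (5, 2, 3)

with tf.Session() as sess:
    print(sess.run(output))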