I am new to Python, and I was trying to run this code to load the model for building extraction, but it throws an error that the name 'model' is not found and I am unable to resolve it. Can anyone please help?
This is the code:
from tensorflow.keras.models import load_model
from loss import bce_dice_loss, dice_coef
model = load_model(
    r'D:\RESEARCH\IITR\BUILDING_EXTRACTION_PhD_Final_WORK\DATASET\ISPRS_DATASET\Toronto\Toronto\ImagesDolon\output\weights\my_model.h5',  # raw string so the Windows backslashes are not treated as escapes
    custom_objects={'bce_dice_loss': bce_dice_loss}  # load BCE Dice Loss from loss.py into the model
)
The error:
ValueError                                Traceback (most recent call last)
Input In [18], in <cell line: 4>()
      1 from tensorflow.keras.models import load_model
      2 from loss import bce_dice_loss, dice_coef
----> 4 model = load_model('D:\RESEARCH\IITR\BUILDING_EXTRACTION_PhD_Final_WORK\DATASET\ISPRS_DATASET\Toronto\Toronto\ImagesDolon\output\weights\my_model.h5',
      5     custom_objects={ 'bce_dice_loss': bce_dice_loss} # Load BCE Dice Loss from loss.py into model
      6 )

File ~\anaconda3\envs\tensorflow\lib\site-packages\keras\utils\traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     65 except Exception as e:  # pylint: disable=broad-except
     66   filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67   raise e.with_traceback(filtered_tb) from None
     68 finally:
     69   del filtered_tb

File ~\anaconda3\envs\tensorflow\lib\site-packages\keras\utils\generic_utils.py:709, in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    707 obj = module_objects.get(object_name)
    708 if obj is None:
--> 709   raise ValueError(
    710       f'Unknown {printable_module_name}: {object_name}. Please ensure '
    711       'this object is passed to the `custom_objects` argument. See '
    712       'https://www.tensorflow.org/guide/keras/save_and_serialize'
    713       '#registering_the_custom_object for details.')
    715 # Classes passed by name are instantiated with no args, functions are
    716 # returned as-is.
    717 if tf_inspect.isclass(obj):

ValueError: Unknown metric function: dice_coef. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
The error is saying that the model was saved with a custom metric called dice_coef. When re-loading the model, you need to pass that metric in custom_objects as well, just as you have done for the custom loss:
from tensorflow.keras.models import load_model
from loss import bce_dice_loss, dice_coef
model = load_model(
    r'D:\RESEARCH\IITR\BUILDING_EXTRACTION_PhD_Final_WORK\DATASET\ISPRS_DATASET\Toronto\Toronto\ImagesDolon\output\weights\my_model.h5',  # raw string avoids backslash-escape surprises
custom_objects={'bce_dice_loss': bce_dice_loss, 'dice_coef': dice_coef}
)
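An alternative (a sketch, equivalent to passing custom_objects explicitly) is to wrap the load in tf.keras.utils.custom_object_scope, so every deserialization inside the scope can resolve the custom names:

import tensorflow as tf
from loss import bce_dice_loss, dice_coef

# Everything loaded inside this scope can resolve the custom loss and metric by name.
with tf.keras.utils.custom_object_scope({'bce_dice_loss': bce_dice_loss,
                                         'dice_coef': dice_coef}):
    model = tf.keras.models.load_model(r'D:\RESEARCH\IITR\BUILDING_EXTRACTION_PhD_Final_WORK\DATASET\ISPRS_DATASET\Toronto\Toronto\ImagesDolon\output\weights\my_model.h5')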
I have been trying to convert this zero-shot text classification model joeddav/xlm-roberta-large-xnli from an h5 to a tflite file (https://huggingface.co/joeddav/xlm-roberta-large-xnli), but this error pops up and I can't find it described online. How is it fixed? If it can't be, is there another zero-shot text classifier I can use that would produce similar accuracy even after becoming tflite?
AttributeError: 'T5ForConditionalGeneration' object has no attribute 'call'
I have been trying a few different tutorials, and the current Google Colab file I have is an amalgam of a couple of them. https://colab.research.google.com/drive/1sYQJqvhM_KEvMt2IP15d8Ud9L-ApiYv6?usp=sharing
[ Convert a saved .h5 model to a TFLite model ]
There are multiple ways to perform the conversion:
the tflite_convert command-line tool, or
the tf.lite.TFLiteConverter Python API.
Judging from the links you provided, you are trying to convert a saved .h5 model to TFLite, which matches your question.
[ Sample ]:

# Minimal, self-contained version (imports and checkpoint paths added so it runs as-is).
import os
from os.path import exists
import tensorflow as tf

checkpoint_dir = '.\\checkpoint'
checkpoint_path = checkpoint_dir + '\\weights.h5'   # HDF5 weights file, so exists() works directly
os.makedirs(checkpoint_dir, exist_ok=True)

# ---------------------------------------------------------
# Model Initialize
# ---------------------------------------------------------
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(128, activation='relu'),
])
model.compile(optimizer='sgd', loss='mean_squared_error')  # compile the model
model.summary()
model.save_weights(checkpoint_path)

# ---------------------------------------------------------
# FileWriter
# ---------------------------------------------------------
if exists(checkpoint_path):
    model.load_weights(checkpoint_path)
    print("model load: " + checkpoint_path)

tf_lite_model_converter = tf.lite.TFLiteConverter.from_keras_model(model)  # TFLiteKerasModelConverterV2 object
tflite_model = tf_lite_model_converter.convert()

# Save the model.
with open(checkpoint_dir + '\\model.tflite', 'wb') as f:
    f.write(tflite_model)
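To sanity-check the converted file, here is a minimal sketch (assuming the model.tflite written above) that runs one inference through the TFLite interpreter:

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path='.\\checkpoint\\model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one random input matching the (1, 32, 32, 3) input shape.
sample = np.random.random_sample(tuple(input_details[0]['shape'])).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)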
I want my neural network output to be either 0 or 1, not probabilities between 0 and 1.
For this, I have designed a step function for the output layer: I want the output layer to just round off the output of the previous (softmax) layer, i.e. convert the probabilities into 0s and 1s.
My customised function is not giving the expected results.
Kindly help.
My code is:
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects

# Custom activation function
@tf.custom_gradient
def custom_activation(x):
    print("tensor ", x)
    ones = tf.ones(tf.shape(x), dtype=x.dtype.base_dtype)
    zeros = tf.zeros(tf.shape(x), dtype=x.dtype.base_dtype)
    def grad(dy):
        return dy
    print(" INSIDE ACTIVATION FUNCTION ")
    return keras.backend.switch(x > .5, ones, zeros), grad
model = keras.models.Sequential()
model.add(keras.layers.Dense(32, input_dim=a, activation='relu'))  # a = number of input features
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(64, activation="relu"))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(2, activation='softmax'))
model.add(Activation(custom_activation, name='custom_activation'))  # output layer

### Compile the model
model.compile(loss="binary_crossentropy", optimizer='adam', metrics=["accuracy"])
First of all, note that the class probabilities always carry more information than a pure 0-1 classification, so your model will almost always train better and faster on them.
That being said, and assuming you do have an underlying reason to constrain your NN, a hard decision like the one you want to implement as an activation function is known as a step function or Heaviside function. The main problem with this function is that it is non-differentiable (the slope is infinite at the threshold, 0.5 in your case). To address this you have two options:
Create a custom "approximative" gradient that is differentiable. This SO answer covers it well.
Select between ones and zeros element-wise with tf.where(); note that tf.cond() is not suitable here, since it requires a scalar predicate while the layer receives a whole batch tensor.
class MyHeavisideActivation(tf.keras.layers.Layer):
    def __init__(self, num_outputs, threshold=.5, **kwargs):
        super(MyHeavisideActivation, self).__init__(**kwargs)
        self.num_outputs = num_outputs
        self.threshold = threshold

    def build(self, input_shape):
        pass

    def call(self, inputs):
        # tf.where selects element-wise, so it works on batched input tensors.
        return tf.where(inputs > self.threshold,
                        tf.add(tf.multiply(inputs, 0), 1),  # set to 1
                        tf.multiply(inputs, 0))             # set to 0

#> ...same as above
model.add(keras.layers.Dense(2, activation='softmax'))
model.add(MyHeavisideActivation(2, name='custom_activation'))  # output layer
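If you only need the hard 0/1 values at prediction time, a simpler route (a sketch; X_test stands in for your own inputs) is to train on the softmax probabilities and threshold them outside the model:

import numpy as np

probs = model.predict(X_test)           # softmax probabilities, shape (n, 2)
hard = (probs > 0.5).astype(np.int32)   # 0/1 decisions, no custom layer needed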
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
network=models.Sequential() # this initializes a sequential model that we will call network
network.add(layers.Dense(10, activation = 'relu') # this adds a dense hidden layer
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
I am trying to create a 2 layer neural network model in tensorflow and am getting this error:
File "<ipython-input-6-0dde2ff676f8>", line 7
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
^
SyntaxError: invalid syntax
May I know why I'm getting this error for the output layer but not for the hidden layer? Thanks.
You have missed a closing bracket on the hidden-layer line. Python only notices the mistake when it parses the next statement, which is why the SyntaxError points at the output layer rather than at the line that actually contains it.
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
network=models.Sequential() # this initializes a sequential model that we will call network
network.add(layers.Dense(10, activation = 'relu')) # this adds a dense hidden layer
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
Update: As rvinas pointed out, I had forgotten to add inputs_aux as a second input to Model. Fixed now, and it works, so ConditionalRNN can readily be used to do what I want.
I'd like to treat time-series together with non-time-series characteristics in extended LSTM cells (a requirement also discussed here). ConditionalRNN (cond-rnn) for Tensorflow in Python seems to allow this.
Can it be used in Keras Functional API (without eager execution)?
That is, does anyone have a clue how to fix my failed approach below, or a different example where ConditionalRNN (or alternatives) are used to readily combine TS and non-TS data in LSTM-style cells or any equivalent?
I've seen the bare-TF eager-execution example on Philippe Remy's ConditionalRNN GitHub page, but I did not manage to extend it to a readily fittable version in the Keras Functional API.
My code looks as follows; it works if, instead of the ConditionalRNN, I use a standard LSTM cell (and adjust the model's 'x' input correspondingly). With ConditionalRNN, I did not get it to execute: I receive either the must feed a value for placeholder tensor 'in_aux' error (cf. below) or various other input-size complaints when I change the code, despite trying to be careful about data-dimension compatibility.
(Using Python 3.6, Tensorflow 2.1, cond-rnn 2.1, on Ubuntu 16.04)
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import LSTM, Dense, Input
from cond_rnn import ConditionalRNN
inputs = Input(name='in',shape=(5,5)) # Each observation has 5 dimensions with 5 time-steps each
x = Dense(64)(inputs)
inputs_aux = Input(name='in_aux', shape=[5]) # For each of the 5 dimensions, a non-time-series observation too
x = ConditionalRNN(7, cell='LSTM')([x,inputs_aux]) # Updated Syntax for cond_rnn v2.1
# x = ConditionalRNN(7, cell='LSTM', cond=inputs_aux)(x) # Syntax for cond_rnn in some version before v2.1
predictions = Dense(1)(x)
model = Model(inputs=[inputs, inputs_aux], outputs=predictions) # With this fix, [inputs, inputs_aux], it now works, solving the issue
#model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop', loss='mean_squared_error', metrics=['mse'])
data = np.random.standard_normal([100,5,5]) # Sample of 100 observations with 5 dimensions and 5 time-steps each
data_aux = np.random.standard_normal([100,5]) # Sample of 100 observations with 5 dimensions, only 1 non-time-series value each
labels = np.random.standard_normal(size=[100]) # For each of the 100 obs., a corresponding (single) outcome variable
model.fit([data,data_aux], labels)
The error I get is
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'in_aux' with dtype float and shape [?,5]
[[{{node in_aux}}]]
and the traceback is
Traceback (most recent call last):
File "/home/florian/temp_nonclear/playground/test/est1ls_bare.py", line 20, in <module>
model.fit({'in': data, 'in_aux': data_aux}, labels) #model.fit([data,data_aux], labels) # Also crashes when using model.fit({'in': data, 'in_aux': data_aux}, labels)
File "/home/florian/BB/tsgenerator/ts_wgan/venv/lib/python3.5/site-packages/tensorflow/python/keras/engine/training.py", line 643, in fit
use_multiprocessing=use_multiprocessing)
File "/home/florian/BB/tsgenerator/ts_wgan/venv/lib/python3.5/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 664, in fit
steps_name='steps_per_epoch')
File "/home/florian/BB/tsgenerator/ts_wgan/venv/lib/python3.5/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 383, in model_iteration
batch_outs = f(ins_batch)
File "/home/florian/BB/tsgenerator/ts_wgan/venv/lib/python3.5/site-packages/tensorflow/python/keras/backend.py", line 3353, in __call__
run_metadata=self.run_metadata)
File "/home/florian/BB/tsgenerator/ts_wgan/venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1458, in __call__
run_metadata_ptr)
I noticed that you are not passing inputs_aux as input to your model. TF is complaining because this tensor is required to compute your output predictions and it is not being fed with any value. Defining your model as follows should solve the problem:
model = Model(inputs=[inputs, inputs_aux], outputs=predictions)
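With both inputs declared, either of the calling conventions from the question then works (a quick sketch, reusing the arrays defined above):

model = Model(inputs=[inputs, inputs_aux], outputs=predictions)
model.compile(optimizer='rmsprop', loss='mean_squared_error', metrics=['mse'])

model.fit([data, data_aux], labels)                  # positional, in Input-definition order
model.fit({'in': data, 'in_aux': data_aux}, labels)  # keyed by the Input layer names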
I am trying to set up the decode functionality of textsum with TensorFlow Serving, but I haven't been able to fully work out what is necessary beyond the MNIST tutorial. Has anyone come across any other tutorials on setting up TensorFlow Serving models, or even something more closely aligned with textsum? Any help or direction would be great. Thanks!
In the end, I am trying to export the decode functionality from a model trained via 'train' in seq2seq_attention.py, found here: https://github.com/tensorflow/models/blob/master/textsum/seq2seq_attention.py
When comparing the two files below to work out what I need to do for the textsum model, I am having difficulty making sense of what needs to be assigned in the "default_graph_signature, input tensor, classes_tensor, etc." I realize these may not map directly onto the textsum model; that is exactly what I am trying to clear up, and I figured that seeing how some other models are exported to TensorFlow Serving might make it a bit clearer.
Compared:
https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/examples/tutorials/mnist/mnist_softmax.py
and
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_export.py
------------------ Edit -------------------
Below is what I have so far, but I am having a few issues. I am trying to set up the textsum Eval functionality for serving. First, I get an error stating "no variables to save" when the Saver(sharded=True) assignment occurs. That aside, I also don't understand what I am supposed to assign to the "classification_signature" and "named_graph_signature" variables for exporting the results of textsum decode.
Any help on what I'm missing here would be appreciated... I'm sure it is a bit.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

tf.app.flags.DEFINE_string('export_dir', 'exports/textsum',
                           'Directory where to export textsum model.')
tf.app.flags.DEFINE_string('checkpoint_dir', 'log_root',
                           'Directory where to read training checkpoints.')
tf.app.flags.DEFINE_integer('export_version', 1, 'version number of the model.')
tf.app.flags.DEFINE_bool('use_checkpoint_v2', False,
                         'If true, write v2 checkpoint files.')
FLAGS = tf.app.flags.FLAGS


def Export():
    try:
        saver = tf.train.Saver(sharded=True)
        with tf.Session() as sess:
            # Restore variables from training checkpoints.
            ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
            if ckpt and ckpt.model_checkpoint_path:
                saver.restore(sess, ckpt.model_checkpoint_path)
                global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                print('Successfully loaded model from %s at step=%s.' %
                      (ckpt.model_checkpoint_path, global_step))
            else:
                print('No checkpoint file found at %s' % FLAGS.checkpoint_dir)
                return

            # Export model
            print('Exporting trained model to %s' % FLAGS.export_dir)
            init_op = tf.group(tf.initialize_all_tables(), name='init_op')
            model_exporter = exporter.Exporter(saver)
            classification_signature = <-- Unsure what should be assigned here
            named_graph_signature = <-- Unsure what should be assigned here
            model_exporter.init(
                init_op=init_op,
                default_graph_signature=classification_signature,
                named_graph_signatures=named_graph_signature)
            model_exporter.export(FLAGS.export_dir, tf.constant(global_step), sess)
            print('Successfully exported model to %s' % FLAGS.export_dir)
    except:
        err = sys.exc_info()
        print('Unexpected error:', err[0], ' - ', err[1])


def main(_):
    Export()


if __name__ == "__main__":
    tf.app.run()
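(On the first issue: tf.train.Saver raises "no variables to save" when it is constructed before any variables exist in the graph, so the textsum model graph has to be built before the Saver is created.) For comparison, the mnist_export.py example linked above fills the two signature arguments roughly as sketched below; in_tensor and out_tensor are hypothetical placeholders for the actual input and output tensors of the textsum decode graph, and for a seq2seq model the generic signatures are likely the more meaningful part:

# Sketch only, following the pattern in mnist_export.py (session_bundle API).
# in_tensor / out_tensor are placeholders, not real textsum tensor names.
classification_signature = exporter.classification_signature(
    input_tensor=in_tensor,
    classes_tensor=out_tensor)
named_graph_signature = {
    'inputs': exporter.generic_signature({'article': in_tensor}),
    'outputs': exporter.generic_signature({'summary': out_tensor})}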