Failed to convert object of type <class 'function'> to Tensor - tensorflow

I am trying to randomize flip augmentation using TensorFlow's tf.image.flip_left_right and tf.image.flip_up_down functions, and I am getting an error when selecting between them based on a boolean condition via tf.cond():
random_number = tf.random_uniform([], seed=seed)
print_random_number = tf.print(random_number)
flip_strategy = tf.less(random_number, 0.5)
Version 0.1:
image = tf.cond(
    flip_strategy,
    tf.image.flip_left_right(image),
    tf.image.flip_up_down(image),
)
Version 0.2:
image = tf.cond(
    flip_strategy,
    lambda: tf.image.flip_left_right(image),
    lambda: tf.image.flip_up_down(image),
)
ERROR:
TypeError: Failed to convert object of type <class 'function'> to Tensor. Consider casting elements to a supported type.
Let me know what I am missing or if more info is needed.

From the documentation:
tf.math.less(
    x,
    y,
    name=None
)
Args:
x: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
y: A Tensor. Must have the same type as x.
name: A name for the operation (optional).
So tf.less expects two tensors, but one of the arguments you pass is a NumPy array. You could just convert it into a tensor, like:
random_number = tf.random_uniform([], seed=seed)
print_random_number = tf.print(random_number)
random_number = tf.convert_to_tensor(random_number, dtype=tf.float32)
flip_strategy = tf.less(random_number, 0.5)
image = tf.cond(
    flip_strategy,
    lambda: tf.image.flip_left_right(image),  # tf.cond expects callable branches
    lambda: tf.image.flip_up_down(image),
)
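Putting the pieces together, a minimal end-to-end sketch of the randomized flip (assuming TF 1.x, where tf.random_uniform is available; both branches are wrapped in lambdas since tf.cond expects callables):

import tensorflow as tf

def random_flip(image, seed=None):
    # Draw a scalar in [0, 1); flip left/right if it is below 0.5,
    # otherwise flip up/down.
    random_number = tf.random_uniform([], seed=seed)
    flip_strategy = tf.less(random_number, 0.5)
    return tf.cond(
        flip_strategy,
        lambda: tf.image.flip_left_right(image),
        lambda: tf.image.flip_up_down(image),
    )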

Difference between tf.add() and tensorflow.keras.layers.Add()

I have implemented a deep learning model (FCN-8s) in TensorFlow and initially used tf.add(x, y) to perform the tensor addition. However, when plotting the architecture, the addition layers seem to be disconnected from the rest, although in the summary it appears that the result of this operation is effectively passed to the next layer.
When using tensorflow.keras.layers.Add or tensorflow.keras.layers.add, the addition layers are correctly connected to the rest of the network.
My question is pretty simple:
Is there any difference between using tf.add() and tensorflow.keras.layers.Add()?
Besides, is there any difference between tensorflow.keras.layers.Add() and tensorflow.keras.layers.add()? I have seen some code using Add with an uppercase A and other code with lowercase.
tf.add and tensorflow.keras.layers.Add have different implementations.
In gen_math_ops.py (a file generated by the system):
tf.add:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
    _ctx._context_handle, tld.device_name, "Add", name, tld.op_callbacks,
    x, y)
tensorflow.keras.layers.Add:
_result = pywrap_tfe.TFE_Py_FastPathExecute(
    _ctx._context_handle, tld.device_name, "AddV2", name,
    tld.op_callbacks, x, y)
Their corresponding C++ op registrations:
REGISTER_OP("Add")
    .Input("x: T")
    .Input("y: T")
    .Output("z: T")
    .Attr(
        "T: {bfloat16, half, float, double, uint8, int8, int16, int32, int64, "
        "complex64, complex128, string}")
    .SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn);

// TODO(rmlarsen): Add a Python wrapper that swiches non-string instances to
// use AddV2 (b/68646025).
REGISTER_OP("AddV2")
    .Input("x: T")
    .Input("y: T")
    .Output("z: T")
    .Attr(
        "T: {bfloat16, half, float, double, uint8, int8, int16, int32, int64, "
        "complex64, complex128}")
    .SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn)
    .SetIsAggregate()
    .SetIsCommutative();
That is to say, tf.add maps to the 'Add' op, while tensorflow.keras.layers.Add maps to 'AddV2'.
tensorflow.keras.layers.Add and tensorflow.keras.layers.add are the same: tensorflow.keras.layers.add is just the functional interface to tensorflow.keras.layers.Add.
I can't reproduce the addition layers being disconnected from the rest when using tf.add.
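For what it's worth, a quick sketch contrasting the three calls on plain tensors (TF 2.x eager mode assumed; all three produce the same values):

import tensorflow as tf
from tensorflow.keras import layers

x = tf.ones((2, 3))
y = tf.ones((2, 3))

print(tf.add(x, y))          # raw op, not a Keras layer object
print(layers.Add()([x, y]))  # layer class: instantiate, then call on a list
print(layers.add([x, y]))    # functional wrapper that creates an Add() layer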

Tensorflow 2 custom dataset Sequence

I have a dataset in a Python dictionary. The structure is as follows:
data.data['0']['input'],data.data['0']['target'],data.data['0']['length']
Both input and target are arrays of size (n,), and length is an int.
I have created a class that subclasses tf.keras.utils.Sequence and specified __getitem__ like this:
def __getitem__(self, idx):
    idx = str(idx)
    return {
        'input': np.asarray(self.data[idx]['input']),
        'target': np.asarray(self.data[idx]['target']),
        'length': self.data[idx]['length']
    }
How can I iterate over such a dataset using tf.data.Dataset? I am getting this error if I try to use from_tensor_slices:
ValueError: Attempt to convert a value with an unsupported type (<class 'dict'>) to a Tensor.
I think you should convert the dictionary to a tensor, as proposed here: convert a dictionary to a tensor. Alternatively, change the dictionary to a text file or to TFRecords. Hope this helps!
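If a tf.data.Dataset is specifically the goal, one workaround is tf.data.Dataset.from_generator. A sketch, assuming the Sequence instance is named my_sequence, that it also implements __len__, and that input/target are float arrays:

import tensorflow as tf

def gen():
    # Yield one example dict at a time from the Sequence.
    for i in range(len(my_sequence)):
        yield my_sequence[i]

dataset = tf.data.Dataset.from_generator(
    gen,
    output_types={'input': tf.float32, 'target': tf.float32, 'length': tf.int32},
)

for example in dataset.take(1):
    print(example['input'].shape, example['length'])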

K.cast and tf.cast won't transform datatype

I am working with the Keras Functional API.
The line nonNegActivity = K.cast(K.greater_equal(activity, 0.05), tf.float32) should transform my activity to bool and then to float32, yet a TypeError is raised when calling fit, stating:
TypeError: Value passed to parameter 'values' has DataType bool not in list of allowed values: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, float16, uint32, uint64
Whole model:
X = Input(shape=(self.Tx, self.kx,))
lstm_regr = LSTM(400, return_sequences=True, activation="tanh")(X)
regr = Dense(self.ky)(lstm_regr)
lstm_activity = LSTM(400, return_sequences=True, activation="sigmoid")(X)
activity = Dense(self.ky)(lstm_activity)
nonNegActivity = K.cast(K.greater_equal(activity, 0.05), tf.float32)
multiplied = Multiply()([nonNegActivity, regr])
out = [multiplied, activity]
model = Model(inputs=X, outputs=out)
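For reference, the cast itself does behave as described when run in isolation on a plain tensor; a minimal eager-mode sketch (TF 2.x assumed, values illustrative), which does not reproduce the graph-mode error above:

import tensorflow as tf
from tensorflow.keras import backend as K

activity = tf.constant([[0.01, 0.5], [0.9, 0.02]])
# bool mask from the threshold, then cast to float32.
mask = K.cast(K.greater_equal(activity, 0.05), tf.float32)
print(mask)  # [[0. 1.] [1. 0.]]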

Using static rnn getting TypeError: Cannot convert value None to a TensorFlow DType

First, some of my code:
...
fc_1 = layers.Dense(256, activation='relu')(drop_reshape)
bi_LSTM_2 = layers.Lambda(buildGruLayer)(fc_1)
...
def buildGruLayer(inputs):
    gru_cells = []
    gru_cells.append(tf.contrib.rnn.GRUCell(256))
    gru_cells.append(tf.contrib.rnn.GRUCell(128))
    gru_layers = tf.keras.layers.StackedRNNCells(gru_cells)
    inputs = tf.unstack(inputs, axis=1)
    outputs, _ = tf.contrib.rnn.static_rnn(
        gru_layers,
        inputs,
        dtype='float32')
    return outputs
The error I am getting when running static_rnn is:
raise TypeError("Cannot convert value %r to a TensorFlow DType." % type_value)
TypeError: Cannot convert value None to a TensorFlow DType.
The input coming into the layer has shape (64, 238, 256).
Does anyone have a clue what the problem could be? I already googled the error but couldn't find anything. Any help is much appreciated.
If anyone still needs a solution to this: it's because you need to specify the dtype for the GRUCell, e.g. tf.float32.
Its default is None, which according to the documentation defaults to the first dimension of your input data (i.e. the batch dimension, which in TensorFlow is a ? or None).
Check the dtype argument here:
https://www.tensorflow.org/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell
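Applied to the snippet above, that would look something like this (a sketch, assuming TF 1.x where tf.contrib is still available):

# Give the cells an explicit dtype so static_rnn does not have to
# infer one from the (None) batch dimension.
gru_cells = []
gru_cells.append(tf.contrib.rnn.GRUCell(256, dtype=tf.float32))
gru_cells.append(tf.contrib.rnn.GRUCell(128, dtype=tf.float32))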

Converting a PySpark dataframe fails on 'NoneType' object

I have a PySpark dataframe 'data3' with many columns. I am trying to run k-means on it, excluding the first two columns. When I run my code, tasks always fail with TypeError: float() argument must be a string or a number, not 'NoneType'. What am I doing wrong?
def f(x):
    rel = {}
    #rel['features'] = Vectors.dense(float(x[0]),float(x[1]),float(x[2]),float(x[3]))
    rel['features'] = Vectors.dense(float(x[2]),float(x[3]),float(x[4]),float(x[5]),float(x[6]),float(x[7]),float(x[8]),float(x[9]),float(x[10]),float(x[11]),float(x[12]),float(x[13]),float(x[14]),float(x[15]),float(x[16]),float(x[17]),float(x[18]),float(x[19]),float(x[20]),float(x[21]),float(x[22]),float(x[23]),float(x[24]),float(x[25]),float(x[26]),float(x[27]),float(x[28]),float(x[29]),float(x[30]),float(x[31]),float(x[32]),float(x[33]),float(x[34]),float(x[35]),float(x[36]),float(x[37]),float(x[38]),float(x[39]),float(x[40]),float(x[41]),float(x[42]),float(x[43]),float(x[44]),float(x[45]),float(x[46]),float(x[47]),float(x[48]),float(x[49]))
    return rel
data = data3.rdd.map(lambda p: Row(**f(p))).toDF()
kmeansmodel = KMeans().setK(7).setFeaturesCol('features').setPredictionCol('prediction').fit(data)
TypeError: float() argument must be a string or a number, not 'NoneType'
Your error comes from converting the xs to float: you probably have missing values in this line.
rel['features'] = Vectors.dense(float(x[2]),float(x[3]),float(x[4]),float(x[5]),float(x[6]),float(x[7]),float(x[8]),float(x[9]),float(x[10]),float(x[11]),float(x[12]),float(x[13]),float(x[14]),float(x[15]),float(x[16]),float(x[17]),float(x[18]),float(x[19]),float(x[20]),float(x[21]),float(x[22]),float(x[23]),float(x[24]),float(x[25]),float(x[26]),float(x[27]),float(x[28]),float(x[29]),float(x[30]),float(x[31]),float(x[32]),float(x[33]),float(x[34]),float(x[35]),float(x[36]),float(x[37]),float(x[38]),float(x[39]),float(x[40]),float(x[41]),float(x[42]),float(x[43]),float(x[44]),float(x[45]),float(x[46]),float(x[47]),float(x[48]),float(x[49]))
return rel
You can check each value and only convert it to float when it is not missing. For example:
list_of_xs = [x[2], x[3], x[4], x[5], x[6]]  # etc.
converted = [float(v) if v is not None else None for v in list_of_xs]
Or drop rows containing nulls up front with data3.dropna().
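For example, a sketch of f(x) that fills missing values with 0.0 (whether 0.0 is a sensible fill value depends on your data):

from pyspark.ml.linalg import Vectors

def f(x):
    # Columns 2..49 become the feature vector; None is replaced by 0.0.
    values = [float(v) if v is not None else 0.0 for v in x[2:50]]
    return {'features': Vectors.dense(values)}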