I am using keras==2.0.8 with tensorflow==1.3.0 backend.
Here is the example that confuses me:
from keras.layers import Input, Reshape, Conv2DTranspose
x = Input((5000,))
y = Reshape((25, 25, 8))(x)
y = Conv2DTranspose(10, 5, padding='same', strides=2)(y)
print(y)
It's just part of my model; after these lines I use y in some TensorFlow operations, but the code above prints a node of shape (?, ?, ?, 10). I have no idea why TF cannot statically deduce the height and width of the resulting tensor. (I know that Keras can, but I want a TF node with the proper shape.)
If you intend to use these TensorFlow operations in a Keras model, you have to use them inside Lambda layers.
In the function you create for the Lambda layer, you can use the given tensor normally. Unless you have a very specific reason for TensorFlow to have this fixed size explicit, there won't be any problem. Is there any special need that demands that you have the TensorFlow tensor with an explicit shape?
In Keras, you can always call K.shape() on a Keras tensor to get its shape. Many Keras backend functions can take this shape (mostly with the TensorFlow backend) as input. If you use Keras backend functions instead of pure TensorFlow functions, your code may be portable to other backends later.
Example of function:
import keras.backend as K

def tensorflowPart(x):
    # do TensorFlow operations with the tensor x
    shape = K.shape(x)  # use the shape of the tensor, as a tensor
    # more TensorFlow operations
    return result
Use the lambda layer in your model:
y = Lambda(tensorflowPart)(y)
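If you really do need the static shape on the TensorFlow side, one possible workaround (a sketch only, assuming the example from the question, where strides=2 with padding='same' doubles 25 to 50; the function name fixShape is mine) is to pin the shape manually inside such a function with set_shape:

import tensorflow as tf
from keras.layers import Lambda

def fixShape(x):
    x = tf.identity(x)  # make a new node so the layer has its own output tensor
    x.set_shape([None, 50, 50, 10])  # assert the static shape TF failed to infer
    return x

y = Lambda(fixShape)(y)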
Related
I can't find a simple way to convert a tensor to a NumPy array without enabling eager mode, which gives a nice .numpy() method but also slows down my model training.
I'd be super grateful for your suggestions. For context, I'm writing a custom metric for my TensorFlow model that relies on a scikit-learn function, which only takes NumPy arrays.
I've tried wrapping the tensors with np.array(), which raises a NotImplementedError. I also gave sessions and .eval() a go, but didn't get them to work either, and they seemed like overkill for this simple job.
My specific error:
NotImplementedError: Cannot convert a symbolic Tensor (model_17/dense_17/Sigmoid:0) to a numpy array.
import numpy as np
from sklearn.metrics import accuracy_score

# Custom metric
def accuracy_ml(y_true, y_pred):
    return accuracy_score(y_true, np.round(y_pred))  # ERROR here: feeding a tensor to the sklearn function
# Model
cnn = simple_model(input_shape=(224, 224, 3),
                   num_classes=10,
                   base_model=base_ResNet101)
lr = 1e-2
loss_fn = tf.keras.losses.BinaryCrossentropy()
metrics = [accuracy_ml]
cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
            loss=loss_fn,
            metrics=metrics)
# Simple baseline eval that fails
validation_steps = 17
loss0, accuracy0 = cnn.evaluate(validation_batches, steps=validation_steps)
Wrapping my NumPy metric with tf.numpy_function() solved it. https://www.tensorflow.org/api_docs/python/tf/numpy_function
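For reference, a minimal sketch of that fix applied to the metric above (the helper name accuracy_ml_np is mine):

import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score

def accuracy_ml_np(y_true, y_pred):
    # called with concrete NumPy arrays, so the sklearn function works here
    return np.float32(accuracy_score(y_true, np.round(y_pred)))

def accuracy_ml(y_true, y_pred):
    # tf.numpy_function wraps the NumPy implementation as a graph-compatible op
    return tf.numpy_function(accuracy_ml_np, [y_true, y_pred], tf.float32)

The wrapped metric can then be passed to cnn.compile(..., metrics=[accuracy_ml]) exactly as before.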
What are symbolic tensors in TensorFlow and Keras? How are they different from other tensors? Why do they even exist? Where do they come up in TensorFlow and Keras? How should we deal with them, and what problems can we face when dealing with them?
In the past, I have faced certain issues related to symbolic tensors, such as the _SymbolicException, but the documentation does not describe this concept. There's also another post where this question is asked, but here I am focusing on this specific question, so that answers can later be used as a reference.
According to blog.tensorflow.org, a symbolic tensor differs from other tensors in that it does not specifically hold a value.
Let's consider a simple example.
>>> a = tf.Variable(5, name="a")
>>> b = tf.Variable(7, name="b")
>>> c = (b**2 - a**3)**5
>>> print(c)
The output is as follows:
tf.Tensor(1759441920, shape=(), dtype=int32)
For the above, the values are explicitly defined as tf.Variables, and the output is an ordinary Tensor holding a concrete value; a tensor of this kind must contain a value to be considered as such.
Symbolic tensors are different in that no explicit values are required to define the tensor, and this has implications in terms of building neural networks with TensorFlow 2.0, which now uses Keras as the default API.
Here is an example of a Sequential neural network that is used to build a classification model for predicting hotel cancellation incidences (full Jupyter Notebook here if interested):
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential()
model.add(layers.Dense(8, activation='relu', input_shape=(4,)))
model.add(layers.Dense(1, activation='sigmoid'))
This is a symbolically defined model: no values are explicitly defined in the network. Rather, a framework is created for the input variables to be read by the network, which then generates predictions.
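For instance (a quick sketch, assuming TF 2.x and the model defined above), the model's output tensor is symbolic and holds no value until actual data is passed through it:

import numpy as np

print(model.outputs[0])  # a symbolic tensor of shape (None, 1): no value yet
preds = model.predict(np.random.rand(3, 4))  # concrete values appear only here
print(preds.shape)  # (3, 1)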
In this regard, Keras has become quite popular, given that it allows graphs to be built using symbolic tensors while maintaining an imperative layout.
Trying to use non-Keras-backend functions for custom loss calculation in Keras models.
I am trying to make my Keras CNN model use a custom loss function (kappa score). However, since kappa is not defined in the Keras backend, I need to use the scikit-learn implementation. This sklearn function takes arrays of labels as arguments, unlike Keras backend functions, which take tensors. The loss function call within Keras mostly sends the tensors y_true and y_pred. I did the implementation below using a guide I found online, but I get errors.
import tensorflow as tf
import keras.backend as K
from sklearn.metrics import cohen_kappa_score

def cohen_kappa_score_func(y_true, y_pred):
    sess = tf.Session()
    with sess.as_default():
        # idea is to convert the tensors to arrays via eval()
        score = cohen_kappa_score(y_true.eval(), y_pred.eval(), weights='linear')
    sess.close()
    return score
# use this later to compile the keras model with the custom loss function:
model.compile(optimizer=optimizers.SGD(lr=0.001, momentum=0.9),
              loss=cohen_kappa_score_func,
              metrics=['categorical_crossentropy', 'mae', 'categorical_accuracy'])
This doesn't work, and I get the following error:
"InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dense_15_target' with dtype float and shape [?,?]
[[node dense_15_target "
Please give me suggestions to solve this.
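One possible direction (a sketch only, following the tf.numpy_function pattern mentioned earlier; tf.py_func is its TF 1.x counterpart, and the names kappa_np and cohen_kappa_metric are mine): wrap the sklearn call so that it receives plain NumPy arrays. Note, however, that gradients cannot flow through NumPy code, so this can serve as a metric but not as a trainable loss:

import numpy as np
import tensorflow as tf
from sklearn.metrics import cohen_kappa_score

def kappa_np(y_true, y_pred):
    # concrete NumPy arrays here; argmax turns one-hot rows into label indices
    return np.float32(cohen_kappa_score(y_true.argmax(axis=-1),
                                        y_pred.argmax(axis=-1),
                                        weights='linear'))

def cohen_kappa_metric(y_true, y_pred):
    # tf.py_func bridges graph tensors to the NumPy-based function;
    # no gradient flows through it, so use it in metrics=[...], not as the loss
    return tf.py_func(kappa_np, [y_true, y_pred], tf.float32)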
I want to build a customized layer in Keras to do a linear transformation on the output of the last layer.
For example, I get an output X from the last layer, and my new layer should output X.dot(W) + b.
The shape of W is (49, 10), the shape of X should be (64, 49), and the shape of b is (10,).
However, the shape of X is (?, 7, 7, 64), and when I try to reshape it, it becomes shape=(64, ?). What is the meaning of the question mark? Could you tell me a proper way to do a linear transformation on the output of the last layer?
The question mark generally represents the batch size, which has no effect on the model architecture.
You should be able to reshape your X with keras.layers.Reshape((64,49))(X).
You can wrap arbitrary TensorFlow operations such as tf.matmul in a Lambda layer to include them in your Keras model. A minimal working example that does the trick:
import tensorflow as tf
from keras.layers import Dense, Lambda, Input
from keras.models import Model
# fixed (non-trainable) transformation parameters
W = tf.random_normal(shape=(128, 20))
b = tf.random_normal(shape=(20,))
inp = Input(shape=(10,))
x = Dense(128)(inp)
y = Lambda(lambda x: tf.matmul(x, W) + b)(x)
model = Model(inp, y)
Finally: refer to the Keras documentation on how to write custom layers with trainable weights.
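For the trainable version asked about above, a minimal sketch of such a custom layer (assuming the Keras 2 custom-layer API from the documentation; the class name LinearTransform is mine):

from keras import backend as K
from keras.engine.topology import Layer

class LinearTransform(Layer):
    def __init__(self, units, **kwargs):
        self.units = units
        super(LinearTransform, self).__init__(**kwargs)

    def build(self, input_shape):
        # trainable parameters: W of shape (input_dim, units), b of shape (units,)
        self.W = self.add_weight(name='W',
                                 shape=(input_shape[-1], self.units),
                                 initializer='glorot_uniform',
                                 trainable=True)
        self.b = self.add_weight(name='b',
                                 shape=(self.units,),
                                 initializer='zeros',
                                 trainable=True)
        super(LinearTransform, self).build(input_shape)

    def call(self, x):
        return K.dot(x, self.W) + self.b  # the X.dot(W) + b from the question

    def compute_output_shape(self, input_shape):
        return input_shape[:-1] + (self.units,)

Applied to the reshaped (?, 64, 49) tensor from the question, LinearTransform(10) would produce a (?, 64, 10) output.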
For example: I have a tensor with shape (5, 10) and I want back a tensor with shape (5, 10), but the first element should now be the last element. So [1,2,3,4,5] becomes [5,4,3,2,1], and [[1,2,3,4,5],[2,3,4,5,6]] becomes [[2,3,4,5,6],[1,2,3,4,5]].
If it matters, I am using the TensorFlow backend.
Using the Keras backend, there is the reverse function.
import keras.backend as K
flipped = K.reverse(x, axes=0)
For using it in a layer, you can create a Lambda layer:
from keras.layers import *
layer = Lambda(lambda x: K.reverse(x, axes=0), output_shape=(shape of x))
(With the TensorFlow backend, output_shape can be omitted, since Keras infers it automatically.)
(If it's a Sequential model, use model.add(layer); with the functional API, use output = layer(input).)
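A quick check of the backend function (a sketch, assuming the TensorFlow backend), run outside a model:

import numpy as np
import keras.backend as K

x = K.variable(np.array([[1, 2, 3, 4, 5],
                         [2, 3, 4, 5, 6]]))
print(K.eval(K.reverse(x, axes=0)))
# [[2. 3. 4. 5. 6.]
#  [1. 2. 3. 4. 5.]]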