Reshaping a tensor with tf.reshape during the forward pass causes a graph disconnected error in TensorFlow.
For example,
...
image = tf.keras.layers.Input(INPUT_SHAPE, name='image', dtype=tf.uint8)
image = tf.reshape(image, shape = (-1, 1344, 768, 1))
image_norm = normalize(image)
...
The above code causes the following error:
Graph disconnected: cannot obtain value for tensor
KerasTensor(type_spec=TensorSpec(shape=(None, 8, 1344, 768, 1),
dtype=tf.uint8, name='image'), name='image', description="created by
layer 'image'") at layer "tf.reshape". The following previous layers
were accessed without issue: []
Is there any way to reshape the tensor without disconnecting the graph?
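One common fix, sketched below (not from the original post): keep the Input tensor in its own variable and pass that variable to Model(inputs=...), building the reshape on top of it. The INPUT_SHAPE value is an assumption inferred from the error message, and the cast-and-scale line is a stand-in for the original normalize() step:

import tensorflow as tf

INPUT_SHAPE = (8, 1344, 768, 1)  # assumption, inferred from the error message

image_in = tf.keras.layers.Input(INPUT_SHAPE, name='image', dtype=tf.uint8)
# Do not overwrite the Input tensor; it must be passed to Model(inputs=...).
x = tf.reshape(image_in, shape=(-1, 1344, 768, 1))
x = tf.cast(x, tf.float32) / 255.0  # stand-in for the original normalize()
# ... rest of the network built on x ...
model = tf.keras.Model(inputs=image_in, outputs=x)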
Related
I am using VGG16 for transfer learning. My images are grayscale, so I need to change the input shape of VGG16 from (224, 224, 3) to (224, 224, 1). I tried the following code and got an error:
TypeError: build() takes from 1 to 2 positional arguments but 4 were given
Can anyone help me figure out where I am going wrong?
vgg16_model = load_model('Fetched_VGG.h5')
vgg16_model.summary()

# transform the model to Sequential
model = Sequential()
for layer in vgg16_model.layers[1:-1]:
    model.add(layer)

# Freezing the layers (prevent weights from being updated)
for layer in model.layers:
    layer.trainable = False

model.build(224, 224, 1)
model.add(Dense(2, activation='softmax', name='predictions'))
You can't. Even if you get rid of the input layer, this model's graph has already been built, and its first conv layer expects an input with 3 channels. I don't think there is an easy workaround to make it accept 1 channel, if there is one at all.
You need to repeat your data along the channel dimension so that the same grayscale image fills all 3 bands instead of RGB; that works just fine.
If your image has the shape (224, 224, 1):
import numpy as np
gray_image_3band = np.repeat(gray_img, repeats = 3, axis = -1)
If your image has the shape (224, 224):
gray_image_3band = np.repeat(gray_img[..., np.newaxis], repeats = 3, axis = -1)
You don't need to call model.build() anymore this way; keep the input layer. But if you ever wanted to call it, you would need to pass the shape as a tuple, like this:
model.build( (224, 224, 1) ) # this is correct, notice the parentheses
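To tie this together, here is a rough end-to-end sketch under the same assumptions; gray_imgs and labels are hypothetical NumPy arrays of shape (num_samples, 224, 224, 1) and (num_samples, 2), and 'Fetched_VGG.h5' is the saved VGG16 model from the question:

import numpy as np
from keras.models import load_model, Sequential
from keras.layers import Dense

# Repeat the single channel three times so the data matches VGG16's 3-channel input.
rgb_like = np.repeat(gray_imgs, repeats=3, axis=-1)  # (num_samples, 224, 224, 3)

vgg16_model = load_model('Fetched_VGG.h5')

# Copy every layer except the final classifier, keeping the input layer,
# so there is no need to call model.build().
model = Sequential()
for layer in vgg16_model.layers[:-1]:
    model.add(layer)

for layer in model.layers:
    layer.trainable = False  # freeze the pretrained weights

model.add(Dense(2, activation='softmax', name='predictions'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(rgb_like, labels, epochs=..., batch_size=...)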
Following the example at https://www.tensorflow.org/guide/datasets#preprocessing_data_with_datasetmap, I want to create a tf.data.Dataset which takes in paths to images and maps them to image tensors.
My first attempt was the following, which is very similar to the example in the above link:
import tensorflow as tf

def input_parser(image_path):
    image_data_string = tf.read_file(image_path)
    image_decoded = tf.image.decode_png(image_data_string, channels=3)
    image_float = tf.image.convert_image_dtype(image_decoded, dtype=tf.float32)
    return image_float

def train_model():
    image_paths = ['test_image1.png', 'test_image2.png', 'test_image3.png']
    dataset = tf.data.Dataset.from_tensor_slices(image_paths)
    dataset = dataset.map(map_func=input_parser)
    iterator = dataset.make_initializable_iterator()
    input_images = iterator.get_next()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.initializer)
        for i in range(3):
            x = sess.run(input_images)
            print(x.shape)
This seemed to work ok, and printed out:
(64, 64, 3)
(64, 64, 3)
(64, 64, 3)
Which are indeed the dimensions of my images.
So then I tried to actually feed this data into a network to train, and modified the code accordingly:
def input_parser(image_path):
    image_data_string = tf.read_file(image_path)
    image_decoded = tf.image.decode_png(image_data_string, channels=3)
    image_float = tf.image.convert_image_dtype(image_decoded, dtype=tf.float32)
    return image_float

def train_model():
    image_paths = ['test_image1.png', 'test_image2.png', 'test_image3.png']
    dataset = tf.data.Dataset.from_tensor_slices(image_paths)
    dataset = dataset.map(map_func=input_parser)
    iterator = dataset.make_initializable_iterator()
    input_images = iterator.get_next()

    x = tf.layers.conv2d(inputs=input_images, filters=50, kernel_size=[5, 5], name='layer1')
    x = tf.layers.flatten(x, name='layer2')
    prediction = tf.layers.dense(inputs=x, units=4, name='layer3')

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.initializer)
        for i in range(3):
            p = sess.run(prediction)
            print(p)
This then gave me the following error message:
ValueError: Input 0 of layer layer1 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, None, 3]
I have two questions about this:
1) Why is my network receiving an input of shape [None, None, 3] when, as we have seen, the data read by the iterator has shape [64, 64, 3]?
2) Why isn't the shape of the input actually [1, 64, 64, 3], i.e. with 4 dimensions? I thought that the first dimension would be 1 because this is the batch size (I am not batching the data, so effectively this is a batch size of 1).
Thanks!
The shape is None in the spatial dimensions because in principle you could be loading images of any size. There is no guarantee that they will be 64x64, so TensorFlow uses None shapes to allow for inputs of any size. Since you know that the images will always be the same size, you can use a tensor's set_shape method to give this information. Just include the line image_float.set_shape((64, 64, 3)) in your parse function. Note that this modifies the tensor in place. There is even an example using images here.
You are not batching the data, so no batch axis is added at all. The elements of the dataset are simply images of shape (64, 64, 3) and these elements are returned one by one by the iterator. If you want batches of size 1 you should use dataset = dataset.batch(1). Now the elements of the dataset are image "batches" of shape (1, 64, 64, 3). Of course you could also use any other method to add an axis in front, such as tf.expand_dims.
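A minimal sketch combining both suggestions, using the TF 1.x APIs from the question (the file names and the 64x64 size are carried over as assumptions):

import tensorflow as tf

def input_parser(image_path):
    image_data_string = tf.read_file(image_path)
    image_decoded = tf.image.decode_png(image_data_string, channels=3)
    image_float = tf.image.convert_image_dtype(image_decoded, dtype=tf.float32)
    image_float.set_shape((64, 64, 3))  # declare the known static shape
    return image_float

def train_model():
    image_paths = ['test_image1.png', 'test_image2.png', 'test_image3.png']
    dataset = tf.data.Dataset.from_tensor_slices(image_paths)
    dataset = dataset.map(map_func=input_parser)
    dataset = dataset.batch(1)  # adds the leading batch axis
    iterator = dataset.make_initializable_iterator()
    input_images = iterator.get_next()  # static shape (None, 64, 64, 3), batches of 1 at runtime

    x = tf.layers.conv2d(inputs=input_images, filters=50, kernel_size=[5, 5], name='layer1')
    x = tf.layers.flatten(x, name='layer2')
    prediction = tf.layers.dense(inputs=x, units=4, name='layer3')

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.initializer)
        for i in range(3):
            print(sess.run(prediction).shape)  # (1, 4)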
This seems like a trivial question, but I've been unable to find the answer.
I have batched sequences of images of shape:
[batch_size, number_of_frames, frame_height, frame_width, number_of_channels]
and I would like to pass each frame through a few convolutional and pooling layers. However, TensorFlow's conv2d layer accepts 4D inputs of shape:
[batch_size, frame_height, frame_width, number_of_channels]
My first attempt was to use tf.map_fn over axis=1, but I discovered that this function does not propagate gradients.
My second attempt was to use tf.unstack over the first dimension and then use tf.while_loop. However, my batch_size and number_of_frames are dynamically determined (i.e. both are None), and tf.unstack raises {ValueError} Cannot infer num from shape (?, ?, 30, 30, 3) if num is unspecified. I tried specifying num=tf.shape(self.observations)[1], but this raises {TypeError} Expected int for argument 'num' not <tf.Tensor 'A2C/infer/strided_slice:0' shape=() dtype=int32>.
Since all the images (num_of_frames) are passed to the same convolutional model, you can stack the batch and frame dimensions together and do a normal convolution. This can be achieved with tf.reshape, as shown below:
import numpy as np
import tensorflow as tf

# input with shape [batch_size, number_of_frames, frame_height, frame_width, number_of_channels]
x = tf.placeholder(tf.float32, [None, None, 32, 32, 3])

# reshape for the conv input: merge the batch and frame dimensions
x_reshaped = tf.reshape(x, [-1, 32, 32, 3])

For an input of shape (10, 5, 32, 32, 3), x_reshaped will have shape (50, 32, 32, 3).

# define your conv network
y = tf.layers.conv2d(x_reshaped, 5, kernel_size=(3, 3), padding='SAME')
# y has shape (50, 32, 32, 5) since there are 5 filters

# get back the original 5D layout
out = tf.reshape(x, [-1, tf.shape(x)[1], 32, 32, 3])

The output shape is then the same as the input: (10, 5, 32, 32, 3).
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out, {x: np.random.normal(size=(10, 5, 32, 32, 3))}).shape)
    # (10, 5, 32, 32, 3)
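As a short follow-up sketch (not in the original answer): in practice it is the convolution output y, rather than the input x, that gets reshaped back to the 5D layout, and with 5 filters the restored channel dimension is 5:

y_out = tf.reshape(y, [-1, tf.shape(x)[1], 32, 32, 5])  # (batch, frames, 32, 32, 5)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y_out, {x: np.random.normal(size=(10, 5, 32, 32, 3))}).shape)
    # (10, 5, 32, 32, 5)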
I'm trying to apply transfer learning to MNIST using MobileNet weights in Keras. The Keras documentation for MobileNet is at https://keras.io/applications/#mobilenet
MobileNet accepts 224x224x3 as input, but MNIST is 28x28x1. I'm creating a Lambda layer which converts a 28x28x1 image into 224x224x3 and sends it as input to MobileNet. The following code causes:
TypeError: Input layers to a Model must be InputLayer objects. Received inputs: Tensor("lambda_2/ResizeNearestNeighbor:0", shape=(?, 224, 224, 3), dtype=float32). Input 0 (0-based) originates from layer type Lambda.
from keras.layers import Input, Lambda
from keras.applications.mobilenet import MobileNet
from keras import backend as K

height = 28
width = 28
input_image = Input(shape=(height, width, 1))

def resize_image_to_inception(x):
    x = K.repeat_elements(x, 3, axis=3)
    x = K.resize_images(x, 8, 8, data_format="channels_last")
    return x

input_image_ = Lambda(resize_image_to_inception, output_shape=(224, 224, 3))(input_image)
print(type(input_image_))
base_model = MobileNet(input_tensor=input_image_, weights='imagenet', include_top=False)
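A hedged sketch of one common workaround (not from the original post): instead of passing the Lambda output as input_tensor, build MobileNet on its own (224, 224, 3) input, call it as a layer on the Lambda output, and wrap everything in a Model whose input is the real InputLayer:

from keras.layers import Input, Lambda
from keras.models import Model
from keras.applications.mobilenet import MobileNet
from keras import backend as K

input_image = Input(shape=(28, 28, 1))

def resize_image_to_inception(x):
    x = K.repeat_elements(x, 3, axis=3)  # 1 channel -> 3 channels
    x = K.resize_images(x, 8, 8, data_format="channels_last")  # 28x28 -> 224x224
    return x

resized = Lambda(resize_image_to_inception, output_shape=(224, 224, 3))(input_image)

base_model = MobileNet(input_shape=(224, 224, 3), weights='imagenet', include_top=False)
features = base_model(resized)  # apply MobileNet as a layer on the resized tensor

model = Model(inputs=input_image, outputs=features)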
I am trying to build an LSTM network using an Estimator. My data looks like
X = [[1,2,3], [2,3,4], ... , [98,99,100]]
y = [2, 3, ... , 99]
I am using an Estimator:
regressor = learn.Estimator(model_fn=lstm_model,
                            params=model_params)
where the lstm_model function is
def lstm_model(features, targets, mode, params):
    def lstm_cells(layers):
        if isinstance(layers[0], dict):
            return [tf.nn.rnn_cell.BasicLSTMCell(layer['steps'], state_is_tuple=True) for layer in layers]
        return [tf.nn.rnn_cell.BasicLSTMCell(steps, state_is_tuple=True) for steps in layers]

    stacked_lstm = tf.nn.rnn_cell.MultiRNNCell(lstm_cells(params['rnn_layers']), state_is_tuple=True)
    output, layers = tf.nn.rnn(stacked_lstm, [features], dtype=tf.float32)
    return learn.models.linear_regression(output, targets)
and params are
model_params = {
    'steps': 1000,
    'learning_rate': 0.03,
    'batch_size': 24,
    'time_steps': 3,
    'rnn_layers': [{'steps': 3}],
    'dense_layers': [10, 10]
}
and then I do the fitting
regressor.fit(X, y)
The issue I am facing is that
output, layers = tf.nn.rnn(stacked_lstm, [features], dtype=tf.float32)
requires a sequence, but I am not sure how to split my features into a list of tensors. The shape of features inside the lstm_model function is (?, 3).
I have two questions: how do I do the training in batches, and how do I split 'features' so that
output, layers = tf.nn.rnn(stacked_lstm, [features], dtype=tf.float32)
doesn't throw an error? The error I am getting is:
raise TypeError("%s that don't all match." % prefix)
TypeError: Tensors in list passed to 'values' of 'Concat' Op have types [float64, float32] that don't all match.
I am using TensorFlow 0.12.
I had to set the shape of features to (batch_size, time_steps, 1) or (None, time_steps, 1) and then unstack the features before feeding them into the RNN. Unstack along the "time_steps" axis so that you end up with a list of length time_steps, where each tensor has shape (None, 1) or (batch_size, 1).
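A minimal sketch of that change, assuming the TF 0.12 / tf.contrib.learn-era APIs from the question (names such as tf.unstack and tf.nn.rnn shifted around TF 1.0) and assuming X is fed with shape (num_samples, 3, 1):

def lstm_model(features, targets, mode, params):
    time_steps = params['time_steps']

    def lstm_cells(layers):
        if isinstance(layers[0], dict):
            return [tf.nn.rnn_cell.BasicLSTMCell(layer['steps'], state_is_tuple=True)
                    for layer in layers]
        return [tf.nn.rnn_cell.BasicLSTMCell(steps, state_is_tuple=True)
                for steps in layers]

    stacked_lstm = tf.nn.rnn_cell.MultiRNNCell(lstm_cells(params['rnn_layers']),
                                               state_is_tuple=True)

    # features arrives with shape (batch_size, time_steps, 1); cast to float32 so the
    # RNN inputs share one dtype (the float64/float32 Concat error from the question).
    features = tf.cast(features, tf.float32)

    # Unstack along the time axis: a list of time_steps tensors, each (batch_size, 1).
    x_list = tf.unstack(features, num=time_steps, axis=1)

    output, state = tf.nn.rnn(stacked_lstm, x_list, dtype=tf.float32)

    # Regress on the output of the last time step.
    return learn.models.linear_regression(output[-1], targets)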