Suppose I have a typical CNN model in TensorFlow.
def inference(images):
    # images: 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    conv_1 = conv_layer(images, 64, 7, 2)
    pool_2 = pooling_layer(conv_1, 2, 2)
    conv_3 = conv_layer(pool_2, 192, 3, 1)
    pool_4 = pooling_layer(conv_3, 2, 2)
    ...
    conv_28 = conv_layer(conv_27, 1024, 3, 1)
    fc_29 = fc_layer(conv_28, 512)
    fc_30 = fc_layer(fc_29, 4096)
    return fc_30
A typical forward pass could be done like this:
images = input()
logits = inference(images)
output = sess.run([logits])
Now suppose my input function returns a pair of tensors, left_images and right_images (from a stereo camera). I want to run right_images up to conv_28 and left_images up to fc_30, so something like this:
images = tf.placeholder(tf.float32, [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3])
left_images, right_images = input()
conv_28, fc_30 = inference(images)
right_images_val = sess.run([conv_28], feed_dict={images: right_images})
left_images_val = sess.run([fc_30], feed_dict={images: left_images})
This however fails with
TypeError: The value of a feed cannot be a tf.Tensor object.
Acceptable feed values include Python scalars, strings, lists, or
numpy ndarrays.
I want to avoid having to evaluate the inputs and then feed them back into TensorFlow. Calling inference twice with different arguments will also not work, because functions like conv_layer create variables.
Is it possible to rerun the network with a different input tensor?
TensorFlow shared variables are what you are looking for. Replace all calls to tf.Variable with tf.get_variable() in inference. Then you can run:
images_left, images_right = input()

with tf.variable_scope("logits") as scope:
    logits_left = inference(images_left)
    scope.reuse_variables()
    logits_right = inference(images_right)

output = sess.run([logits_left, logits_right])
Variables are not created again in the second call to inference, so the left and right images are processed using the same weights. Also check out my TensorFlow CNN training toolkit (look at the training code); I use this technique to run validation and training forward passes in the same TensorFlow graph.
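For reference, a minimal sketch of what the replacement looks like inside such a helper. The question never shows conv_layer, so the explicit name argument and the initializers below are assumptions, not the asker's actual code:

def conv_layer(inputs, out_channels, kernel_size, stride, name):
    # 2-D convolution + ReLU whose parameters are created with tf.get_variable,
    # so a reusing variable scope picks up the same weights on the second call.
    in_channels = inputs.get_shape().as_list()[-1]
    with tf.variable_scope(name):
        kernel = tf.get_variable(
            'weights', [kernel_size, kernel_size, in_channels, out_channels],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable('biases', [out_channels],
                                 initializer=tf.zeros_initializer())
    conv = tf.nn.conv2d(inputs, kernel, strides=[1, stride, stride, 1], padding='SAME')
    return tf.nn.relu(tf.nn.bias_add(conv, biases))

Each layer needs a distinct scope name (e.g. conv_layer(images, 64, 7, 2, name='conv_1')) so that its variables are unique inside "logits" but shared across the two inference calls.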
Related
I am trying to run a bit of sample code from GitHub in order to learn working with TensorFlow 2 and the YOLO framework. My laptop has an M1000M graphics card, and I installed the CUDA platform from NVIDIA from here.
The code in question is this bit:
tf.compat.v1.disable_eager_execution()

_MODEL_SIZE = (416, 416)
_CLASS_NAMES_FILE = './data/labels/coco.names'
_MAX_OUTPUT_SIZE = 20


def main(type, iou_threshold, confidence_threshold, input_names):
    class_names = load_class_names(_CLASS_NAMES_FILE)
    n_classes = len(class_names)

    model = Yolo_v3(n_classes=n_classes, model_size=_MODEL_SIZE,
                    max_output_size=_MAX_OUTPUT_SIZE,
                    iou_threshold=iou_threshold,
                    confidence_threshold=confidence_threshold)

    if type == 'images':
        batch_size = len(input_names)
        batch = load_images(input_names, model_size=_MODEL_SIZE)
        inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3])
        detections = model(inputs, training=False)
        saver = tf.compat.v1.train.Saver(tf.compat.v1.global_variables(scope='yolo_v3_model'))

        with tf.compat.v1.Session() as sess:
            saver.restore(sess, './weights/model.ckpt')
            detection_result = sess.run(detections, feed_dict={inputs: batch})

        draw_boxes(input_names, detection_result, class_names, _MODEL_SIZE)

        print('Detections have been saved successfully.')
While executing this (I am also wondering why running detection.py doesn't use the GPU in the first place), I get the error message:
File "C:\SDKs etc\Python 3.8\lib\site-packages\tensorflow\python\client\session.py", line 1451, in _call_tf_sessionrun
return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.UnimplementedError: The Conv2D op currently only supports the NHWC tensor format on the CPU. The op was given the format: NCHW
[[{{node yolo_v3_model/conv2d/Conv2D}}]]
Full log: see here.
If I am understanding this correctly, the format of inputs = tf.compat.v1.placeholder(tf.float32, [batch_size, *_MODEL_SIZE, 3]) is already NHWC (_MODEL_SIZE is a tuple of two numbers), and I don't know what I need to change in the code to get this running on the CPU.
Yes, you are. But look here:
def __init__(self, n_classes, model_size, max_output_size, iou_threshold,
             confidence_threshold, data_format=None):
    """Creates the model.

    Args:
        n_classes: Number of class labels.
        model_size: The input size of the model.
        max_output_size: Max number of boxes to be selected for each class.
        iou_threshold: Threshold for the IOU.
        confidence_threshold: Threshold for the confidence score.
        data_format: The input format.

    Returns:
        None.
    """
    if not data_format:
        if tf.test.is_built_with_cuda():
            data_format = 'channels_first'
        else:
            data_format = 'channels_last'
And later:
def __call__(self, inputs, training):
    """Add operations to detect boxes for a batch of input images.

    Args:
        inputs: A Tensor representing a batch of input images.
        training: A boolean, whether to use in training or inference mode.

    Returns:
        A list containing class-to-boxes dictionaries
        for each sample in the batch.
    """
    with tf.compat.v1.variable_scope('yolo_v3_model'):
        if self.data_format == 'channels_first':
            inputs = tf.transpose(inputs, [0, 3, 1, 2])
Solution: check that tf.test.is_built_with_cuda() works as expected; if not, set the data format manually when creating the model:
model = Yolo_v3(n_classes=n_classes, model_size=_MODEL_SIZE,
                max_output_size=_MAX_OUTPUT_SIZE,
                iou_threshold=iou_threshold,
                confidence_threshold=confidence_threshold,
                data_format='channels_last')
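If you want to confirm what TensorFlow actually sees on your machine before building the model, a small check like the following can help (a sketch, assuming TF 2.x as in the question):

import tensorflow as tf

# is_built_with_cuda() only reports how the binary was compiled; it can return
# True even when no usable GPU is available at runtime, in which case the ops
# still run on the CPU and the NCHW layout is rejected.
print('Built with CUDA:', tf.test.is_built_with_cuda())
print('Visible GPUs:', tf.config.list_physical_devices('GPU'))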
I am trying to use tensor variables as weights in a Keras layer.
I know that I can use NumPy arrays instead, but the reason I want to feed tensors is that I want my weight matrices to be of type SparseTensor.
This is a small example that I have coded so far:
def model_keras(seed, new_hidden_size_list=None):
    number_of_layers = 1
    hidden_size = 512
    hidden_size_list = [hidden_size] * number_of_layers
    input_size = 784
    output_size = 10

    if new_hidden_size_list is not None:
        hidden_size_list = new_hidden_size_list

    weight_input = tf.Variable(tf.random.normal([784, 512], mean=0.0, stddev=1.0))
    bias_input = tf.Variable(tf.random.normal([512], mean=0.0, stddev=1.0))
    weight_output = tf.Variable(tf.random.normal([512, 10], mean=0.0, stddev=1.0))

    # This gives me an error when trying to use it as kernel_initializer and
    # bias_initializer in the Keras model
    weight_initializer_input = tf.initializers.variables([weight_input])
    bias_initializer_input = tf.initializers.variables([bias_input])
    weight_initializer_output = tf.initializers.variables([weight_output])

    # This works fine
    # weight_initializer_input = tf.initializers.lecun_uniform(seed=None)
    # bias_initializer_input = tf.initializers.lecun_uniform(seed=None)
    # weight_initializer_output = tf.initializers.lecun_uniform(seed=None)

    print(weight_initializer_input, bias_initializer_input, weight_initializer_output)

    model = keras.models.Sequential()
    for index in range(number_of_layers):
        if index == 0:
            # input layer
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_input,
                                         bias_initializer=bias_initializer_input,
                                         input_shape=(input_size,)))
        else:
            model.add(keras.layers.Dense(hidden_size_list[index], activation=nn.selu, use_bias=True,
                                         kernel_initializer=weight_initializer_hidden,
                                         bias_initializer=bias_initializer_hidden))

    # output layer
    model.add(keras.layers.Dense(output_size, use_bias=False, kernel_initializer=weight_initializer_output))
    model.add(keras.layers.Activation(nn.softmax))

    return model
I am using TensorFlow 1.15.
Any idea how one can use custom (user-defined) tensor variables as initializers instead of preset schemes (e.g. Glorot, truncated normal, etc.)? Another approach I could take is to explicitly define the computations instead of using keras.layers.
Many thanks.
Your code works after enabling eager execution.
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
Add this at the top of your file.
See this for working code.
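If you prefer not to go through tf.initializers.variables at all, one alternative sketch (my own suggestion, not part of the answer above) is to wrap a pre-built tensor in a plain callable, which Keras accepts anywhere an initializer is expected. This assumes TF 1.15 with eager execution enabled as above:

import tensorflow as tf
from tensorflow import keras

tf.compat.v1.enable_eager_execution()

# Pre-built weight values; a SparseTensor could be densified first with
# tf.sparse.to_dense before being returned by the initializer.
weight_value = tf.random.normal([784, 512], mean=0.0, stddev=1.0)

def weight_initializer_input(shape, dtype=None):
    # Keras calls this with the layer's kernel shape; we return the pre-built
    # tensor, so its shape must match [input_size, hidden_size].
    return weight_value

layer = keras.layers.Dense(512, kernel_initializer=weight_initializer_input,
                           input_shape=(784,))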
I want to use a feature extractor (such as ResNet101) and add layers after that which use the output of the feature extractor layer. However, I can't seem to figure out how. I have only found solutions online where an entire network is used without adding additional layers.
I am inexperienced with TensorFlow.
In the code below you can see what I have tried. I can run the code properly without the additional convolutional layer; however, my goal is to add more layers after the ResNet.
With this attempt at adding the extra conv layer, this type error is returned:
TypeError: Expected float32, got OrderedDict([('resnet_v1_101/conv1', ...
Once I have added more layers, I would like to start training on a very small test set to see if my model can overfit.
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.slim.python.slim.nets import resnet_v1
import matplotlib.pyplot as plt

numclasses = 17

from google.colab import drive
drive.mount('/content/gdrive')


def decode_text(filename):
    img = tf.io.decode_jpeg(tf.io.read_file(filename))
    img = tf.image.resize_bilinear(tf.expand_dims(img, 0), [224, 224])
    img = tf.squeeze(img, 0)
    img.set_shape((None, None, 3))
    return img


dataset = tf.data.TextLineDataset(tf.cast('gdrive/My Drive/5LSM0collab/filenames.txt', tf.string))
dataset = dataset.map(decode_text)
dataset = dataset.batch(2, drop_remainder=True)
img_1 = dataset.make_one_shot_iterator().get_next()

net = resnet_v1.resnet_v1_101(img_1, 2048, is_training=False, global_pool=False, output_stride=8)
net = slim.conv2d(net, numclasses, 1)

sess = tf.Session()
global_init = tf.global_variables_initializer()
local_init = tf.local_variables_initializer()
sess.run(global_init)
sess.run(local_init)

img_out, conv_out = sess.run((img_1, net))
resnet_v1.resnet_v1_101 does not return just net, but instead returns a tuple net, end_points. The second element is a dictionary, which is presumably why you are getting this particular error message.
From the documentation of this function:
Returns:
net: A rank-4 tensor of size [batch, height_out, width_out,
channels_out]. If global_pool is False,
then height_out and width_out are reduced by a
factor of output_stride compared to the respective height_in and width_in,
else both height_out and width_out equal one. If num_classes is 0 or None,
then net is the output of the last ResNet block, potentially after global
average pooling. If num_classes a non-zero integer, net contains the
pre-softmax activations.
end_points: A dictionary from components of the network to the corresponding
activation.
So you can write for example:
net, _ = resnet_v1.resnet_v1_101(img_1, 2048, is_training=False, global_pool=False, output_stride=8)
net = slim.conv2d(net, numclasses, 1)
You can also choose an intermediate layer, e.g.:
_, end_points = resnet_v1.resnet_v1_101(img_1, 2048, is_training=False, global_pool=False, output_stride=8)
net = slim.conv2d(end_points["main_Scope/resnet_v1_101/block3"], numclasses, 1)
(You can look into end_points to find the names of the endpoints; your scope name will be different from main_Scope.)
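If you are not sure which keys are available, a quick sketch like this (reusing the end_points dictionary from the call above) prints them:

# The exact keys depend on the enclosing variable scope, so it is safer to
# inspect them than to hard-code the names.
for name, tensor in end_points.items():
    print(name, tensor.shape)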
I would like to create a Seq2Seq model to forecast time series data. I am using the InferenceHelper and I am struggling with the sample_fn parameter. I would like to pass the decoder output of each cell through a dense layer in order to generate a single output at each time step. So I'm providing a function that does this to the sample_fn parameter.
Later on I would like to concatenate the rnn cell outputs with other non-time-series features and build more dense layers on top of it.
The network does fine at training time but not during inference. I think this is caused by the fact that I'm not sharing the same dense layer between training and inference time.
I tried to set the reuse parameter and used a with tf.variable_scope() environment. However, the sample_fn is already called within a specific scope in dynamic_decode and so I fail to use the same scope as I did during training.
The relevant part of my code looks as follows:
The placeholders:
inputs = tf.placeholder(shape=(None, 100, 1), dtype=tf.float32, name='inputs')
input_lengths = tf.placeholder(shape=(None,), dtype=tf.int32, name='input_lengths')
targets = tf.placeholder(shape=(None, 100), dtype=tf.float32, name='targets')
target_lengths = tf.placeholder(shape=(None,), dtype=tf.int32, name='target_lengths')
The encoder:
encoder_cell = tf.nn.rnn_cell.MultiRNNCell([tf.contrib.rnn.GRUCell(num_units=16, name='encoder_cell_0')])
decoder_cell = tf.nn.rnn_cell.MultiRNNCell([tf.contrib.rnn.GRUCell(num_units=16, name='decoder_cell_0')])

_, final_encoder_states = tf.nn.dynamic_rnn(cell=encoder_cell, inputs=inputs,
                                            sequence_length=input_lengths, dtype=tf.float32)
The decoder (training):
start_tokens = tf.fill([tf.shape(inputs)[0]], start_token)
start_tokens = tf.cast(tf.expand_dims(start_tokens, 1), dtype=tf.float32)
targets_as_inputs = tf.concat([start_tokens, targets], axis=1)
targets_as_inputs = tf.reshape(targets_as_inputs, (-1, targets_as_inputs.shape[1], 1))
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=targets_as_inputs, sequence_length=target_lengths, name='training_helper')
training_decoder = tf.contrib.seq2seq.BasicDecoder(cell=decoder_cell, helper=training_helper, initial_state=final_encoder_states)
train_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=training_decoder, maximum_iterations=max_target_sequence_length, impute_finished=True)
train_predictions = train_outputs.rnn_output
train_predictions = tf.layers.dense(train_predictions, 1, activation=None, name='output_dense_layer')
The decoder (inference). The incorrect part:
def sample_fn(outputs):
    return tf.layers.dense(outputs, 1, activation=None,
                           name='output_dense_layer', reuse=tf.AUTO_REUSE)
infer_helper = tf.contrib.seq2seq.InferenceHelper(sample_fn=sample_fn, sample_shape=(1),
sample_dtype=tf.float32, start_inputs=start_tokens, end_fn=lambda sample_ids: False, next_inputs_fn=None)
infer_decoder = tf.contrib.seq2seq.BasicDecoder(cell=decoder_cell, helper=infer_helper, initial_state=final_encoder_states)
infer_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=infer_decoder, maximum_iterations=max_target_sequence_length, impute_finished=True)
infer_predictions = infer_outputs.rnn_output
infer_predictions = sample_fn(infer_predictions)
There is a similar question: How to use tensorflow seq2seq without embeddings?
The author uses sample_fn=lambda outputs: outputs, but this raises a ValueError in my case because the dimensions don't match. How could they match with multiple cells? sample_fn should return a single value.
For now, I have solved my problem by creating my own dynamic_decode function. I copied everything from tf.contrib.seq2seq.dynamic_decode except the line
with variable_scope.variable_scope(scope, "decoder", reuse=reuse) as varscope:
as well as the related if condition on varscope and another if condition that checks the decoder class.
Not a nice solution but good enough for now.
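As an aside, a hedged alternative sketch that avoids copying dynamic_decode (not the approach described above): build the projection once as a tf.layers.Dense object, so the training graph and the inference sample_fn call the same instance and therefore share the same variables:

# Create the layer object once; its variables are built on the first call
# (here, on the training branch) and reused on every later call.
output_projection = tf.layers.Dense(1, activation=None, name='output_dense_layer')

# Training side
train_predictions = output_projection(train_outputs.rnn_output)

# Inference side
def sample_fn(outputs):
    return output_projection(outputs)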
I'm trying to implement a minimal toy RNN example in TensorFlow.
The goal is to learn a mapping from the input data to the target data, similar to this wonderful concise example in theanets.
Update: We're getting there. The only part remaining is to make it converge (and less convoluted). Could someone help to turn the following into running code or provide a simple example?
import tensorflow as tf
from tensorflow.python.ops import rnn_cell

init_scale = 0.1
num_steps = 7
num_units = 7
input_data = [1, 2, 3, 4, 5, 6, 7]
target = [2, 3, 4, 5, 6, 7, 7]
# target = [1, 1, 1, 1, 1, 1, 1]  # converges, but not what we want
batch_size = 1

with tf.Graph().as_default(), tf.Session() as session:
    # Placeholder for the inputs and target of the net
    # inputs = tf.placeholder(tf.int32, [batch_size, num_steps])
    input1 = tf.placeholder(tf.float32, [batch_size, 1])
    inputs = [input1 for _ in range(num_steps)]
    outputs = tf.placeholder(tf.float32, [batch_size, num_steps])

    gru = rnn_cell.GRUCell(num_units)
    initial_state = state = tf.zeros([batch_size, num_units])
    loss = tf.constant(0.0)

    # setup model: unroll
    for time_step in range(num_steps):
        if time_step > 0:
            tf.get_variable_scope().reuse_variables()
        step_ = inputs[time_step]
        output, state = gru(step_, state)
        loss += tf.reduce_sum(abs(output - target))  # all norms work equally well? NO!
    final_state = state

    optimizer = tf.train.AdamOptimizer(0.1)  # CONVERGEs sooo much better
    train = optimizer.minimize(loss)  # let the optimizer train

    numpy_state = initial_state.eval()
    session.run(tf.initialize_all_variables())
    for epoch in range(10):  # now
        for i in range(7):  # feed fake 2D matrix of 1 byte at a time ;)
            feed_dict = {initial_state: numpy_state, input1: [[input_data[i]]]}  # no
            numpy_state, current_loss, _ = session.run([final_state, loss, train], feed_dict=feed_dict)
            print(current_loss)  # hopefully going down, always stuck at 189, why!?
I think there are a few problems with your code, but the idea is right.
The main issue is that you're using a single tensor for inputs and outputs, as in:
inputs = tf.placeholder(tf.int32, [batch_size, num_steps]).
In TensorFlow the RNN functions take a list of tensors (because num_steps can vary in some models). So you should construct inputs like this:
inputs = [tf.placeholder(tf.int32, [batch_size, 1]) for _ in range(num_steps)]
Then you need to take care of the fact that your inputs are int32s, but an RNN cell works on float vectors; that's what embedding_lookup is for.
And finally you'll need to adapt your feed to put in the input list.
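A minimal sketch of that feed, using the names from the question (batch_size = 1, num_steps = 7):

# One placeholder per time step; each is fed a [batch_size, 1] value.
inputs = [tf.placeholder(tf.int32, [batch_size, 1]) for _ in range(num_steps)]

feed_dict = {placeholder: [[value]]
             for placeholder, value in zip(inputs, input_data)}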
I think the ptb tutorial is a reasonable place to look, but if you want an even more minimal example of an out-of-the-box RNN you can take a look at some of the rnn unit tests, e.g., here.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/rnn_test.py#L164