Get output layer Tensorflow 1.14 - tensorflow

I want to create a multi-modal machine learning model using a T-LSTM for time-variant data.
In order to concatenate the time-variant with the time-invariant data, I need to get the output vector of the T-LSTM.
I'm using this T-LSTM model: https://github.com/illidanlab/T-LSTM
I updated the repo to be compatible with TensorFlow 1.14 and Python 3.7.12.
I assume you can extract the output vector in the get_output function:
def get_output(self, state):
    output = tf.nn.relu(tf.matmul(state, self.Wo) + self.bo)
    output = tf.nn.dropout(output, self.keep_prob)
    output = tf.matmul(output, self.W_softmax) + self.b_softmax
    return output
If I print the output I get a tensor:
output = tf.nn.relu(tf.matmul(state, self.Wo) + self.bo)
print(output)
-> Tensor("map_5/while/Relu:0", shape=(?, 64), dtype=float32)
print(output[0])
-> Tensor("map_5/while/strided_slice:0", shape=(64,), dtype=float32)
print(output[0, 0])
-> Tensor("map_5/while/strided_slice_1:0", shape=(), dtype=float32)
The 64-dimensional tensor seems to be the output I'm looking for; how can I access it?
Solution: tf.print()
Note that inside a session it must be wired in as a control dependency:
def get_output(self, state):
    output = tf.nn.relu(tf.matmul(state, self.Wo) + self.bo)
    print_op = tf.print(
        output,
        summarize=-1,
        output_stream="file://C:/Users/my_path/T-LSTM-master/features/foo.out")
    with tf.control_dependencies([print_op]):
        output = tf.nn.dropout(output, self.keep_prob)
        output = tf.matmul(output, self.W_softmax) + self.b_softmax
    return output
This example saves the features directly to a file. Pass summarize=-1 to save/print the entire tensor.

If you just want to print your output tensor, most of the time tf.print(output) would give you the required result.
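As a self-contained check of the control-dependency pattern (a toy graph and an arbitrary file path, not the T-LSTM code), the same trick can be verified in isolation:

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
hidden = tf.nn.relu(x)
# The print op only fires when something downstream of the
# control dependency is actually executed.
print_op = tf.print(hidden, summarize=-1,
                    output_stream='file:///tmp/features.out')
with tf.control_dependencies([print_op]):
    out = tf.identity(hidden)

with tf.Session() as sess:
    sess.run(out)  # writes the full tensor values to /tmp/features.out

After the run, /tmp/features.out contains the complete tensor values, which can then be parsed for the concatenation step.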

Related

convert tf.dense Tensor to tf.one_hot Tensor on Graph execution Tensorflow

TF version: 2.11
I am trying to train a simple two-input classifier with a TFRecords tf.data pipeline, but I cannot manage to convert a dense Tensor containing only a scalar into a tf.one_hot vector.
# get the paths of all record datasets
training_names = [record_path + '/' + rec for rec in os.listdir(record_path) if rec.startswith('train')]
# load into a tf dataset
train_dataset = tf.data.TFRecordDataset(training_names[1])
train_dataset = train_dataset.map(return_xy)
The mapping function:
def return_xy(example_proto):
    # parse example
    sample = parse_function(example_proto)
    # decode image 1
    encoded_image1 = sample['image/encoded_1']
    decoded_image1 = decode_image(encoded_image1)
    # decode image 2
    encoded_image2 = sample['image/encoded_2']
    decoded_image2 = decode_image(encoded_image2)
    # decode label
    print(f"image/object/class/{level}: {sample['image/object/class/' + level]}")
    class_label = tf.sparse.to_dense(sample['image/object/class/' + level])
    print(f'type of class label :{type(class_label)}')
    print(class_label)
    # conversion to one-hot with depth 26 -> how can I extract only the value or convert directly to tf.one_hot??
    label_onehot = tf.one_hot(class_label, 26)
    # resizing images
    input_left = tf.image.resize(decoded_image1, [416, 416])
    input_right = tf.image.resize(decoded_image2, [416, 416])
    return {'input_3res1': input_left, 'input_5res2': input_right}, label_onehot
output:
image/object/class/'+level: SparseTensor(indices=Tensor("ParseSingleExample/ParseExample/ParseExampleV2:14", shape=(None, 1), dtype=int64), values=Tensor("ParseSingleExample/ParseExample/ParseExampleV2:31", shape=(None,), dtype=int64), dense_shape=Tensor("ParseSingleExample/ParseExample/ParseExampleV2:48", shape=(1,), dtype=int64))
type of class label :<class 'tensorflow.python.framework.ops.Tensor'>
Tensor("SparseToDense:0", shape=(None,), dtype=int64)
However, I am sure that the label is in this Tensor, because when I run it eagerly:
raw_dataset = tf.data.TFRecordDataset([rec_file])
parsed_dataset = raw_dataset.map(parse_function)  # only parsing
for sample in parsed_dataset:
    class_label = tf.sparse.to_dense(sample['image/object/class/label_level3'])[0]
    print(f'type of class label :{type(class_label)}')
    print(f'labels from labelmap :{class_label}')
I get output:
type of class label :<class 'tensorflow.python.framework.ops.EagerTensor'>
labels from labelmap :7
If I just choose a random number for the label and pass it to tf.one_hot(randint, 26), the model begins to train (obviously with nonsensical results).
So the question is: how can I convert the
Tensor("SparseToDense:0", shape=(None,), dtype=int64)
to a
Tensor("one_hot:0", shape=(26,), dtype=float32)
What I tried so far
In the call data.map(parse_xy) I tried to just call .numpy() on the tf tensors, but that didn't work; it only works for eager tensors.
In my understanding I cannot use eager execution, because everything in the parse_xy function gets executed on the whole graph:
I've already tried to enable eager execution -> failed
https://www.tensorflow.org/api_docs/python/tf/config/run_functions_eagerly
Note: This flag has no effect on functions passed into tf.data transformations as arguments.
tf.data functions are never executed eagerly and are always executed as a compiled Tensorflow Graph.
I've also tried to use tf.py_function, but this only returns another tf.Tensor with an unknown shape:
def get_onehot(tensor):
    class_label = tensor[0]
    return tf.one_hot(class_label, 26)
and added this line in parse_xy:
label_onehot = tf.py_function(func=get_onehot, inp=[class_label], Tout=tf.int64)
but there I always get an unknown shape, which I cannot simply fix with .set_shape().
I was able to solve the issue by using only TensorFlow functions.
tf.gather allows you to index a TensorFlow tensor:
class_label_gather = tf.sparse.to_dense(sample['image/object/class/' + level])
class_indices = tf.gather(tf.cast(class_label_gather, dtype=tf.int32), 0)
label_onehot = tf.one_hot(class_indices, 26)
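As a self-contained illustration of the pattern (made-up sparse label, depth 26 as above): tf.gather pulls out the scalar index with TensorFlow ops only, so it is valid inside a graph-compiled map function where .numpy() is not available:

import tensorflow as tf

sparse_label = tf.sparse.SparseTensor(indices=[[0]], values=[7], dense_shape=[1])
dense_label = tf.sparse.to_dense(sparse_label)   # shape (1,), dtype int32
class_index = tf.gather(dense_label, 0)          # scalar, no .numpy() needed
label_onehot = tf.one_hot(class_index, 26)       # shape (26,), dtype float32
print(label_onehot.shape)                        # (26,)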

How to get vocabulary size in tensorflow_transform before apply_vocabulary?

Also posted the question at https://github.com/tensorflow/transform/issues/261
I am using tft in TFX and need to transform string-list class labels into multi-hot indicators inside preprocessing_fn. Essentially:
vocab = tft.vocabulary(inputs['label'])
outputs['label'] = tf.cast(
    tf.sparse.to_indicator(
        tft.apply_vocabulary(inputs['label'], vocab),
        vocab_size=VOCAB_SIZE,
    ),
    "int64",
)
I am trying to derive VOCAB_SIZE from the result of vocab, but couldn't find a way to satisfy both the deferred execution and the known-shape requirement. The closest I got (below) wouldn't pass the saved-model export, as the shape for label is unknown.
def _make_table_initializer(filename_tensor):
    return tf.lookup.TextFileInitializer(
        filename=filename_tensor,
        key_dtype=tf.string,
        key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
        value_dtype=tf.int64,
        value_index=tf.lookup.TextFileIndex.LINE_NUMBER,
    )

def _vocab_size(deferred_vocab_filename_tensor):
    initializer = _make_table_initializer(deferred_vocab_filename_tensor)
    table = tf.lookup.StaticHashTable(initializer, default_value=-1)
    table_size = table.size()
    return table_size

deferred_vocab_and_filename = tft.vocabulary(inputs['label'])
vocab_applied = tft.apply_vocabulary(inputs['label'], deferred_vocab_and_filename)
vocab_size = _vocab_size(deferred_vocab_and_filename)
outputs['label'] = tf.cast(
    tf.sparse.to_indicator(vocab_applied, vocab_size=vocab_size),
    "int64",
)
Got
ValueError: Feature label (Tensor("Identity_3:0", shape=(None, None), dtype=int64)) had invalid shape (None, None) for FixedLenFeature: apart from the batch dimension, all dimensions must have known size [while running 'Analyze/CreateSavedModel[tf_v2_only]/CreateSavedModel']
Any idea how to achieve this?
As per this comment in the GitHub issue, you can use tft.experimental.get_vocabulary_size_by_name (link) to achieve the same.
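A minimal sketch of how that could look inside preprocessing_fn. This assumes a tensorflow_transform version that ships tft.experimental.get_vocabulary_size_by_name; the vocabulary must be given an explicit vocab_filename ('label_vocab' here is an arbitrary name) so it can be looked up:

import tensorflow as tf
import tensorflow_transform as tft

def preprocessing_fn(inputs):
    outputs = {}
    # Name the vocabulary explicitly so its size can be retrieved by name.
    vocab = tft.vocabulary(inputs['label'], vocab_filename='label_vocab')
    vocab_size = tft.experimental.get_vocabulary_size_by_name('label_vocab')
    outputs['label'] = tf.cast(
        tf.sparse.to_indicator(
            tft.apply_vocabulary(inputs['label'], vocab),
            vocab_size=vocab_size,
        ),
        'int64',
    )
    return outputs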

How to use the tf.case api of TensorFlow correctly?

I want to design the following function for expanding any 1D/2D/3D matrix to a 4D matrix.
import tensorflow as tf

def inputs_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3),
                    tf.equal(_ranks, 2): lambda: tf.expand_dims(tf.expand_dims(inputs, 0), 3),
                    tf.equal(_ranks, 1): lambda: tf.expand_dims(tf.expand_dims(tf.expand_dims(inputs, 0), 0), 3)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = inputs_2_4D(mat_1d)
        print(result.eval())
The function, however, does not run reliably. It only outputs a 4D matrix when the mat_3d and mat_4d tensors are passed into it; errors are raised when a 1D or 2D matrix is passed to the function.
When passing mat_3d or mat_4d into inputs_2_4D(), they are expanded to a 4D matrix or returned unchanged:
mat_3d -----> [[[[1]
[1]]]]
mat_4d -----> [[[[1 1]]]]
When the mat_1d or mat_2d matrices are passed into inputs_2_4D, the error is:
ValueError: dim 3 not in the interval [-2, 1]. for 'case/cond/ExpandDims' (op: 'ExpandDims') with input shapes: [2], [] and with computed input tensors: input[1] = <3>.
I tested another, similar function before, and that function runs correctly.
import tensorflow as tf

def test_2_4D(inputs):
    _ranks = tf.rank(inputs)
    return tf.case({tf.equal(_ranks, 3): lambda: tf.constant(3),
                    tf.equal(_ranks, 2): lambda: tf.constant(2),
                    tf.equal(_ranks, 1): lambda: tf.constant(1)},
                   default=lambda: tf.identity(inputs))

def run():
    with tf.Session() as sess:
        mat_1d = tf.constant([1, 1])
        mat_2d = tf.constant([[1, 1]])
        mat_3d = tf.constant([[[1, 1]]])
        mat_4d = tf.constant([[[[1, 1]]]])
        result = test_2_4D(mat_3d)
        print(result.eval())
This function correctly outputs the corresponding results for all of the matrices.
test_2_4D() RESULTS:
mat_1d -----> 1
mat_2d -----> 2
mat_3d -----> 3
mat_4d -----> [[[[1 1]]]]
I don't understand why the correct branch in inputs_2_4D() cannot be found, given that the tf.equal() in each branch is evaluated. It seems that the 1st and 2nd branches (for ranks 3 and 2) are still built even when the input matrix is mat_1d or mat_2d, so the program crashes. Please help me analyze this problem!
I think I worked out what the problem is here. Turns out all condition/function pairs are evaluated. This can be revealed by giving the ops different names. The problem is that if your input is, say, rank 2, Tensorflow seems to still evaluate tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3). This leads to a crash because it cannot expand dim 3 for a rank-2 tensor (the maximum allowed value is 2).
This actually makes sense since with tf.case you're basically saying "I don't know which of these cases is going to be true at runtime, so check which one is appropriate and execute the corresponding function". However this means that Tensorflow needs to prepare execution paths for all possible cases, which in this case leads to invalid computations (trying to expand invalid dimensions).
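A quick way to reproduce this observation (a toy probe, not the original code) is to name the op in the rank-3 branch; the ValueError raised while tf.case builds the graph then points at that branch even though its condition is false for a rank-2 input:

import tensorflow as tf

inputs = tf.constant([[1, 1]])  # rank 2
_ranks = tf.rank(inputs)
# tf.case constructs every branch; building 'expand3' fails because
# axis 3 is invalid for a rank-2 tensor, even though its condition is false.
result = tf.case(
    {tf.equal(_ranks, 3): lambda: tf.expand_dims(inputs, 3, name='expand3'),
     tf.equal(_ranks, 2): lambda: tf.expand_dims(tf.expand_dims(inputs, 0), 3)},
    default=lambda: tf.identity(inputs))
# -> ValueError mentioning '.../expand3' (op: 'ExpandDims')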
At this point it would be nice to know a little more about your problem, i.e. why exactly you need that function. If you have different inputs and you simply want to bring them all to 4D, but each input always has the same dimensionality, consider simply using Python if-statements. Example:
inputs3d = tf.constant([[[1,1]]])  # this is always 3D
inputs2d = tf.constant([[1,1]])    # this is always 2D
...

def inputs_2_4D(inputs):
    _rank = len(inputs.shape.as_list())
    if _rank == 3:
        return tf.expand_dims(inputs, 3)
    elif _rank == 2:
        return tf.expand_dims(tf.expand_dims(inputs, 0), 3)
    ...
This will check the input rank while the graph is being built (not at runtime like tf.case) and really only prepare those expand_dims ops that are appropriate for the given input.
However if you have a single inputs tensor and this could have different ranks at different times of your program this would require a different solution. Please let us know which problem you're trying to solve!
I have implemented the functionality I want in two ways. I provide my code here to share.
The 1st method, based on tf.cond:
def inputs_2_4D(inputs):
    _rank1d = tf.rank(inputs)
    def _1d_2_2d(): return tf.expand_dims(inputs, 0)
    def _greater_than_1d(): return tf.identity(inputs)
    _tmp_2d = tf.cond(_rank1d < 2, _1d_2_2d, _greater_than_1d)

    _rank2d = tf.rank(_tmp_2d)
    def _2d_2_3d(): return tf.expand_dims(_tmp_2d, 0)
    def _greater_than_2d(): return tf.identity(_tmp_2d)
    _tmp_3d = tf.cond(_rank2d < 3, _2d_2_3d, _greater_than_2d)

    _rank3d = tf.rank(_tmp_3d)
    def _3d_2_4d(): return tf.expand_dims(_tmp_3d, 3)
    def _greater_than_3d(): return tf.identity(_tmp_3d)
    return tf.cond(_rank3d < 4, _3d_2_4d, _greater_than_3d)
The 2nd method, based on tf.case with tf.cond:

def inputs_2_4D_1(inputs):
    _rank = tf.rank(inputs)
    def _assign_original(): return tf.identity(inputs)
    def _dummy(): return tf.expand_dims(inputs, 0)
    _1d = tf.cond(tf.equal(_rank, 1), _assign_original, _dummy)
    _2d = tf.cond(tf.equal(_rank, 2), _assign_original, _dummy)
    _3d = tf.cond(tf.equal(_rank, 3), _assign_original, _dummy)

    def _1d_2_4d(): return tf.expand_dims(tf.expand_dims(tf.expand_dims(_1d, 0), 0), 3)
    def _2d_2_4d(): return tf.expand_dims(tf.expand_dims(_2d, 0), 3)
    def _3d_2_4d(): return tf.expand_dims(_3d, 3)
    return tf.case({tf.equal(_rank, 1): _1d_2_4d,
                    tf.equal(_rank, 2): _2d_2_4d,
                    tf.equal(_rank, 3): _3d_2_4d},
                   default=_assign_original)
I think the efficiency of the 2nd method should be lower than the 1st method's, because _dummy() always wastes two operations when assigning inputs to _1d, _2d and _3d respectively.
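For reference, a small harness (assuming a TF 1.x session, as in the question) that checks the tf.cond-based inputs_2_4D on all four test tensors:

import tensorflow as tf

def run():
    with tf.Session() as sess:
        for mat in (tf.constant([1, 1]),          # 1D
                    tf.constant([[1, 1]]),        # 2D
                    tf.constant([[[1, 1]]]),      # 3D
                    tf.constant([[[[1, 1]]]])):   # 4D
            # Every input should come out with rank 4.
            print(sess.run(tf.shape(inputs_2_4D(mat))))

run()
# Expected shapes: [1 1 2 1], [1 1 2 1], [1 1 2 1], [1 1 1 2]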

Saving and running wide_deep.py model

I've been playing around with the TensorFlow Wide and Deep tutorial using the census dataset.
The linear/wide tutorial states:
We will train a logistic regression model, and given an individual's information our model will output a number between 0 and 1
At the moment, I can't work out how to predict the output of an individual input (copied from the unit test):
TEST_INPUT_VALUES = {
    'age': 18,
    'education_num': 12,
    'capital_gain': 34,
    'capital_loss': 56,
    'hours_per_week': 78,
    'education': 'Bachelors',
    'marital_status': 'Married-civ-spouse',
    'relationship': 'Husband',
    'workclass': 'Self-emp-not-inc',
    'occupation': 'abc',
}
How can we predict and output whether this person is likely to earn <50k (0) or >=50k (1)?
The function to use is predict, but I didn't figure out how to feed one example directly (I tried numpy_input_fn and a dict of tensors).
Instead, by writing the data to a temporary CSV file and reading it back with the input function in wide_deep.py, the predict function can be used:
TEST_INPUT = ('18,Self-emp-not-inc,987,Bachelors,12,Married-civ-spouse,abc,'
              'Husband,zyx,wvu,34,56,78,tsr,<=50K')

# Create a temporary CSV file
input_csv = '/tmp/census_model/test.csv'
with tf.gfile.Open(input_csv, 'w') as temp_csv:
    temp_csv.write(TEST_INPUT)

# Restore the model trained by wide_deep.py with the same model_dir and model_type
model = wide_deep.build_estimator(FLAGS.model_dir, FLAGS.model_type)
pred_iter = model.predict(input_fn=lambda: wide_deep.input_fn(input_csv, 1, False, 1))
for pred in pred_iter:
    # print(pred)
    print(pred['classes'])
There are other attributes, such as probabilities and logits, in pred.
Hookay, I can answer this now. If you want to evaluate the test-set accuracy, you can follow the accepted answer, but if you want to make your own predictions, here are the steps.
First, construct a new input_fn; notice that you need to alter the columns and the default column values, since the label column won't be there.
def parse_csv(value):
    print('Parsing', data_file)
    columns = tf.decode_csv(value, record_defaults=_PREDICT_COLUMNS_DEFAULTS)
    features = dict(zip(_PREDICT_COLUMNS, columns))
    return features

def predict_input_fn(data_file):
    assert tf.gfile.Exists(data_file), (
        '%s not found. Please make sure the path is correct.' % data_file)
    dataset = tf.data.TextLineDataset(data_file)
    dataset = dataset.map(parse_csv, num_parallel_calls=5)
    dataset = dataset.batch(1)  # => This is very important to get the rank correct
    iterator = dataset.make_one_shot_iterator()
    features = iterator.get_next()
    return features
Then you can call it simply by:
results = model.predict(
    input_fn=lambda: predict_input_fn(data_file='test.csv')
)
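_PREDICT_COLUMNS and _PREDICT_COLUMNS_DEFAULTS are not shown above; a plausible definition (hypothetical, mirroring the census _CSV_COLUMNS in wide_deep.py with the income_bracket label removed) would be:

# Hypothetical label-less column spec for the census CSV.
_PREDICT_COLUMNS = [
    'age', 'workclass', 'fnlwgt', 'education', 'education_num',
    'marital_status', 'occupation', 'relationship', 'race', 'gender',
    'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
]
_PREDICT_COLUMNS_DEFAULTS = [
    [0], [''], [0], [''], [0],
    [''], [''], [''], [''], [''],
    [0], [0], [0], [''],
]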

Output node for tensorflow graph created with tf.layers

I have built a TensorFlow neural net and now want to run the graph_util.convert_variables_to_constants function on it. However, this requires an output_node_names parameter. The last layer in the net has the name logit and is built as follows:
logits = tf.layers.dense(inputs=dropout, units=5, name='logit')
However, there are many nodes in that scope:
gd = sess.graph_def
for n in gd.node:
    if 'logit' in n.name: print(n.name)
prints:
logit/kernel/Initializer/random_uniform/shape
logit/kernel/Initializer/random_uniform/min
logit/kernel/Initializer/random_uniform/max
logit/kernel/Initializer/random_uniform/RandomUniform
logit/kernel/Initializer/random_uniform/sub
logit/kernel/Initializer/random_uniform/mul
logit/kernel/Initializer/random_uniform
logit/kernel
logit/kernel/Assign
logit/kernel/read
logit/bias/Initializer/zeros
logit/bias
logit/bias/Assign
logit/bias/read
logit/Tensordot/Shape
logit/Tensordot/Rank
logit/Tensordot/axes
...
logit/Tensordot/Reshape_1
logit/Tensordot/MatMul
logit/Tensordot/Const_2
logit/Tensordot/concat_2/axis
logit/Tensordot/concat_2
logit/Tensordot
logit/BiasAdd
...
How do I work out which of these nodes is the output node?
If the graph is complex, a common way is to add an identity node at the end:
output = tf.identity(logits, 'output')
# you can use the name "output"
For example, the following code should work:
logits = tf.layers.dense(inputs=dropout, units=5, name='logit')
output = tf.identity(logits, 'output')
output_graph_def = tf.graph_util.convert_variables_to_constants(
    sess, tf.get_default_graph().as_graph_def(), ['output'])
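To persist the frozen graph (a common follow-up step; the output path here is arbitrary), the resulting GraphDef can be serialized to disk with the TF 1.x file API:

# Write the frozen GraphDef to disk.
with tf.gfile.GFile('/tmp/frozen_model.pb', 'wb') as f:
    f.write(output_graph_def.SerializeToString())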