How to find TF-Slim output node names - TensorFlow

After training a model with TensorFlow and TF-Slim, I am trying to freeze the model and weights. But it's quite hard for me to find the output node names, which are required by freeze_graph.freeze_graph().
My output layers look like:
conv4_1 = slim.conv2d(net, num_outputs=2, kernel_size=[1, 1], stride=1,
                      scope='conv4_1', activation_fn=tf.nn.softmax)
# conv4_1 = slim.conv2d(net, num_outputs=1, kernel_size=[1, 1], stride=1,
#                       scope='conv4_1', activation_fn=tf.nn.sigmoid)
print(conv4_1.get_shape())
# batch * H * W * 4
bbox_pred = slim.conv2d(net, num_outputs=4, kernel_size=[1, 1], stride=1,
                        scope='conv4_2', activation_fn=None)
conv4_1 is the softmaxed class score (e.g. face or not).
bbox_pred is the bounding-box regression.
When I save the graph with tf.train.write_graph(self.sess.graph_def, output_path, 'model.pb') and open model.pb as text, I find that the graph looks like:
node {
  name: "conv4_1/weights/Initializer/random_uniform/shape"
  ...
}
node {
  name: "conv4_1/kernel/Regularizer/l2_regularizer"
  ...
}
node {
  name: "conv4_1/Conv2D"
  op: "Conv2D"
  input: "conv3/add"
  input: "conv4_1/weights/read"
  ...
}
node {
  name: "conv4_1/Softmax"
  op: "Softmax"
  input: "conv4_1/Reshape"
  ...
}
node {
  name: "Squeeze"
  op: "Squeeze"
  input: "conv4_1/Reshape_1"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "squeeze_dims"
    value {
      list {
        i: 0
      }
    }
  }
}
So here comes the problem: which are the output node names?
Other ways of writing layers in TensorFlow let you set names explicitly, like:
.conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv3')
.prelu(name='PReLU3')
.conv(1, 1, 2, 1, 1, relu=False, name='conv4-1')
.softmax(3, name='prob1'))
(self.feed('PReLU3') #pylint: disable=no-value-for-parameter
.conv(1, 1, 4, 1, 1, relu=False, name='conv4-2'))
But I can't find a way to set output names in TF-Slim.
Thanks!

Output node names for three of the Inception models are given below:
inception v3 : InceptionV3/Predictions/Reshape_1
inception v4 : InceptionV4/Logits/Predictions
inception resnet v2 : InceptionResnetV2/Logits/Predictions
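If you want explicit, predictable names in your own TF-Slim graph, a common workaround (a minimal sketch, not part of the answer above; the names cls_prob and bbox_pred_out are hypothetical) is to wrap the final tensors in tf.identity ops, and to print the node names from the graph def to locate candidate outputs:
# Give the slim outputs stable, explicit names (names chosen here for illustration):
cls_prob = tf.identity(conv4_1, name='cls_prob')
bbox_out = tf.identity(bbox_pred, name='bbox_pred_out')
# List every node name in the current graph to find the real output nodes:
for node in tf.get_default_graph().as_graph_def().node:
    print(node.name)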

Related

How to evaluate multi-class classifier predictions with the TensorFlow Model Analysis library?

I'm trying to use the TensorFlow Model Analysis library to analyze prediction data from a multi-class classifier model, using the analyze_raw_data API. Currently the label contains 3 different classes [0, 1, 2], trained with SparseCategoricalCrossentropy loss. The tfma config was set as follows:
eval_config = text_format.Parse("""
  ## Model information
  model_specs {
    label_key: "label",
    prediction_key: "predictions"
  }
  ## Post training metric information. These will be merged with any built-in
  ## metrics from training.
  metrics_specs {
    metrics { class_name: "ExampleCount" }
    metrics { class_name: "SparseCategoricalCrossentropy" }
    metrics { class_name: "SparseCategoricalAccuracy" }
    metrics { class_name: "Precision" config: '"top_k": 1' }
    metrics { class_name: "Precision" config: '"top_k": 3' }
    metrics { class_name: "Recall" config: '"top_k": 1' }
    metrics { class_name: "Recall" config: '"top_k": 3' }
    metrics { class_name: "MultiClassConfusionMatrixPlot" }
  }
  ## Slicing information
  slicing_specs {}  # overall slice
""", tfma.EvalConfig())
I've added the ground-truth label as a numerical value from [0, 1, 2] to the dataframe column "label", and the predicted probabilities as a list to another column "predictions" (e.g. [0.2, 0.3, 0.5]), but I observe an error like ArrowTypeError: ("Expected bytes, got a 'numpy.ndarray' object", 'Conversion failed for column predictions with type object') when loading the data:
~/.pyenv/versions/3.7.8/lib/python3.7/site-packages/tensorflow_model_analysis/api/model_eval_lib.py in analyze_raw_data(data, eval_config, output_path, add_metric_callbacks)
1511
1512 arrow_data = table_util.CanonicalizeRecordBatch(
-> 1513 table_util.DataFrameToRecordBatch(data))
1514 beam_data = beam.Create([arrow_data])
1515
Does anyone know how to write the labels and predictions of a multi-class classification so that we can do the model analysis with tfma?
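I can't verify this against the tfma internals, but since the Arrow error complains about an object-dtype column holding numpy arrays, one thing worth trying (an assumption, not a confirmed fix; df stands for your dataframe) is to force each cell of the predictions column into a plain Python list of floats before calling analyze_raw_data:
import numpy as np

# Assumption: pyarrow may fail on object columns holding numpy arrays,
# while plain lists of Python floats usually convert cleanly.
df["predictions"] = df["predictions"].apply(
    lambda p: np.asarray(p, dtype=np.float64).tolist())
df["label"] = df["label"].astype(np.int64)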

Output probability of prediction in tensorflow.js

I have a model.json generated from TensorFlow via the tensorflow.js converter.
In the original TensorFlow implementation of the model in Python, it is built like this:
model = models.Sequential([
    base_model,
    layers.Dropout(0.2),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])
In TensorFlow, the probability can be generated by score = tf.nn.softmax(predictions[0]), according to the tutorial on the official website.
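For reference, this is what that softmax step computes, in plain NumPy (a minimal sketch; the logit values are copied from the predictions output shown further below):
import numpy as np

logits = np.array([-2.5525975, 7.3984647, -3.2521968, 4.7103958, -4.6363964])
probabilities = np.exp(logits) / np.exp(logits).sum()  # softmax
print(probabilities)  # entries are in [0, 1] and sum to 1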
How do I get this probability in tensorflow.js?
I have copied the code template below:
$("#predict-button").click(async function () {
    if (!modelLoaded) { alert("The model must be loaded first"); return; }
    if (!imageLoaded) { alert("Please select an image first"); return; }
    let image = $('#selected-image').get(0);
    // Pre-process the image
    console.log("Loading image...");
    let tensor = tf.browser.fromPixels(image, 3)
        .resizeNearestNeighbor([224, 224]) // change the image size
        .expandDims()
        .toFloat();
    // RGB -> BGR
    let predictions = await model.predict(tensor).data();
    console.log(predictions);
    let top5 = Array.from(predictions)
        .map(function (p, i) { // this is Array.map
            return {
                probability: p,
                className: TARGET_CLASSES[i] // we are selecting the value from the obj
            };
        }).sort(function (a, b) {
            return b.probability - a.probability;
        }).slice(0, 2);
    console.log(top5);
    $("#prediction-list").empty();
    top5.forEach(function (p) {
        $("#prediction-list").append(`<li>${p.className}: ${p.probability.toFixed(6)}</li>`);
    });
});
How should I modify the above code?
The output is just the same as the value of the variable predictions (raw logits rather than probabilities):
Float32Array(5)
0: -2.5525975227355957
1: 7.398464679718018
2: -3.252196788787842
3: 4.710395812988281
4: -4.636396408081055
buffer: (...)
byteLength: (...)
byteOffset: (...)
length: (...)
Symbol(Symbol.toStringTag): (...)
__proto__: TypedArray
0: {probability: 7.398464679718018, className: "Sunflower"}
1: {probability: 4.710395812988281, className: "Rose"}
length: 2
__proto__: Array(0)
Please help!!!
Thanks!
In order to extract the probabilities from the logits of the model using a softmax function, you can do the following:
This is the array of logits, which are also the predictions you get from the model:
const logits = [-2.5525975227355957, 7.398464679718018, -3.252196788787842, 4.710395812988281, -4.636396408081055]
You can call tf.softmax() on the array of values
const probabilities = tf.softmax(logits)
Result:
[0.0000446, 0.9362511, 0.0000222, 0.0636765, 0.0000056]
Then if you wanted to get the index with the highest probability you can make use of tf.argMax():
const results = tf.argMax(probabilities).dataSync()[0]
Result:
1
Edit
I am not too familiar with jQuery, so this might not be correct. But here is how I would list the probability of each output:
let probabilities = tf.softmax(predictions).dataSync();
$("#prediction-list").empty();
probabilities.forEach(function (p, i) {
    $("#prediction-list").append(
        `<li>${TARGET_CLASSES[i]}: ${p.toFixed(6)}</li>`
    );
});

Converted TensorRT model has different output shape from TensorFlow model?

I have a TensorFlow model and converted it to a TensorRT model.
The TensorFlow model's UFF conversion log is shown below. The input is image and the output is Openpose/concat_stage7.
NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "image"
 op: "Placeholder"
 attr {
   key: "dtype"
   value {
     type: DT_FLOAT
   }
 }
 attr {
   key: "shape"
   value {
     shape {
       dim { size: -1 }
       dim { size: -1 }
       dim { size: -1 }
       dim { size: 3 }
     }
   }
 }
]
=========================================
=== Automatically deduced output nodes ===
[name: "Openpose/concat_stage7"
 op: "ConcatV2"
 input: "Mconv7_stage6_L2/BiasAdd"
 input: "Mconv7_stage6_L1/BiasAdd"
 input: "Openpose/concat_stage7/axis"
 attr {
   key: "N"
   value { i: 2 }
 }
 attr {
   key: "T"
   value { type: DT_FLOAT }
 }
 attr {
   key: "Tidx"
   value { type: DT_INT32 }
 }
]
==========================================
Using output node Openpose/concat_stage7
Converting to UFF graph
No. nodes: 463
UFF Output written to cmu/cmu_openpose.uff
The TensorFlow model's output shape is:
self.tensor_output = self.graph.get_tensor_by_name('TfPoseEstimator/Openpose/concat_stage7:0')
(?, ?, ?, 57)
When I run TensorRT, the output dimension is (217500,). How can I get the same dimensions as the TensorFlow model?
Yes, now everything is solved and I can produce the same result in TensorRT as the TensorFlow model's output.
The issue is that TensorRT produces the output array in flattened format, so it needs to be reshaped to the necessary dimensions.
So what I do is check the dimensions of the TensorFlow output and reshape accordingly.
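For example, a sketch of that reshape in NumPy (trt_output, h, and w are placeholders here; read H and W from the TensorFlow output tensor for your input size):
import numpy as np

flat = np.asarray(trt_output)         # flat TensorRT output, e.g. shape (217500,)
heatmaps = flat.reshape(1, h, w, 57)  # restore the NHWC layout of concat_stage7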

TensorFlow map_fn gives error: ValueError: No attr named '_XlaCompile'

I am trying to implement the 'batch hard' batches as described in https://arxiv.org/pdf/1703.07737.pdf to use with a triplet loss. So the input is of shape [batch_size, 32] and the output should be a list representing triplets, i.e. [[batch_size, 32], [batch_size, 32], [batch_size, 32]], where each individual example is of size (32,).
I implemented this with the following function, basically using tf.map_fn:
def batch_hard(inputs):
    """
    Batch Hard triplets as described in https://arxiv.org/pdf/1703.07737.pdf.
    For each sample in input the hardest positive and hardest negative
    in the given batch will be selected. A triplet is returned.
    """
    class_ids, f_anchor = inputs[0], inputs[1]

    def body(x):
        class_id, f = x[0], x[1]
        same_class = tf.equal(class_ids, class_id)
        positive = same_class
        negative = tf.logical_not(same_class)
        positive = tf.squeeze(positive)
        negative = tf.squeeze(negative)
        positive.set_shape([None])
        negative.set_shape([None])
        samples_pos = tf.boolean_mask(f_anchor, positive)
        samples_neg = tf.boolean_mask(f_anchor, negative)
        # Select hardest positive example
        distances = euclidean_distance(samples_pos, f)
        hardest_pos = samples_pos[tf.argmax(distances)]
        # Select hardest negative example
        distances = euclidean_distance(samples_neg, f)
        hardest_neg = samples_neg[tf.argmin(distances)]
        return [hardest_pos, hardest_neg]

    [f_pos, f_neg] = tf.map_fn(body, inputs, dtype=[tf.float32, tf.float32])
    return [f_anchor, f_pos, f_neg]
This works perfectly when I only perform a forward pass, with no train_op specified. However, when I add the line train_op = optimizer.minimize(loss, global_step=global_step), the following error occurs:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gradients_impl.py", line 348, in _MaybeCompile
xla_compile = op.get_attr("_XlaCompile")
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2003, in get_attr
raise ValueError("No attr named '" + name + "' in " + str(self._node_def))
ValueError: No attr named '_XlaCompile' in name: "map/while/strided_slice"
op: "StridedSlice"
input: "map/while/boolean_mask/Gather"
input: "map/while/strided_slice/stack"
input: "map/while/strided_slice/stack_1"
input: "map/while/strided_slice/Cast"
attr {
  key: "Index"
  value { type: DT_INT64 }
}
attr {
  key: "T"
  value { type: DT_FLOAT }
}
attr {
  key: "begin_mask"
  value { i: 0 }
}
attr {
  key: "ellipsis_mask"
  value { i: 0 }
}
attr {
  key: "end_mask"
  value { i: 0 }
}
attr {
  key: "new_axis_mask"
  value { i: 0 }
}
attr {
  key: "shrink_axis_mask"
  value { i: 1 }
}
Does anyone have an idea what goes wrong?
A full example of this issue is here https://gist.github.com/anonymous/0b5e9194ebf09be7ad2f0a740bf369b8
Edit: It seems the problem is in this line:
hardest_pos = samples_pos[tf.argmax(distances)]
Replacing it with something like
hardest_pos = tf.zeros(32)
gives no errors. However, how do I solve this properly?
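One workaround worth trying (an assumption on my part, not a confirmed fix): avoid the strided-slice indexing that the gradient code trips over by selecting the row with tf.gather instead:
# Instead of samples_pos[tf.argmax(distances)]:
hardest_pos = tf.gather(samples_pos, tf.argmax(distances))
# Instead of samples_neg[tf.argmin(distances)]:
hardest_neg = tf.gather(samples_neg, tf.argmin(distances))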

freeze_graph.py under tensorflow/tools: what should I set output_node_names to?

I am just doing the test with tensorflow/models on GitHub. I have trained and got four files; then, when I freeze the model, the problem comes.
I am prompted that:
You need to supply the name of a node to --output_node_names.
I have read the graph.pbtxt, and it is very long ...
The basic format is:
node {
  name: "ParseSingleExample/Squeeze_Shape_image/object/bbox/ymin/size"
  op: "Const"
  device: "/device:CPU:0"
  attr {
    key: "_output_shapes"
    value {
      list {
        shape {
          dim {
            size: 1
          }
        }
      }
    }
  }
}
and so on ...
What should I do?
Thank you.
When defining your graph, you can set the names of your placeholders / nodes, e.g.:
# Initialize tensorflow placeholders
x = tf.placeholder('float', [None, self.time_steps, self.num_features], name='input_node')
y = tf.placeholder('float', [None, self.num_features], name='output_node')
Here, y is the placeholder for your labels. Consequently, your output_node_name would be 'output_node'.
Note: if there is no name attribute, you need to fall back on tf.identity.
The easiest way is to manually add an identity node with the name you want by using tf.identity.
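For example (a minimal sketch; logits stands for whatever final tensor your model actually produces):
# Wrap the final tensor in an identity op with an explicit name,
# then pass that name to --output_node_names.
output = tf.identity(logits, name='output_node')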