How can I track weights when I use tf.train.AdamOptimizer - tensorflow

I am using tf.train.AdamOptimizer to train my neural network; I know I can train easily this way, but how can I track the weight changes? Is there a method or function for this?
Thank you very much.
optimizer = tf.train.AdamOptimizer(learning_rate=decoder.learning_rate).minimize(loss,global_step=global_step)

Below is an example to print the weights of a layer of the model.
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
print("Tensorflow Version:",tf.__version__)
model = MobileNetV2(input_shape=[128, 128, 3], include_top=False) #or whatever model
print("Layer of the model:",model.layers[2])
print("Weights of the Layer",model.layers[2].get_weights())
Output:
I have cut the output short as the weights are lengthy.
Tensorflow Version: 1.15.0
Layer of the model: <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fe9123d2ac8>
Weights of the Layer [array([[[[ 1.60920480e-03, -1.45352582e-22, 1.54917374e-01,
2.29649822e-06, 1.49279218e-02, -5.39761280e-21,
7.01060288e-21, 1.54408276e-21, -1.12762444e-01,
-2.37320393e-01, 2.77190953e-01, 5.03320247e-02,
-4.21045721e-01, 1.73461720e-01, -5.35633206e-01,
-5.95900055e-04, 5.34933396e-02, 2.24988922e-01,
-1.49572559e-22, 2.20291526e-03, -5.38195252e-01,
-2.21309029e-02, -4.88732375e-22, -3.89234926e-21,
2.84152419e-22, -1.23437764e-02, -1.14439223e-02,
1.46071922e-22, -4.24997229e-03, -2.48236431e-09,
-4.64977883e-02, -3.43741417e-01],
[ 1.25032081e-03, -2.00014382e-22, 2.32940048e-01,
2.78269158e-06, 1.99653972e-02, 7.11864268e-20,
6.08769832e-21, 2.95990709e-22, -2.76436746e-01,
-5.15990913e-01, 6.78669810e-01, 3.02553400e-02,
-7.55709827e-01, 3.29371482e-01, -9.70950842e-01,
-3.02999169e-02, 7.99737051e-02, -4.45111930e-01,
-2.01127320e-22, 1.61909293e-02, 2.13520035e-01,
4.36614119e-02, -2.21765310e-22, 4.13772868e-21,
2.79922130e-22, 4.81817983e-02, -2.71119680e-02,
4.72275196e-22, 1.12856282e-02, 3.38369194e-10,
-1.29655674e-01, -3.85710597e-01],
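The snippet above only inspects a Keras model's weights once. To actually track how weights change while training with tf.train.AdamOptimizer in a graph/session workflow like the one in the question, one option is to fetch the trainable variables every few steps. The loop below is a minimal TF1-style sketch, assuming loss and global_step are already defined (as in the question) and that any required feed_dict is supplied; tf.summary.histogram plus TensorBoard is another common way to do the same thing.
import tensorflow as tf

# Assumed to exist already, as in the question: `loss` and `global_step`.
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(
    loss, global_step=global_step)

# Every weight/bias created by the layers is a trainable variable.
weight_vars = tf.trainable_variables()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        _, loss_val = sess.run([train_op, loss])  # add feed_dict=... if needed
        if step % 100 == 0:
            # Fetch the current numpy values of every weight tensor.
            for var, value in zip(weight_vars, sess.run(weight_vars)):
                print(step, var.name, value.mean(), value.std())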

Related

Convert Tensorflow frozen .pb model to keras model

I have a TensorFlow frozen .pb model and I would like to retrain it with new data; to do that, I would like to convert the frozen .pb model to a Keras model.
I was able to visualise the layers and weight values, and I want to know whether it is possible to convert this .pb model to a Keras model in order to retrain it, or whether there is another solution for retraining a frozen .pb model.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile('weight/saved_model.pb', 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

conf = tf.ConfigProto()
session = tf.Session(graph=detection_graph, config=conf)

# layer names
layers = [op.name for op in detection_graph.get_operations()]
for layer in layers:
    print(layer)
Const
ToFloat
Const_1
ToFloat_1
Const_2
ToFloat_2
image_tensor
ToFloat_3
Preprocessor/map/Shape
Preprocessor/map/strided_slice/stack
Preprocessor/map/strided_slice/stack_1
Preprocessor/map/strided_slice/stack_2
Preprocessor/map/strided_slice
Preprocessor/map/TensorArray
Preprocessor/map/TensorArrayUnstack/Shape
Preprocessor/map/TensorArrayUnstack/strided_slice/stack
Preprocessor/map/TensorArrayUnstack/strided_slice/stack_1
Preprocessor/map/TensorArrayUnstack/strided_slice/stack_2
Preprocessor/map/TensorArrayUnstack/strided_slice
Preprocessor/map/TensorArrayUnstack/range/start
Preprocessor/map/TensorArrayUnstack/range/delta
Preprocessor/map/TensorArrayUnstack/range
Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3
Preprocessor/map/Const
Preprocessor/map/TensorArray_1
Preprocessor/map/TensorArray_2
Preprocessor/map/while/Enter
Preprocessor/map/while/Enter_1
Preprocessor/map/while/Enter_2
Preprocessor/map/while/Merge
Preprocessor/map/while/Merge_1
Preprocessor/map/while/Merge_2
Preprocessor/map/while/Less/Enter
Preprocessor/map/while/Less
Preprocessor/map/while/LoopCond
Preprocessor/map/while/Switch
Preprocessor/map/while/Switch_1
Preprocessor/map/while/Switch_2
Preprocessor/map/while/Identity
Preprocessor/map/while/Identity_1
Preprocessor/map/while/Identity_2
Preprocessor/map/while/TensorArrayReadV3/Enter
Preprocessor/map/while/TensorArrayReadV3/Enter_1
Preprocessor/map/while/TensorArrayReadV3
Preprocessor/map/while/ResizeImage/stack
Preprocessor/map/while/ResizeImage/resize_images/ExpandDims/dim
Preprocessor/map/while/ResizeImage/resize_images/ExpandDims
Preprocessor/map/while/ResizeImage/resize_images/ResizeBilinear
Preprocessor/map/while/ResizeImage/resize_images/Squeeze
Preprocessor/map/while/ResizeImage/stack_1
Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3/Enter
Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3
Preprocessor/map/while/TensorArrayWrite_1/TensorArrayWriteV3/Enter
Preprocessor/map/while/TensorArrayWrite_1/TensorArrayWriteV3
Preprocessor/map/while/add/y
Preprocessor/map/while/add
Preprocessor/map/while/NextIteration
Preprocessor/map/while/NextIteration_1
Preprocessor/map/while/NextIteration_2
Preprocessor/map/while/Exit_1
Preprocessor/map/while/Exit_2
Preprocessor/map/TensorArrayStack/TensorArraySizeV3
Preprocessor/map/TensorArrayStack/range/start
Preprocessor/map/TensorArrayStack/range/delta
Preprocessor/map/TensorArrayStack/range
Preprocessor/map/TensorArrayStack/TensorArrayGatherV3
Preprocessor/map/TensorArrayStack_1/TensorArraySizeV3
Preprocessor/map/TensorArrayStack_1/range/start
Preprocessor/map/TensorArrayStack_1/range/delta
Preprocessor/map/TensorArrayStack_1/range
Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3
Preprocessor/sub/y
Preprocessor/sub
Shape
FirstStageFeatureExtractor/Shape
FirstStageFeatureExtractor/strided_slice/stack
FirstStageFeatureExtractor/strided_slice/stack_1
FirstStageFeatureExtractor/strided_slice/stack_2
FirstStageFeatureExtractor/strided_slice
FirstStageFeatureExtractor/GreaterEqual/y
FirstStageFeatureExtractor/GreaterEqual
FirstStageFeatureExtractor/Shape_1
FirstStageFeatureExtractor/strided_slice_1/stack
FirstStageFeatureExtractor/strided_slice_1/stack_1
FirstStageFeatureExtractor/strided_slice_1/stack_2
FirstStageFeatureExtractor/strided_slice_1
FirstStageFeatureExtractor/GreaterEqual_1/y
FirstStageFeatureExtractor/GreaterEqual_1
FirstStageFeatureExtractor/LogicalAnd
FirstStageFeatureExtractor/Assert/Assert/data_0
FirstStageFeatureExtractor/Assert/Assert
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/Pad/paddings
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/Pad
FirstStageFeatureExtractor/resnet_v1_101/conv1/weights
FirstStageFeatureExtractor/resnet_v1_101/conv1/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/conv1/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/conv1/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/conv1/Relu
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/pool1/MaxPool
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/Relu
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/Relu
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/add
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/Relu
# visualise weights
from tensorflow.python.framework import tensor_util
weight_nodes = [n for n in od_graph_def.node if n.op == 'Const']
for n in weight_nodes:
    print("Name of the node - %s" % n.name)
    print("Value - ")
    print(tensor_util.MakeNdarray(n.attr['value'].tensor))
Value -
[-0.4540853 0.32056493 -0.22912885 0.2541284 -0.9532805 -0.25493076
-0.5082643 -0.33983043 -0.22494698 -0.3235537 -0.36385655 -0.28415084
-0.5575271 -0.1910503 -0.49338412 -0.42885196 -0.33558807 -0.6404074
-0.18031892 -0.2801433 -0.34614867 -0.5399439 -0.15712368 -0.2957037
0.15714812 -0.48186925 -0.33291766 -0.48722127 -0.2535558 -0.16819339
-0.43470743 -0.2855552 -0.3963132 -0.42684224 -0.3209245 -0.3658357
-0.30626386 -0.4134465 0.03801734 -0.43680775 -0.29162335 -0.36223125
-0.48077616 -0.43840355 -0.39883605 -0.31524658 -0.47961026 -0.41313347
-0.19943158 -0.9744824 -0.44492397 -0.62325597 -0.4503377 -0.32147056
-0.32696253 -0.3964685 -0.4328564 -0.28047583 -0.10022314 -0.20008062
-0.25767517 -0.32787508 -0.7490349 -0.26016906 -0.1388798 -0.00777247
-0.34971157 -0.37617198 -0.16701727 -0.49006486 0.25721186 -0.2046591
-0.39323628 -0.6523374 -0.5105675 -0.32751757 -0.2652611 -0.00875285
-0.2443423 -0.44029924 -0.37004694 -0.23282264 -0.3413406 -0.35088956
-0.46555945 -0.56252545 -0.5163544 -0.5306529 -0.62014216 -0.24062826
-0.06821899 -0.33485538 -0.4521361 -0.6593677 -0.46554345 -0.40829748
-0.49771288 -0.35069707 -0.0229042 -0.2298909 -0.25005338 -0.32683375
-0.49131876 -0.30173704 -0.29786837 -0.67599934 -0.35004795 -0.20007414
-0.16419795 -0.49680758 -0.42260775 -0.2726915 -0.6200565 -0.35487327
-0.3344669 -0.52208656 -0.4569452 -0.12290002 -0.5652516 -0.31340426
-0.3985747 -0.5281181 -0.7129867 -0.567227 -0.28814644 -0.1302681
-0.2001429 -0.10916138 -0.68222445 -0.29323065 -0.05704954 -0.22789052
-0.61558 -0.07655394 -0.52930903 -0.34198323 -0.24209192 -0.5053026
-0.4004574 -0.15969647 -0.35341594 -0.5591511 -0.40825605 -0.01070203
-0.7428409 -0.45172128 -0.43380788 0.2568167 -0.6722297 -0.35276502
-0.6669023 -0.20694211 -0.20189697 -0.5070727 -0.3972058 -0.00848175
-0.2994657 -0.34944996 -0.3389741 -0.17936742 -0.92425096 -0.05345222
-0.38853544 -0.39970097 -0.3607101 -0.7013903 -0.10112807 -0.2565767
-0.34176925 -0.52231157 -0.476935 -0.45604366 -0.5980594 -0.15625392
-0.42476812 -0.31922927 -0.31709027 -0.28081933 -0.2383788 -0.3822803
-0.58110493 -0.48278278 0.37628186 -0.28682145 -0.36748675 -0.34278873
-0.4303303 -0.31955504 -0.1693851 -0.4306607 -0.30947846 -0.29114336
-0.490207 -0.487192 -0.24403661 -0.30346867 -0.27445573 -0.35093272
-0.22612163 -0.41189626 -0.24778591 -0.31198287 -0.24912828 -0.40960005
-0.35579422 -0.43333185 -0.6709562 -0.844466 -0.14793177 -0.2053809
-0.5682712 -0.23429349 -0.35484084 -0.40705127 -0.16986188 -0.52707803
-0.371436 -0.3203282 -0.548174 -0.2454479 0.14760983 -0.23899662
-0.49904364 -0.5957814 -0.24498118 -0.15208362 -0.38576075 -0.4792993
-0.37565476 -0.48903054 -0.30210334 -0.5700767 -0.21236268 -0.57294637
-0.27853557 -0.46409875 -0.24420401 -0.48441198 -0.64328116 -0.47033066
-0.2866155 -0.24524617 -0.682954 0.22307628 -0.4910605 -0.6759475
-0.5597055 -0.08977205 -0.35018525 -0.326844 -0.407895 -0.4286294
-0.43290117 -0.43665645 -0.03084541 -0.38888484]
Name of the node - FirstStageFeatureExtractor/resnet_v1_101/block3/unit_23/bottleneck_v1/conv2/BatchNorm/moving_mean
Value -
[-0.15717725 0.07592656 0.10988383 -0.0490528 -0.0106115 -0.15786496
-0.11409757 -0.06248363 -0.06544019 -0.14081511 -0.17109169 -0.02211305
-0.03773279 -0.12710223 -0.12394962 -0.09206913 -0.16163616 -0.1723962
-0.20561686 -0.1415835 -0.06350002 -0.16257662 -0.09603897 -0.13460924
-0.02432729 -0.04199889 -0.1696589 -0.13546434 -0.06083817 -0.10085563
-0.09488994 0.04238822 -0.01342436 -0.15949754 -0.17468 -0.17448752
-0.07917193 -0.12688611 -0.22540613 0.0227522 -0.10866571 -0.05292162
-0.09184872 -0.15055841 -0.166075 -0.20481528 -0.0362504 -0.03834244
-0.0642691 0.0847474 -0.05358214 -0.01985169 -0.23435758 -0.14982411
-0.16710722 -0.15368192 -0.09551494 -0.16474426 -0.10215978 -0.16959386
-0.20461345 -0.03911749 -0.14563459 -0.00461106 -0.0700466 -0.0968714
0.00481458 -0.04512146 -0.14357145 -0.19567277 -0.19265638 -0.11200134
-0.07045556 -0.02379003 -0.10341825 -0.03273268 -0.11432283 -0.23813714
-0.02153216 -0.14714707 -0.1272525 -0.16946757 -0.1185687 -0.12256996
-0.10922257 0.11201918 -0.02812816 -0.02523885 0.03150908 -0.15362865
-0.1581234 0.00402692 -0.15670358 -0.04513093 -0.1437301 -0.01758968
-0.05778598 -0.14521386 0.01999546 -0.1154804 -0.19333984 -0.06223019
-0.07321492 -0.07283846 -0.12676379 -0.04754475 -0.12439732 -0.0936267
-0.1395944 -0.10460664 -0.1278241 -0.11326854 -0.08205313 -0.1579616
-0.04214557 -0.13230218 -0.15926664 -0.11025801 -0.17300427 -0.03396817
0.01946657 -0.02238615 -0.1087622 0.05757551 -0.06369423 -0.15768676
-0.24096929 -0.16477783 -0.08505792 -0.1452452 -0.1228498 -0.18778118
-0.10847382 -0.18381962 -0.09624809 -0.11529081 -0.08679712 -0.026652
-0.0709608 -0.13419388 -0.12301534 -0.0762615 -0.1779636 0.0712171
-0.02847089 -0.03589171 -0.16578905 0.06647447 -0.21369869 -0.11737502
-0.09483987 -0.1711197 -0.10486519 -0.13668095 -0.09134877 -0.17883773
-0.09831014 0.00876497 -0.01898824 -0.12447593 -0.11287156 -0.14073113
-0.14082104 -0.20715335 -0.0960372 -0.07659613 -0.05445954 -0.20973818
-0.1988086 -0.07440972 -0.01645763 -0.06222945 -0.10685738 -0.11946698
-0.02095333 -0.22138861 -0.03730138 -0.14171903 -0.2022561 -0.17174341
-0.10660502 -0.04700039 0.21646874 -0.2798932 -0.0933376 -0.15969937
-0.11783579 -0.08365067 -0.15353799 -0.09762033 -0.20146982 -0.10279506
-0.14905539 -0.12379898 -0.12730977 -0.06474843 -0.09232798 -0.07322481
-0.19277479 -0.04613561 -0.13307215 -0.1302453 -0.17341715 -0.07773575
-0.04985441 -0.10355914 -0.1078042 0.00853426 -0.13236587 -0.09865464
-0.00562798 -0.12181614 -0.22204153 -0.05784107 -0.18404393 -0.00629737
-0.10935344 -0.06820714 -0.0653091 -0.10879692 -0.12568252 -0.1386066
-0.10166074 -0.05466842 -0.09380525 -0.20807257 -0.09541806 -0.0593087
-0.12657529 -0.12999491 -0.08768366 -0.11886697 -0.11744718 -0.08663379
-0.1193763 -0.08805308 -0.1727127 -0.00644481 -0.11248584 0.00345422
-0.1572065 -0.07471903 -0.09579102 -0.08642395 -0.08458085 0.09465432
-0.0749066 0.04966704 -0.06954169 -0.05878955 -0.06419392 -0.03276661
-0.12470958 -0.05546655 -0.227529 -0.18447787]
Name of the node - FirstStageFeatureExtractor/resnet_v1_101/block3/unit_23/bottleneck_v1/conv2/BatchNorm/moving_variance
Value -
[0.0246328 0.15109949 0.08894898 0.18027042 0.17676981 0.03751087
0.04503109 0.05276729 0.07223138 0.0510329 0.05425516 0.03137682
0.03304449 0.04246168 0.03573015 0.04623629 0.04350977 0.03313071
0.04221025 0.04383035 0.04444639 0.03855932 0.03536414 0.03979863
0.12287893 0.03251396 0.03785344 0.03404477 0.05788973 0.05000394
0.03611772 0.04200751 0.03447625 0.04483994 0.03660354 0.05133959
0.03763206 0.03538596 0.0754443 0.03522145 0.03955411 0.03173661
0.0307942 0.04629485 0.04174245 0.06112912 0.03868164 0.03610169
0.03457952 0.03306652 0.03208408 0.03130876 0.04425795 0.0543987
0.04187379 0.05363455 0.04050811 0.03573698 0.03850965 0.0455887
0.03908563 0.03638468 0.03675755 0.03934622 0.03829562 0.05719311
0.03649542 0.02670012 0.03625882 0.03489691 0.1415123 0.04595942
0.04740899 0.0362278 0.03886651 0.02929924 0.03411594 0.05056536
0.04927529 0.03879778 0.04310857 0.03617069 0.03727381 0.03174838
0.03664678 0.11009517 0.02481679 0.03278335 0.05939993 0.03593603
0.03523111 0.02616766 0.03424251 0.0344963 0.04709259 0.03183395
0.03848327 0.03769235 0.04920188 0.03951766 0.03903662 0.03408873
0.03334353 0.03571996 0.06089876 0.05937878 0.03344719 0.04584214
0.04397506 0.03307587 0.03318075 0.04908697 0.03105905 0.05623207
0.05571087 0.03729884 0.0459931 0.04489287 0.04131918 0.03771564
0.04307055 0.03846145 0.05100024 0.043197 0.04114237 0.05911661
0.03956391 0.04361542 0.03280107 0.03139602 0.05827128 0.04082137
0.03654662 0.04679587 0.0309209 0.03587716 0.03316435 0.02562451
0.03213744 0.04613245 0.03237009 0.03835384 0.0523162 0.09062187
0.03235107 0.02755782 0.03724603 0.09937367 0.07468802 0.03093713
0.02697526 0.04269885 0.0421169 0.03236708 0.03591317 0.05501648
0.04456919 0.03508786 0.03599152 0.05607978 0.04522029 0.05920529
0.02929832 0.04003889 0.04818926 0.03470782 0.04554065 0.03561359
0.05097282 0.03162495 0.03105365 0.0479041 0.04201244 0.03967372
0.03850067 0.03719724 0.03236697 0.04217942 0.04472516 0.06050858
0.02926525 0.03916026 0.19374044 0.0456128 0.03878816 0.0363459
0.03911297 0.03797501 0.04174663 0.04230652 0.03831306 0.04085784
0.0427238 0.03057992 0.03531543 0.03794312 0.03533817 0.03804426
0.02861266 0.03188168 0.04924265 0.04050563 0.04611768 0.03238661
0.03605983 0.02848955 0.0390384 0.02533518 0.04691761 0.04362593
0.03430253 0.04009234 0.04348955 0.03593319 0.04101577 0.02840054
0.03196882 0.03733175 0.03510671 0.04594112 0.15954044 0.04420807
0.04092796 0.02599568 0.04426222 0.05545853 0.04792674 0.05130733
0.03689643 0.04262947 0.04004824 0.02642712 0.04293796 0.0296522
0.04023833 0.04849512 0.04770563 0.02930979 0.033105 0.03017242
0.03861351 0.03882564 0.03877705 0.10539907 0.03036495 0.0441276
0.02591156 0.07158393 0.03235719 0.04087997 0.02184044 0.03179693
0.04023989 0.03014399 0.06206222 0.06290652]
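If the goal is to reuse these values elsewhere (for example, to copy them into a freshly built Keras model), a small extension of the loop above collects the constants into a name-to-array dictionary instead of printing them. This is only a sketch of that bookkeeping step, not a full .pb-to-Keras converter:
from tensorflow.python.framework import tensor_util

# Map every Const node name in the frozen graph to its numpy value.
weights_by_name = {
    n.name: tensor_util.MakeNdarray(n.attr['value'].tensor)
    for n in od_graph_def.node
    if n.op == 'Const'
}

# Example: look up one of the tensors listed above.
name = 'FirstStageFeatureExtractor/resnet_v1_101/conv1/weights'
print(name, weights_by_name[name].shape)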

TF2 Keras - Feature Engineering in Keras saved model via Tensorflow Serving

The TensorFlow 2 documentation for preprocessing / feature engineering on top of a Keras model seems quite confusing and isn't very friendly.
Currently I have a simple N-layer Keras model with TF feature columns feeding into a dense layer. For training, the CSV files are read with the tf.data API, and I have written a feature engineering function that creates new features via dataset.map:
def feature_engg_features(features):
    # Add new features
    features['nodlgrbyvpatd'] = features['NODLGR'] / features['VPATD']
    return features
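For reference, this is roughly how such a function gets attached to the training input pipeline; the file pattern, batch size, and label column below are placeholders for illustration, not taken from the original code:
import tensorflow as tf

def feature_engg_features(features):
    # Add new features (same function as above)
    features['nodlgrbyvpatd'] = features['NODLGR'] / features['VPATD']
    return features

# Hypothetical CSV input pipeline; adjust the pattern, batch size and
# label column to the real data.
dataset = tf.data.experimental.make_csv_dataset(
    'train_*.csv', batch_size=32, label_name='label')
dataset = dataset.map(
    lambda features, label: (feature_engg_features(features), label))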
I can save the model easily with the tf.keras.models.save_model method. However, I am having trouble figuring out how to attach the feature engineering steps to the serving function.
Requirement: I want to take the same feature engineering function above and attach it to my serving function, so that the same feature engineering steps are applied to JSON input sent via tensorflow_model_server. I know about the Lambda layer option in Keras, but I want to do this via the saved_model method, and there are a lot of difficulties there.
For example, the code below gives an error:
def feature_engg_features(features):
    # Add new features
    features['nodlgrbyvpatd'] = features['NODLGR'] / features['VPATD']
    return features

@tf.function
def serving(data):
    data = tf.map_fn(feature_engg_features, data, dtype=tf.float32)
    # Predict
    predictions = m_(data)

version = "1"
tf.keras.models.save_model(
    m_,
    "./exported_model/" + version,
    overwrite=True,
    include_optimizer=True,
    save_format=None,
    signatures=serving,
    options=None
)
Error:
Only `tf.functions` with an input signature or concrete functions can be used as a signature.
The above error is because I have not provided an input signature, but since my model has 13 input fields I am not able to understand what the expected input signature should look like.
So I wanted to know if anyone knows the shortest way of solving this. This is a very basic requirement, yet TensorFlow seems to have made it quite complicated for Keras model serving.
GIST: https://colab.research.google.com/gist/rafiqhasan/6abe93ac454e942317005febef59a459/copy-of-dl-e2e-structured-mixed-data-tf-2-keras-estimator.ipynb
EDIT:
I fixed it: a TensorSpec has to be generated and passed for each feature, and the model has to be called inside the serving function.
@tf.function
def serving(WERKS, DIFGRIRD, SCENARIO, TOTIRQTY, VSTATU, EKGRP, TOTGRQTY,
            VPATD, EKORG, NODLGR, DIFGRIRV, NODLIR, KTOKK):
    ## Feature engineering
    nodlgrbyvpatd = tf.cast(NODLGR / VPATD, tf.float32)
    payload = {
        'WERKS': WERKS,
        'DIFGRIRD': DIFGRIRD,
        'SCENARIO': SCENARIO,
        'TOTIRQTY': TOTIRQTY,
        'VSTATU': VSTATU,
        'EKGRP': EKGRP,
        'TOTGRQTY': TOTGRQTY,
        'VPATD': VPATD,
        'EKORG': EKORG,
        'NODLGR': NODLGR,
        'DIFGRIRV': DIFGRIRV,
        'NODLIR': NODLIR,
        'KTOKK': KTOKK,
        'nodlgrbyvpatd': nodlgrbyvpatd,
    }
    ## Predict
    ## If the number of params passed here or a dtype is wrong, it fails with
    ## "couldn't compute output tensor"
    predictions = m_(payload)
    return predictions

serving = serving.get_concrete_function(
    WERKS=tf.TensorSpec([None, ], dtype=tf.string, name='WERKS'),
    DIFGRIRD=tf.TensorSpec([None, ], name='DIFGRIRD'),
    SCENARIO=tf.TensorSpec([None, ], dtype=tf.string, name='SCENARIO'),
    TOTIRQTY=tf.TensorSpec([None, ], name='TOTIRQTY'),
    VSTATU=tf.TensorSpec([None, ], dtype=tf.string, name='VSTATU'),
    EKGRP=tf.TensorSpec([None, ], dtype=tf.string, name='EKGRP'),
    TOTGRQTY=tf.TensorSpec([None, ], name='TOTGRQTY'),
    VPATD=tf.TensorSpec([None, ], name='VPATD'),
    EKORG=tf.TensorSpec([None, ], dtype=tf.string, name='EKORG'),
    NODLGR=tf.TensorSpec([None, ], name='NODLGR'),
    DIFGRIRV=tf.TensorSpec([None, ], name='DIFGRIRV'),
    NODLIR=tf.TensorSpec([None, ], name='NODLIR'),
    KTOKK=tf.TensorSpec([None, ], dtype=tf.string, name='KTOKK'))

version = "1"
tf.saved_model.save(
    m_,
    "./exported_model/" + version,
    signatures=serving
)
So the right way to do this is the approach shown above: feature engineering and pre-processing can be done in the serving_default signature. I tested it further via TensorFlow Serving.
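As a quick sanity check before pointing tensorflow_model_server at the export, the saved signature can be reloaded and called directly in Python. This is just a sketch: the feature values below are made-up placeholders whose dtypes have to match the TensorSpecs used when the signature was built.
import tensorflow as tf

loaded = tf.saved_model.load("./exported_model/1")
infer = loaded.signatures["serving_default"]

# Dummy single-row request; every value here is a placeholder.
example = {
    'WERKS': tf.constant(['1000']),
    'DIFGRIRD': tf.constant([0.0]),
    'SCENARIO': tf.constant(['A']),
    'TOTIRQTY': tf.constant([1.0]),
    'VSTATU': tf.constant(['X']),
    'EKGRP': tf.constant(['001']),
    'TOTGRQTY': tf.constant([1.0]),
    'VPATD': tf.constant([30.0]),
    'EKORG': tf.constant(['ORG1']),
    'NODLGR': tf.constant([2.0]),
    'DIFGRIRV': tf.constant([0.0]),
    'NODLIR': tf.constant([1.0]),
    'KTOKK': tf.constant(['K1']),
}
print(infer(**example))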

Is it safe to use the same initializer, regularizer, and constraint for multiple TensorFlow Keras layers?

I'm worried that variables created in (TensorFlow) Keras layers that share the same initializer, regularizer, and constraint objects may end up connected between layers. When these can be passed as strings (e.g., 'he_normal') there is no problem, but for ones that take parameters I have to pass the actual objects. For example, in the __init__ of a custom layer:
initializer_1 = tf.keras.initializers.he_normal()
regularizer_1 = tf.keras.regularizers.l2(l=0.001)
constraint_1 = tf.keras.constraints.MaxNorm(max_value=2, axis=[0,1,2])
layer_A = tf.keras.layers.Conv2D(
    ...
    kernel_initializer=initializer_1,
    kernel_regularizer=regularizer_1,
    kernel_constraint=constraint_1,
    ...
)
layer_B = tf.keras.layers.Conv2D(
    ...
    kernel_initializer=initializer_1,
    kernel_regularizer=regularizer_1,
    kernel_constraint=constraint_1,
    ...
)
Is this safe?
Yes, it's probably safe, but I'm unsure it's the best idea; I ran it, with these results:
Same .fit() loss for both (1) the same objects and (2) different objects (initializer_2, etc.), so each setup works as it would independently.
Layer weight initializations are different (as they should be) with the same initializer_1.
The model saves and loads successfully.
However, the objects are the same for each layer, which you can tell from their memory addresses:
print(layer_A.kernel_regularizer)
print(layer_B.kernel_regularizer)
<tensorflow.python.keras.regularizers.L1L2 object at 0x7f211bfd0c88>
<tensorflow.python.keras.regularizers.L1L2 object at 0x7f211bfd0c88>
It's then possible that some form of model serialization may be thrown off, particularly anything concerning the model graph, but I found nothing of the sort. Best practice would be to use unique initializer/regularizer/constraint objects for each layer, but your approach doesn't seem harmful either.
Thus: you can "do it until it breaks". (But you may not know when it breaks, e.g. when it causes model outputs to differ, unless you test for reproducibility.)
Full test example:
import tensorflow as tf
import numpy as np
import random
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Input
np.random.seed(1)
random.seed(2)
if tf.__version__.startswith('2'):
    tf.random.set_seed(3)
else:
    tf.set_random_seed(3)
initializer_1 = tf.keras.initializers.he_normal()
regularizer_1 = tf.keras.regularizers.l2(l=0.001)
constraint_1 = tf.keras.constraints.MaxNorm(max_value=2, axis=[0,1,2])
layer_A = tf.keras.layers.Conv2D(4, (1,1),
                                 kernel_initializer=initializer_1,
                                 kernel_regularizer=regularizer_1,
                                 kernel_constraint=constraint_1)
layer_B = tf.keras.layers.Conv2D(4, (1,1),
                                 kernel_initializer=initializer_1,
                                 kernel_regularizer=regularizer_1,
                                 kernel_constraint=constraint_1)
ipt = Input((16,16,4))
x = layer_A(ipt)
out = layer_B(x)
model = Model(ipt, out)
model.compile('adam', 'mse')
print(model.layers[1].get_weights()[0])
print(model.layers[2].get_weights()[0])
x = np.random.randn(32, 16, 16, 4)
model.fit(x, x)
model.save('model.h5')
model = load_model('model.h5')
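If you prefer the "unique objects per layer" practice mentioned above, a small factory function keeps the code short without sharing instances between layers; this is just one possible pattern, not something the answer prescribes:
import tensorflow as tf

def conv_kwargs():
    """Return fresh initializer/regularizer/constraint objects on each call."""
    return dict(
        kernel_initializer=tf.keras.initializers.he_normal(),
        kernel_regularizer=tf.keras.regularizers.l2(l=0.001),
        kernel_constraint=tf.keras.constraints.MaxNorm(max_value=2, axis=[0, 1, 2]),
    )

layer_A = tf.keras.layers.Conv2D(4, (1, 1), **conv_kwargs())
layer_B = tf.keras.layers.Conv2D(4, (1, 1), **conv_kwargs())

# The per-layer objects are now distinct:
print(layer_A.kernel_regularizer is layer_B.kernel_regularizer)  # False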

convert tensorflow .pb to .dlc fail with snpe

I save the graph to a .pb file, but I get an error when I convert the .pb to .dlc. Does anyone know why?
My code to build the model:
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.ops import variable_scope

X = tf.placeholder(tf.float32, shape=[None, 1], name="input")
with variable_scope.variable_scope("input"):
    a = tf.Variable([[1]], name="a", dtype=tf.float32)
    g = X * a
with variable_scope.variable_scope("output"):
    b = tf.Variable([[0]], name="b", dtype=tf.float32)
    ss = tf.add(g, b, name="output")

sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph = convert_variables_to_constants(sess, sess.graph_def, ["output/output"])
tf.train.write_graph(graph, './linear/', 'graph.pb', as_text=False)
sess.close()
convert cmd:
snpe-tensorflow-to-dlc --graph graph_sc.pb -i input 1 --out_node output/output --allow_unconsumed_nodes
error message:
2017-10-26 01:55:15,919 - 390 - INFO - INFO_ALL_BUILDING_LAYER_W_NODES: Building layer (ElementWiseMul) with nodes: [u'input_1/mul']
~/snpe-sdk/snpe-1.6.0/lib/python/converters/tensorflow/layers/eltwise.py:108: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer input_1/mul: at least two inputs required, have 1; error_component=Model Validation; line_no=732; thread_id=140514161018688
output_name)
2017-10-26 01:55:15,920 - 390 - INFO - INFO_ALL_BUILDING_LAYER_W_NODES: Building layer (ElementWiseSum) with nodes: [u'output/output']
~/snpe-sdk/snpe-1.6.0/lib/python/converters/tensorflow/layers/eltwise.py:84: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer output/output: at least two inputs required, have 1; error_component=Model Validation; line_no=732; thread_id=140514161018688
output_name)
SNPE requires a 3-D tensor as input. Try updating your command from -i input 1 to -i input 1,1,1.
The input_dim argument to snpe-tensorflow-to-dlc should describe a 3-dimensional tensor, as in the example below:
snpe-tensorflow-to-dlc --graph $SNPE_ROOT/models/inception_v3/tensorflow/inception_v3_2016_08_28_frozen.pb \
    --input_dim input "1,299,299,3" --out_node "InceptionV3/Predictions/Reshape_1" --dlc inception_v3.dlc \
    --allow_unconsumed_nodes
For a more detailed reference on converting a TensorFlow model to DLC with the Neural Processing SDK, follow the link below:
https://developer.qualcomm.com/docs/snpe/model_conv_tensorflow.html

Understanding why results between Keras and Tensorflow are different

I am currently trying to do some work in both Keras and TensorFlow, and I stumbled upon a small thing I do not understand. In the code below, I am trying to predict the responses of a network either explicitly via a TensorFlow session or by using the model's predict_on_batch function.
import os
import keras
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Dense, Dropout, Flatten, Input
from keras.models import Model
# Try to standardize output
np.random.seed(1)
tf.set_random_seed(1)
# Building the model
inputs = Input(shape=(224,224,3))
base_model = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', \
input_tensor=inputs, input_shape=(224, 224, 3))
x = base_model.get_layer("fc2").output
x = Dropout(0.5, name='model_fc_dropout')(x)
x = Dense(2048, activation='sigmoid', name='final_fc')(x)
x = Dropout(0.5, name='final_fc_dropout')(x)
predictions = Dense(1, activation='sigmoid', name='fcout')(x)
model = Model(outputs=predictions, inputs=inputs)
##################################################################
model.compile(loss='binary_crossentropy',
              optimizer=tf.train.MomentumOptimizer(learning_rate=5e-4, momentum=0.9),
              metrics=['accuracy'])
image_batch = np.random.random((64,224,224,3))
# Outputs predicted by TF
outs = [predictions]
feed_dict={inputs:image_batch, K.learning_phase():0}
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    outputs = sess.run(outs, feed_dict)[0]
    print outputs.flatten()
# Outputs predicted by Keras
outputs = model.predict_on_batch(image_batch)
print outputs.flatten()
My issue is that I get two different results, even though I tried to remove all sources of randomness by setting the seeds to 1 and running the operations on the CPU. Even then, I get the following results:
[ 0.26079229 0.26078743 0.26079154 0.26079673 0.26078942 0.26079443
0.26078886 0.26079088 0.26078972 0.26078728 0.26079121 0.26079452
0.26078513 0.26078424 0.26079014 0.26079312 0.26079521 0.26078743
0.26078558 0.26078537 0.26078674 0.26079136 0.26078632 0.26077667
0.26079312 0.26078999 0.26079065 0.26078704 0.26078928 0.26078624
0.26078892 0.26079202 0.26079065 0.26078689 0.26078963 0.26078749
0.26078817 0.2607986 0.26078528 0.26078412 0.26079187 0.26079246
0.26079226 0.26078457 0.26078099 0.26078072 0.26078376 0.26078475
0.26078326 0.26079389 0.26079792 0.26078579 0.2607882 0.2607961
0.26079237 0.26078218 0.26078638 0.26079753 0.2607787 0.26078618
0.26078096 0.26078594 0.26078215 0.26079002]
and
[ 0.25331706 0.25228402 0.2534174 0.25033095 0.24851511 0.25099936
0.25240892 0.25139931 0.24948661 0.25183493 0.25104815 0.25164133
0.25214729 0.25265765 0.25128496 0.25249782 0.25247478 0.25314394
0.25014618 0.25280923 0.2526398 0.25381723 0.25138992 0.25072744
0.25069866 0.25307226 0.25063521 0.25133523 0.25050756 0.2536433
0.25164688 0.25054023 0.25117773 0.25352773 0.25157067 0.25173825
0.25234801 0.25182116 0.25284401 0.25297374 0.25079012 0.25146705
0.25401884 0.25111189 0.25192681 0.25252578 0.25039044 0.2525287
0.25165257 0.25357804 0.25001243 0.2495154 0.2531895 0.25270832
0.25305843 0.25064403 0.25180396 0.25231308 0.25224048 0.25068772
0.25212681 0.24812476 0.25027585 0.25243458]
Does anybody have an idea what could be going on in the background that could change the results? (These results do not change if one runs them again)
The difference gets even bigger if the network runs on a GPU (Titan X), e.g. the second output is:
[ 0.3302682 0.33054096 0.32677746 0.32830611 0.32972822 0.32807562
0.32850873 0.33161065 0.33009702 0.32811245 0.3285495 0.32966742
0.33050382 0.33156893 0.3300975 0.3298254 0.33350074 0.32991216
0.32990077 0.33203539 0.32692945 0.33036903 0.33102706 0.32648
0.32933888 0.33161271 0.32976636 0.33252293 0.32859167 0.33013415
0.33080408 0.33102706 0.32994759 0.33150592 0.32881773 0.33048317
0.33040857 0.32924038 0.32986534 0.33131596 0.3282761 0.3292698
0.32879189 0.33186096 0.32862625 0.33067161 0.329018 0.33022234
0.32904804 0.32891914 0.33122411 0.32900628 0.33088413 0.32931429
0.3268061 0.32924181 0.32940546 0.32860965 0.32828435 0.3310211
0.33098024 0.32997403 0.33025959 0.33133432]
whereas in the first one, the differences only occur in the fifth and later decimal places:
[ 0.26075357 0.26074868 0.26074538 0.26075155 0.260755 0.26073951
0.26074919 0.26073971 0.26074231 0.26075247 0.2607362 0.26075858
0.26074955 0.26074123 0.26074299 0.26074946 0.26074076 0.26075014
0.26074076 0.26075229 0.26075041 0.26074776 0.26075897 0.26073995
0.260746 0.26074466 0.26073912 0.26075709 0.26075712 0.26073799
0.2607322 0.26075566 0.26075059 0.26073873 0.26074558 0.26074558
0.26074359 0.26073721 0.26074392 0.26074731 0.26074862 0.26074174
0.26074126 0.26074588 0.26073804 0.26074919 0.26074269 0.26074606
0.26075307 0.2607446 0.26074025 0.26074648 0.26074952 0.26073608
0.26073566 0.26073873 0.26074576 0.26074475 0.26074636 0.26073411
0.2607542 0.26074755 0.2607449 0.2607407 ]
The results are different because the initializations are different.
In the TF path you run your own init_op to initialize the variables:
sess.run(init_op)
This re-initializes every variable randomly, discarding the weights the model already holds. Keras, on the other hand, uses its own initialization inside its model class, not the init_op defined in your code, so the two paths predict with different weights.
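A minimal sketch of one way to make the two paths agree, assuming the Keras + TF1 setup from the question: reuse the session Keras already created and initialized (which holds the pretrained VGG16 weights and the freshly initialized Dense layers) instead of opening a new session and re-running the initializer.
from keras import backend as K

# Reuse Keras's own session rather than a fresh one, so we never run
# tf.global_variables_initializer() and wipe out the weights the model
# is already using.
sess = K.get_session()
outputs_tf = sess.run(predictions,
                      {inputs: image_batch, K.learning_phase(): 0})
outputs_keras = model.predict_on_batch(image_batch)
# The two arrays should now match up to floating-point noise.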