Convert TensorFlow frozen .pb model to Keras model - tensorflow

I have a TensorFlow frozen .pb model and I would like to retrain it with new data. To do that, I would like to convert the .pb frozen model to a Keras model.
I was able to visualise the layers and weight values, and I want to know whether it is possible to convert this .pb model to a Keras model in order to retrain it, or whether there is another way to retrain a frozen .pb model.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile('weight/saved_model.pb', 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

conf = tf.ConfigProto()
session = tf.Session(graph=detection_graph, config=conf)

# layer names
layers = [op.name for op in detection_graph.get_operations()]
for layer in layers:
    print(layer)
Const
ToFloat
Const_1
ToFloat_1
Const_2
ToFloat_2
image_tensor
ToFloat_3
Preprocessor/map/Shape
Preprocessor/map/strided_slice/stack
Preprocessor/map/strided_slice/stack_1
Preprocessor/map/strided_slice/stack_2
Preprocessor/map/strided_slice
Preprocessor/map/TensorArray
Preprocessor/map/TensorArrayUnstack/Shape
Preprocessor/map/TensorArrayUnstack/strided_slice/stack
Preprocessor/map/TensorArrayUnstack/strided_slice/stack_1
Preprocessor/map/TensorArrayUnstack/strided_slice/stack_2
Preprocessor/map/TensorArrayUnstack/strided_slice
Preprocessor/map/TensorArrayUnstack/range/start
Preprocessor/map/TensorArrayUnstack/range/delta
Preprocessor/map/TensorArrayUnstack/range
Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3
Preprocessor/map/Const
Preprocessor/map/TensorArray_1
Preprocessor/map/TensorArray_2
Preprocessor/map/while/Enter
Preprocessor/map/while/Enter_1
Preprocessor/map/while/Enter_2
Preprocessor/map/while/Merge
Preprocessor/map/while/Merge_1
Preprocessor/map/while/Merge_2
Preprocessor/map/while/Less/Enter
Preprocessor/map/while/Less
Preprocessor/map/while/LoopCond
Preprocessor/map/while/Switch
Preprocessor/map/while/Switch_1
Preprocessor/map/while/Switch_2
Preprocessor/map/while/Identity
Preprocessor/map/while/Identity_1
Preprocessor/map/while/Identity_2
Preprocessor/map/while/TensorArrayReadV3/Enter
Preprocessor/map/while/TensorArrayReadV3/Enter_1
Preprocessor/map/while/TensorArrayReadV3
Preprocessor/map/while/ResizeImage/stack
Preprocessor/map/while/ResizeImage/resize_images/ExpandDims/dim
Preprocessor/map/while/ResizeImage/resize_images/ExpandDims
Preprocessor/map/while/ResizeImage/resize_images/ResizeBilinear
Preprocessor/map/while/ResizeImage/resize_images/Squeeze
Preprocessor/map/while/ResizeImage/stack_1
Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3/Enter
Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3
Preprocessor/map/while/TensorArrayWrite_1/TensorArrayWriteV3/Enter
Preprocessor/map/while/TensorArrayWrite_1/TensorArrayWriteV3
Preprocessor/map/while/add/y
Preprocessor/map/while/add
Preprocessor/map/while/NextIteration
Preprocessor/map/while/NextIteration_1
Preprocessor/map/while/NextIteration_2
Preprocessor/map/while/Exit_1
Preprocessor/map/while/Exit_2
Preprocessor/map/TensorArrayStack/TensorArraySizeV3
Preprocessor/map/TensorArrayStack/range/start
Preprocessor/map/TensorArrayStack/range/delta
Preprocessor/map/TensorArrayStack/range
Preprocessor/map/TensorArrayStack/TensorArrayGatherV3
Preprocessor/map/TensorArrayStack_1/TensorArraySizeV3
Preprocessor/map/TensorArrayStack_1/range/start
Preprocessor/map/TensorArrayStack_1/range/delta
Preprocessor/map/TensorArrayStack_1/range
Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3
Preprocessor/sub/y
Preprocessor/sub
Shape
FirstStageFeatureExtractor/Shape
FirstStageFeatureExtractor/strided_slice/stack
FirstStageFeatureExtractor/strided_slice/stack_1
FirstStageFeatureExtractor/strided_slice/stack_2
FirstStageFeatureExtractor/strided_slice
FirstStageFeatureExtractor/GreaterEqual/y
FirstStageFeatureExtractor/GreaterEqual
FirstStageFeatureExtractor/Shape_1
FirstStageFeatureExtractor/strided_slice_1/stack
FirstStageFeatureExtractor/strided_slice_1/stack_1
FirstStageFeatureExtractor/strided_slice_1/stack_2
FirstStageFeatureExtractor/strided_slice_1
FirstStageFeatureExtractor/GreaterEqual_1/y
FirstStageFeatureExtractor/GreaterEqual_1
FirstStageFeatureExtractor/LogicalAnd
FirstStageFeatureExtractor/Assert/Assert/data_0
FirstStageFeatureExtractor/Assert/Assert
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/Pad/paddings
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/Pad
FirstStageFeatureExtractor/resnet_v1_101/conv1/weights
FirstStageFeatureExtractor/resnet_v1_101/conv1/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/conv1/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/conv1/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/conv1/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/conv1/Relu
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/pool1/MaxPool
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/shortcut/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv1/Relu
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv2/Relu
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/weights
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/weights/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/Conv2D
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/gamma
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/gamma/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/beta
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/beta/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_mean/read
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance
FirstStageFeatureExtractor/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/moving_variance/read
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/add
FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/block1/unit_1/bottleneck_v1/Relu
# visualise weights
from tensorflow.python.framework import tensor_util

weight_nodes = [n for n in od_graph_def.node if n.op == 'Const']
for n in weight_nodes:
    print("Name of the node - %s" % n.name)
    print("Value - ")
    print(tensor_util.MakeNdarray(n.attr['value'].tensor))
Value -
[-0.4540853 0.32056493 -0.22912885 0.2541284 -0.9532805 -0.25493076
-0.5082643 -0.33983043 -0.22494698 -0.3235537 -0.36385655 -0.28415084
-0.5575271 -0.1910503 -0.49338412 -0.42885196 -0.33558807 -0.6404074
-0.18031892 -0.2801433 -0.34614867 -0.5399439 -0.15712368 -0.2957037
0.15714812 -0.48186925 -0.33291766 -0.48722127 -0.2535558 -0.16819339
-0.43470743 -0.2855552 -0.3963132 -0.42684224 -0.3209245 -0.3658357
-0.30626386 -0.4134465 0.03801734 -0.43680775 -0.29162335 -0.36223125
-0.48077616 -0.43840355 -0.39883605 -0.31524658 -0.47961026 -0.41313347
-0.19943158 -0.9744824 -0.44492397 -0.62325597 -0.4503377 -0.32147056
-0.32696253 -0.3964685 -0.4328564 -0.28047583 -0.10022314 -0.20008062
-0.25767517 -0.32787508 -0.7490349 -0.26016906 -0.1388798 -0.00777247
-0.34971157 -0.37617198 -0.16701727 -0.49006486 0.25721186 -0.2046591
-0.39323628 -0.6523374 -0.5105675 -0.32751757 -0.2652611 -0.00875285
-0.2443423 -0.44029924 -0.37004694 -0.23282264 -0.3413406 -0.35088956
-0.46555945 -0.56252545 -0.5163544 -0.5306529 -0.62014216 -0.24062826
-0.06821899 -0.33485538 -0.4521361 -0.6593677 -0.46554345 -0.40829748
-0.49771288 -0.35069707 -0.0229042 -0.2298909 -0.25005338 -0.32683375
-0.49131876 -0.30173704 -0.29786837 -0.67599934 -0.35004795 -0.20007414
-0.16419795 -0.49680758 -0.42260775 -0.2726915 -0.6200565 -0.35487327
-0.3344669 -0.52208656 -0.4569452 -0.12290002 -0.5652516 -0.31340426
-0.3985747 -0.5281181 -0.7129867 -0.567227 -0.28814644 -0.1302681
-0.2001429 -0.10916138 -0.68222445 -0.29323065 -0.05704954 -0.22789052
-0.61558 -0.07655394 -0.52930903 -0.34198323 -0.24209192 -0.5053026
-0.4004574 -0.15969647 -0.35341594 -0.5591511 -0.40825605 -0.01070203
-0.7428409 -0.45172128 -0.43380788 0.2568167 -0.6722297 -0.35276502
-0.6669023 -0.20694211 -0.20189697 -0.5070727 -0.3972058 -0.00848175
-0.2994657 -0.34944996 -0.3389741 -0.17936742 -0.92425096 -0.05345222
-0.38853544 -0.39970097 -0.3607101 -0.7013903 -0.10112807 -0.2565767
-0.34176925 -0.52231157 -0.476935 -0.45604366 -0.5980594 -0.15625392
-0.42476812 -0.31922927 -0.31709027 -0.28081933 -0.2383788 -0.3822803
-0.58110493 -0.48278278 0.37628186 -0.28682145 -0.36748675 -0.34278873
-0.4303303 -0.31955504 -0.1693851 -0.4306607 -0.30947846 -0.29114336
-0.490207 -0.487192 -0.24403661 -0.30346867 -0.27445573 -0.35093272
-0.22612163 -0.41189626 -0.24778591 -0.31198287 -0.24912828 -0.40960005
-0.35579422 -0.43333185 -0.6709562 -0.844466 -0.14793177 -0.2053809
-0.5682712 -0.23429349 -0.35484084 -0.40705127 -0.16986188 -0.52707803
-0.371436 -0.3203282 -0.548174 -0.2454479 0.14760983 -0.23899662
-0.49904364 -0.5957814 -0.24498118 -0.15208362 -0.38576075 -0.4792993
-0.37565476 -0.48903054 -0.30210334 -0.5700767 -0.21236268 -0.57294637
-0.27853557 -0.46409875 -0.24420401 -0.48441198 -0.64328116 -0.47033066
-0.2866155 -0.24524617 -0.682954 0.22307628 -0.4910605 -0.6759475
-0.5597055 -0.08977205 -0.35018525 -0.326844 -0.407895 -0.4286294
-0.43290117 -0.43665645 -0.03084541 -0.38888484]
Name of the node - FirstStageFeatureExtractor/resnet_v1_101/block3/unit_23/bottleneck_v1/conv2/BatchNorm/moving_mean
Value -
[-0.15717725 0.07592656 0.10988383 -0.0490528 -0.0106115 -0.15786496
-0.11409757 -0.06248363 -0.06544019 -0.14081511 -0.17109169 -0.02211305
-0.03773279 -0.12710223 -0.12394962 -0.09206913 -0.16163616 -0.1723962
-0.20561686 -0.1415835 -0.06350002 -0.16257662 -0.09603897 -0.13460924
-0.02432729 -0.04199889 -0.1696589 -0.13546434 -0.06083817 -0.10085563
-0.09488994 0.04238822 -0.01342436 -0.15949754 -0.17468 -0.17448752
-0.07917193 -0.12688611 -0.22540613 0.0227522 -0.10866571 -0.05292162
-0.09184872 -0.15055841 -0.166075 -0.20481528 -0.0362504 -0.03834244
-0.0642691 0.0847474 -0.05358214 -0.01985169 -0.23435758 -0.14982411
-0.16710722 -0.15368192 -0.09551494 -0.16474426 -0.10215978 -0.16959386
-0.20461345 -0.03911749 -0.14563459 -0.00461106 -0.0700466 -0.0968714
0.00481458 -0.04512146 -0.14357145 -0.19567277 -0.19265638 -0.11200134
-0.07045556 -0.02379003 -0.10341825 -0.03273268 -0.11432283 -0.23813714
-0.02153216 -0.14714707 -0.1272525 -0.16946757 -0.1185687 -0.12256996
-0.10922257 0.11201918 -0.02812816 -0.02523885 0.03150908 -0.15362865
-0.1581234 0.00402692 -0.15670358 -0.04513093 -0.1437301 -0.01758968
-0.05778598 -0.14521386 0.01999546 -0.1154804 -0.19333984 -0.06223019
-0.07321492 -0.07283846 -0.12676379 -0.04754475 -0.12439732 -0.0936267
-0.1395944 -0.10460664 -0.1278241 -0.11326854 -0.08205313 -0.1579616
-0.04214557 -0.13230218 -0.15926664 -0.11025801 -0.17300427 -0.03396817
0.01946657 -0.02238615 -0.1087622 0.05757551 -0.06369423 -0.15768676
-0.24096929 -0.16477783 -0.08505792 -0.1452452 -0.1228498 -0.18778118
-0.10847382 -0.18381962 -0.09624809 -0.11529081 -0.08679712 -0.026652
-0.0709608 -0.13419388 -0.12301534 -0.0762615 -0.1779636 0.0712171
-0.02847089 -0.03589171 -0.16578905 0.06647447 -0.21369869 -0.11737502
-0.09483987 -0.1711197 -0.10486519 -0.13668095 -0.09134877 -0.17883773
-0.09831014 0.00876497 -0.01898824 -0.12447593 -0.11287156 -0.14073113
-0.14082104 -0.20715335 -0.0960372 -0.07659613 -0.05445954 -0.20973818
-0.1988086 -0.07440972 -0.01645763 -0.06222945 -0.10685738 -0.11946698
-0.02095333 -0.22138861 -0.03730138 -0.14171903 -0.2022561 -0.17174341
-0.10660502 -0.04700039 0.21646874 -0.2798932 -0.0933376 -0.15969937
-0.11783579 -0.08365067 -0.15353799 -0.09762033 -0.20146982 -0.10279506
-0.14905539 -0.12379898 -0.12730977 -0.06474843 -0.09232798 -0.07322481
-0.19277479 -0.04613561 -0.13307215 -0.1302453 -0.17341715 -0.07773575
-0.04985441 -0.10355914 -0.1078042 0.00853426 -0.13236587 -0.09865464
-0.00562798 -0.12181614 -0.22204153 -0.05784107 -0.18404393 -0.00629737
-0.10935344 -0.06820714 -0.0653091 -0.10879692 -0.12568252 -0.1386066
-0.10166074 -0.05466842 -0.09380525 -0.20807257 -0.09541806 -0.0593087
-0.12657529 -0.12999491 -0.08768366 -0.11886697 -0.11744718 -0.08663379
-0.1193763 -0.08805308 -0.1727127 -0.00644481 -0.11248584 0.00345422
-0.1572065 -0.07471903 -0.09579102 -0.08642395 -0.08458085 0.09465432
-0.0749066 0.04966704 -0.06954169 -0.05878955 -0.06419392 -0.03276661
-0.12470958 -0.05546655 -0.227529 -0.18447787]
Name of the node - FirstStageFeatureExtractor/resnet_v1_101/block3/unit_23/bottleneck_v1/conv2/BatchNorm/moving_variance
Value -
[0.0246328 0.15109949 0.08894898 0.18027042 0.17676981 0.03751087
0.04503109 0.05276729 0.07223138 0.0510329 0.05425516 0.03137682
0.03304449 0.04246168 0.03573015 0.04623629 0.04350977 0.03313071
0.04221025 0.04383035 0.04444639 0.03855932 0.03536414 0.03979863
0.12287893 0.03251396 0.03785344 0.03404477 0.05788973 0.05000394
0.03611772 0.04200751 0.03447625 0.04483994 0.03660354 0.05133959
0.03763206 0.03538596 0.0754443 0.03522145 0.03955411 0.03173661
0.0307942 0.04629485 0.04174245 0.06112912 0.03868164 0.03610169
0.03457952 0.03306652 0.03208408 0.03130876 0.04425795 0.0543987
0.04187379 0.05363455 0.04050811 0.03573698 0.03850965 0.0455887
0.03908563 0.03638468 0.03675755 0.03934622 0.03829562 0.05719311
0.03649542 0.02670012 0.03625882 0.03489691 0.1415123 0.04595942
0.04740899 0.0362278 0.03886651 0.02929924 0.03411594 0.05056536
0.04927529 0.03879778 0.04310857 0.03617069 0.03727381 0.03174838
0.03664678 0.11009517 0.02481679 0.03278335 0.05939993 0.03593603
0.03523111 0.02616766 0.03424251 0.0344963 0.04709259 0.03183395
0.03848327 0.03769235 0.04920188 0.03951766 0.03903662 0.03408873
0.03334353 0.03571996 0.06089876 0.05937878 0.03344719 0.04584214
0.04397506 0.03307587 0.03318075 0.04908697 0.03105905 0.05623207
0.05571087 0.03729884 0.0459931 0.04489287 0.04131918 0.03771564
0.04307055 0.03846145 0.05100024 0.043197 0.04114237 0.05911661
0.03956391 0.04361542 0.03280107 0.03139602 0.05827128 0.04082137
0.03654662 0.04679587 0.0309209 0.03587716 0.03316435 0.02562451
0.03213744 0.04613245 0.03237009 0.03835384 0.0523162 0.09062187
0.03235107 0.02755782 0.03724603 0.09937367 0.07468802 0.03093713
0.02697526 0.04269885 0.0421169 0.03236708 0.03591317 0.05501648
0.04456919 0.03508786 0.03599152 0.05607978 0.04522029 0.05920529
0.02929832 0.04003889 0.04818926 0.03470782 0.04554065 0.03561359
0.05097282 0.03162495 0.03105365 0.0479041 0.04201244 0.03967372
0.03850067 0.03719724 0.03236697 0.04217942 0.04472516 0.06050858
0.02926525 0.03916026 0.19374044 0.0456128 0.03878816 0.0363459
0.03911297 0.03797501 0.04174663 0.04230652 0.03831306 0.04085784
0.0427238 0.03057992 0.03531543 0.03794312 0.03533817 0.03804426
0.02861266 0.03188168 0.04924265 0.04050563 0.04611768 0.03238661
0.03605983 0.02848955 0.0390384 0.02533518 0.04691761 0.04362593
0.03430253 0.04009234 0.04348955 0.03593319 0.04101577 0.02840054
0.03196882 0.03733175 0.03510671 0.04594112 0.15954044 0.04420807
0.04092796 0.02599568 0.04426222 0.05545853 0.04792674 0.05130733
0.03689643 0.04262947 0.04004824 0.02642712 0.04293796 0.0296522
0.04023833 0.04849512 0.04770563 0.02930979 0.033105 0.03017242
0.03861351 0.03882564 0.03877705 0.10539907 0.03036495 0.0441276
0.02591156 0.07158393 0.03235719 0.04087997 0.02184044 0.03179693
0.04023989 0.03014399 0.06206222 0.06290652]
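For reference: as far as I know there is no automatic converter from a frozen GraphDef back to a trainable Keras model in TF 1.x. One manual route is to rebuild the architecture in Keras yourself and copy the extracted Const values into the corresponding layers. A minimal sketch; keras_model and the layer mapping are hypothetical placeholders, only the node name is taken from the op list above:

from tensorflow.python.framework import tensor_util

# node name -> numpy array for every constant in the frozen graph
weights = {n.name: tensor_util.MakeNdarray(n.attr['value'].tensor)
           for n in od_graph_def.node if n.op == 'Const'}

# e.g. copy the first conv kernel into a hand-rebuilt Keras model
kernel = weights['FirstStageFeatureExtractor/resnet_v1_101/conv1/weights']
# keras_model.get_layer('conv1').set_weights([kernel])  # hypothetical layer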

Related

How can I track weights when I use tf.train.AdamOptimizer

I am using tf.train.AdamOptimizer to train my neural network. I know I can train easily this way, but how can I track the weight changes? Is there a method or function for this job?
Thank you very much.
optimizer = tf.train.AdamOptimizer(learning_rate=decoder.learning_rate).minimize(loss,global_step=global_step)
Below is an example that prints the weights of one layer of a model.
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
print("Tensorflow Version:",tf.__version__)
model = MobileNetV2(input_shape=[128, 128, 3], include_top=False) #or whatever model
print("Layer of the model:",model.layers[2])
print("Weights of the Layer",model.layers[2].get_weights())
Output:
I have cut the output short, as the weights were lengthy.
Tensorflow Version: 1.15.0
Layer of the model: <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7fe9123d2ac8>
Weights of the Layer [array([[[[ 1.60920480e-03, -1.45352582e-22, 1.54917374e-01,
2.29649822e-06, 1.49279218e-02, -5.39761280e-21,
7.01060288e-21, 1.54408276e-21, -1.12762444e-01,
-2.37320393e-01, 2.77190953e-01, 5.03320247e-02,
-4.21045721e-01, 1.73461720e-01, -5.35633206e-01,
-5.95900055e-04, 5.34933396e-02, 2.24988922e-01,
-1.49572559e-22, 2.20291526e-03, -5.38195252e-01,
-2.21309029e-02, -4.88732375e-22, -3.89234926e-21,
2.84152419e-22, -1.23437764e-02, -1.14439223e-02,
1.46071922e-22, -4.24997229e-03, -2.48236431e-09,
-4.64977883e-02, -3.43741417e-01],
[ 1.25032081e-03, -2.00014382e-22, 2.32940048e-01,
2.78269158e-06, 1.99653972e-02, 7.11864268e-20,
6.08769832e-21, 2.95990709e-22, -2.76436746e-01,
-5.15990913e-01, 6.78669810e-01, 3.02553400e-02,
-7.55709827e-01, 3.29371482e-01, -9.70950842e-01,
-3.02999169e-02, 7.99737051e-02, -4.45111930e-01,
-2.01127320e-22, 1.61909293e-02, 2.13520035e-01,
4.36614119e-02, -2.21765310e-22, 4.13772868e-21,
2.79922130e-22, 4.81817983e-02, -2.71119680e-02,
4.72275196e-22, 1.12856282e-02, 3.38369194e-10,
-1.29655674e-01, -3.85710597e-01],
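If you are training with tf.train.AdamOptimizer in a plain TF session rather than through Keras, a minimal sketch for tracking the weight changes is to fetch the variables together with the train op on every step. The toy graph below (x, y, w, the shapes, and the learning rate) is illustrative, not taken from the question:

import numpy as np
import tensorflow as tf

# illustrative model: a single weight matrix and a squared-error loss
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.random_normal([4, 1]), name='w')
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

global_step = tf.Variable(0, trainable=False)
train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss, global_step=global_step)

x_batch = np.random.random((8, 4)).astype(np.float32)
y_batch = np.random.random((8, 1)).astype(np.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(5):
        # fetching w alongside the train op shows the weights after each update
        _, w_val = sess.run([train_op, w], feed_dict={x: x_batch, y: y_batch})
        print("step", step, "w:", w_val.flatten())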

Modify and combine two different frozen graphs generated using the TensorFlow Object Detection API for inference

I am working with the TensorFlow Object Detection API. I have trained two different models (SSD-MobileNet and FRCNN-Inception-v2) for my use case. Currently, my workflow is like this:
1. Take an input image and detect one particular object using SSD-MobileNet.
2. Crop the input image with the bounding box generated in step 1, then resize it to a fixed size (e.g. 200 x 300).
3. Feed this cropped and resized image to FRCNN-Inception-v2 to detect smaller objects inside the ROI.
Currently, at inference time, when I load the two separate frozen graphs and follow these steps, I get my desired results. But I need a single frozen graph because of my deployment requirements. I am new to TensorFlow and want to combine both graphs, with the crop-and-resize step in between.
Thanks @matt and @Vedanshu for responding. Here is the updated code that works fine for my requirement. Please give suggestions if it needs any improvement, as I am still learning.
# Dependencies
import tensorflow as tf
import numpy as np

# load a graph from a pb file path
def load_graph(pb_file):
    graph = tf.Graph()
    with graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(pb_file, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return graph

# returns a tensor dictionary from a graph
def get_inference(graph, count=0):
    with graph.as_default():
        ops = tf.get_default_graph().get_operations()
        all_tensor_names = {output.name for op in ops for output in op.outputs}
        tensor_dict = {}
        for key in ['num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks', 'image_tensor']:
            tensor_name = key + ':0' if count == 0 else key + '_{}:0'.format(count)
            if tensor_name in all_tensor_names:
                tensor_dict[key] = tf.get_default_graph().\
                                        get_tensor_by_name(tensor_name)
    return tensor_dict

# renames while_context because there is one while context for every graph
# open issue at https://github.com/tensorflow/tensorflow/issues/22162
def rename_frame_name(graphdef, suffix):
    for n in graphdef.node:
        if "while" in n.name:
            if "frame_name" in n.attr:
                n.attr["frame_name"].s = str(n.attr["frame_name"]).replace(
                    "while_context", "while_context" + suffix).encode('utf-8')

if __name__ == '__main__':
    # your pb file paths
    frozenGraphPath1 = '...replace_with_your_path/some_frozen_graph.pb'
    frozenGraphPath2 = '...replace_with_your_path/some_frozen_graph.pb'
    # new file name to save the combined model
    combinedFrozenGraph = 'combined_frozen_inference_graph.pb'

    # load both graphs
    graph1 = load_graph(frozenGraphPath1)
    graph2 = load_graph(frozenGraphPath2)

    # get tensor names from the first graph
    tensor_dict1 = get_inference(graph1)

    with graph1.as_default():
        # get the tensors needed to add the crop and resize step
        image_tensor = tensor_dict1['image_tensor']
        scores = tensor_dict1['detection_scores'][0]
        num_detections = tf.cast(tensor_dict1['num_detections'][0], tf.int32)
        detection_boxes = tensor_dict1['detection_boxes'][0]

        # I had to add NMS because my SSD model outputs 100 detections and hence
        # runs out of memory because of the huge tensor shape
        selected_indices = tf.image.non_max_suppression(detection_boxes, scores, 5, iou_threshold=0.5)
        selected_boxes = tf.gather(detection_boxes, selected_indices)

        # intermediate crop and resize step, whose output is the input of the second model (FRCNN)
        cropped_img = tf.image.crop_and_resize(image_tensor,
                                               selected_boxes,
                                               tf.zeros(tf.shape(selected_indices), dtype=tf.int32),
                                               [300, 60]  # resize to 300 x 60
                                               )
        cropped_img = tf.cast(cropped_img, tf.uint8, name='cropped_img')

    gdef1 = graph1.as_graph_def()
    gdef2 = graph2.as_graph_def()

    g1name = "graph1"
    g2name = "graph2"

    # rename while_context in both graphs
    rename_frame_name(gdef1, g1name)
    rename_frame_name(gdef2, g2name)

    # combine both models and save as one
    with tf.Graph().as_default() as g_combined:
        x, y = tf.import_graph_def(gdef1, return_elements=['image_tensor:0', 'cropped_img:0'])
        z, = tf.import_graph_def(gdef2, input_map={"image_tensor:0": y}, return_elements=['detection_boxes:0'])
        tf.train.write_graph(g_combined, "./", combinedFrozenGraph, as_text=False)
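Once the combined graph has been written out, a quick sanity check is to load it back and run both stages in one session. A sketch: the 'import'/'import_1' prefixes are an assumption based on the default names tf.import_graph_def assigns in the combining step above, and the dummy image shape is arbitrary.

combined_graph = load_graph(combinedFrozenGraph)
with tf.Session(graph=combined_graph) as sess:
    image = np.zeros((1, 600, 600, 3), dtype=np.uint8)  # dummy input image
    boxes = sess.run('import_1/detection_boxes:0',
                     feed_dict={'import/image_tensor:0': image})
    print(boxes.shape)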
You can feed the output of one graph into another using input_map in import_graph_def. You also have to rename the while_context, because there is one while context for every graph. Something like this:
def get_frozen_graph(graph_file):
    """Read Frozen Graph file from disk."""
    with tf.gfile.GFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

def rename_frame_name(graphdef, suffix):
    # Bug reported at https://github.com/tensorflow/tensorflow/issues/22162#issuecomment-428091121
    for n in graphdef.node:
        if "while" in n.name:
            if "frame_name" in n.attr:
                n.attr["frame_name"].s = str(n.attr["frame_name"]).replace(
                    "while_context", "while_context" + suffix).encode('utf-8')

...

l1_graph = tf.Graph()
with l1_graph.as_default():
    trt_graph1 = get_frozen_graph(pb_fname1)
    [tf_input1, tf_scores1, tf_boxes1, tf_classes1, tf_num_detections1] = tf.import_graph_def(trt_graph1,
        return_elements=['image_tensor:0', 'detection_scores:0', 'detection_boxes:0', 'detection_classes:0', 'num_detections:0'])

    input1 = tf.identity(tf_input1, name="l1_input")
    boxes1 = tf.identity(tf_boxes1[0], name="l1_boxes")  # index by 0 to remove batch dimension
    scores1 = tf.identity(tf_scores1[0], name="l1_scores")
    classes1 = tf.identity(tf_classes1[0], name="l1_classes")
    num_detections1 = tf.identity(tf.dtypes.cast(tf_num_detections1[0], tf.int32), name="l1_num_detections")

    ...
    # Make your output tensor
    tf_out = # your output tensor (here, crop the input image with the bounding box generated in step 1 and then resize it to a fixed size (e.g. 200 x 300))
...

connected_graph = tf.Graph()
with connected_graph.as_default():
    l1_graph_def = l1_graph.as_graph_def()
    g1name = 'ved'
    rename_frame_name(l1_graph_def, g1name)
    tf.import_graph_def(l1_graph_def, name=g1name)

    ...

    trt_graph2 = get_frozen_graph(pb_fname2)
    g2name = 'level2'
    rename_frame_name(trt_graph2, g2name)
    [tf_scores, tf_boxes, tf_classes, tf_num_detections] = tf.import_graph_def(trt_graph2,
        input_map={'image_tensor': tf_out},
        return_elements=['detection_scores:0', 'detection_boxes:0', 'detection_classes:0', 'num_detections:0'])

#######
# Export the graph
with connected_graph.as_default():
    print('\nSaving...')
    cwd = os.getcwd()
    path = os.path.join(cwd, 'saved_model')
    shutil.rmtree(path, ignore_errors=True)

    inputs_dict = {
        "image_tensor": tf_input
    }
    outputs_dict = {
        "detection_boxes_l1": tf_boxes_l1,
        "detection_scores_l1": tf_scores_l1,
        "detection_classes_l1": tf_classes_l1,
        "max_num_detection": tf_max_num_detection,
        "detection_boxes_l2": tf_boxes_l2,
        "detection_scores_l2": tf_scores_l2,
        "detection_classes_l2": tf_classes_l2
    }
    tf.saved_model.simple_save(
        tf_sess_main, path, inputs_dict, outputs_dict
    )
    print('Ok')
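To verify the export, here is a minimal sketch for loading the SavedModel back and running it through the stored signature; my_image_batch is a hypothetical input, everything else follows the inputs_dict/outputs_dict above.

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], path)
    sig = meta_graph.signature_def['serving_default']
    input_name = sig.inputs['image_tensor'].name
    output_name = sig.outputs['detection_boxes_l2'].name
    # my_image_batch: your own image batch, shaped like the model input
    boxes = sess.run(output_name, feed_dict={input_name: my_image_batch})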

Is there a way to get bounding boxes from Microsoft's Custom Vision object detection model.pb file?

Is there a way to get the bounding boxes of a particular object detected via a Microsoft Custom Vision model.pb file? I know we can get them via API calls to the Azure Custom Vision service.
Say, for example, we can get the bounding boxes from an SSD frozen_inference_graph.pb file, as the tensors are present there. Can we do the same for Custom Vision's model.pb file?
This is the code that I am using to print out the operations for a TensorFlow model, together with the output.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('model.pb', 'rb') as fid:
        serialized_graph = fid.read()
        graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    ops = tf.get_default_graph().get_operations()
    for op in ops:
        for output in op.outputs:
            print(output.name)
Placeholder:0
layer1_conv/weights:0
layer1_conv/weights/read:0
layer1_conv/Conv2D:0
layer1_conv/biases:0
layer1_conv/biases/read:0
layer1_conv/BiasAdd:0
layer1_leaky/alpha:0
layer1_leaky/mul:0
layer1_leaky:0
pool1:0
layer2_conv/weights:0
layer2_conv/weights/read:0
layer2_conv/Conv2D:0
layer2_conv/biases:0
layer2_conv/biases/read:0
layer2_conv/BiasAdd:0
layer2_leaky/alpha:0
layer2_leaky/mul:0
layer2_leaky:0
pool2:0
layer3_conv/weights:0
layer3_conv/weights/read:0
layer3_conv/Conv2D:0
layer3_conv/biases:0
layer3_conv/biases/read:0
layer3_conv/BiasAdd:0
layer3_leaky/alpha:0
layer3_leaky/mul:0
layer3_leaky:0
pool3:0
layer4_conv/weights:0
layer4_conv/weights/read:0
layer4_conv/Conv2D:0
layer4_conv/biases:0
layer4_conv/biases/read:0
layer4_conv/BiasAdd:0
layer4_leaky/alpha:0
layer4_leaky/mul:0
layer4_leaky:0
pool4:0
layer5_conv/weights:0
layer5_conv/weights/read:0
layer5_conv/Conv2D:0
layer5_conv/biases:0
layer5_conv/biases/read:0
layer5_conv/BiasAdd:0
layer5_leaky/alpha:0
layer5_leaky/mul:0
layer5_leaky:0
pool5:0
layer6_conv/weights:0
layer6_conv/weights/read:0
layer6_conv/Conv2D:0
layer6_conv/biases:0
layer6_conv/biases/read:0
layer6_conv/BiasAdd:0
layer6_leaky/alpha:0
layer6_leaky/mul:0
layer6_leaky:0
pool6:0
layer7_conv/weights:0
layer7_conv/weights/read:0
layer7_conv/Conv2D:0
layer7_conv/biases:0
layer7_conv/biases/read:0
layer7_conv/BiasAdd:0
layer7_leaky/alpha:0
layer7_leaky/mul:0
layer7_leaky:0
layer8_conv/weights:0
layer8_conv/weights/read:0
layer8_conv/Conv2D:0
layer8_conv/biases:0
layer8_conv/biases/read:0
layer8_conv/BiasAdd:0
layer8_leaky/alpha:0
layer8_leaky/mul:0
layer8_leaky:0
m_outputs0/weights:0
m_outputs0/weights/read:0
m_outputs0/Conv2D:0
m_outputs0/biases:0
m_outputs0/biases/read:0
m_outputs0/BiasAdd:0
model_outputs:0
Placeholder:0 and model_outputs:0 are the input and the output. Placeholder:0 takes a tensor of shape (?, 416, 416, 3) and model_outputs:0 outputs a tensor of shape (1, 13, 13, 30). If I am detecting just a single object, how do I get the bounding boxes from the model_outputs:0 tensor?
Where am I going wrong? Any suggestions are welcome.
You seem to be using Python, so you can export the object-detection model from the Custom Vision UI (select the TensorFlow option):
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-model-python
which will give you a zipfile containing:
labels.txt
model.pb
python/object_detection.py
python/predict.py
Put everything in one directory then simply execute the code:
python predict.py image.jpg
Hey presto! This will print out a list of dictionaries like
{'boundingBox': {'width': 0.92610852, 'top': -0.06989955, 'height': 0.85869097, 'left': 0.03279033}, 'tagId': 3, 'tagName': 'myTagName', 'probability': 0.24879535}
The coordinates (relative to top left) are normalized to the width and height of the image.
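Since the boundingBox values are fractions of the image size, converting one prediction to pixel coordinates is a simple multiplication. A small sketch, using the example dictionary printed above:

from PIL import Image

# one prediction dict, in the format shown above
prediction = {'boundingBox': {'width': 0.92610852, 'top': -0.06989955,
                              'height': 0.85869097, 'left': 0.03279033},
              'tagId': 3, 'tagName': 'myTagName', 'probability': 0.24879535}

image = Image.open('image.jpg')
img_w, img_h = image.size

box = prediction['boundingBox']
left = int(box['left'] * img_w)
top = int(box['top'] * img_h)
width = int(box['width'] * img_w)
height = int(box['height'] * img_h)
print((left, top, left + width, top + height))  # box corners in pixel space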
Here is main (not my code!):
def main(image_filename):
    # Load a TensorFlow model
    graph_def = tf.GraphDef()
    with tf.gfile.FastGFile(MODEL_FILENAME, 'rb') as f:
        graph_def.ParseFromString(f.read())

    # Load labels
    with open(LABELS_FILENAME, 'r') as f:
        labels = [l.strip() for l in f.readlines()]

    od_model = TFObjectDetection(graph_def, labels)

    image = Image.open(image_filename)
    predictions = od_model.predict_image(image)
    print(predictions)
which you can modify as you see fit. Good luck!

TensorFlow serving: uninitialized variable

Hello, I want to initialize the variable named result in the code below.
I tried to initialize it with this code when serving:
sess.run(tf.global_variables_initializer(), feed_dict={userLat: 0, userLon: 0})
I just want to initialize the variable.
The reason for using the variable is to be able to pass validate_shape=False.
The reason for using this option is to resolve the error 'Outer dimension for outputs must be unknown, outer dimension of 'Variable:0' is 1' when deploying the model version to Google Cloud ML Engine.
With this initialization, attempting a prediction only outputs the value computed for a feed_dict of 0.
Is there a way to simply initialize the value of result?
Or is it possible to store the list of tensor values as a comma-separated String without a shape?
It's a very basic question; I'm sorry. I am a beginner with TensorFlow.
I need help. Thank you for reading.
import tensorflow as tf
import sys, os

# define filename queue
filenameQueue = tf.train.string_input_producer(['./data.csv'],
                                               shuffle=False, name='filename_queue')

# define reader
reader = tf.TextLineReader()
key, value = reader.read(filenameQueue)

# define decoder
recordDefaults = [["null"], [0.0], [0.0]]
sId, lat, lng = tf.decode_csv(
    value, record_defaults=recordDefaults, field_delim=',')

taxiData = []

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for i in range(18):
        data = sess.run([sId, lat, lng])
        tmpTaxiData = []
        tmpTaxiData.append(data[0])
        tmpTaxiData.append(data[1])
        tmpTaxiData.append(data[2])
        taxiData.append(tmpTaxiData)
    coord.request_stop()
    coord.join(threads)

from math import sin, cos, acos, sqrt, atan2, radians

# server input data
userLat = tf.placeholder(tf.float32, shape=[])
userLon = tf.placeholder(tf.float32, shape=[])

R = 6373.0
radian = 0.017453292519943295

distanceList = []
for i in taxiData:
    taxiId = tf.constant(i[0], dtype=tf.string, shape=[])
    taxiLat = tf.constant(i[1], dtype=tf.float32, shape=[])
    taxiLon = tf.constant(i[2], dtype=tf.float32, shape=[])
    distanceValue = 6371*tf.acos(tf.cos(radian*userLat)*
                    tf.cos(radian*taxiLat)*tf.cos(radian*taxiLon-
                    radian*126.8943311)+tf.sin(radian*37.4685225)*tf.sin(radian*taxiLat))
    tmpDistance = []
    tmpDistance.append(taxiId)
    tmpDistance.append(distanceValue)
    distanceList.append(tmpDistance)

# result sort
sId, distances = zip(*distanceList)
indices = tf.nn.top_k(distances, k=len(distances)).indices
gather = tf.gather(sId, indices[::-1])[0:5]
result = tf.Variable(gather, validate_shape=False)

print("Done training!")

# serving
import os
from tensorflow.python.util import compat

model_version = 1
path = os.path.join("Taximodel", str(model_version))
builder = tf.saved_model.builder.SavedModelBuilder(path)

with tf.Session() as sess:
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            "serving_default":
                tf.saved_model.signature_def_utils.predict_signature_def(
                    inputs={"userLat": userLat, "userLon": userLon},
                    outputs={"result": result})
        })
    builder.save()

print('Done exporting')
You can try to define the graph so that the output tensor preserves the shape (outer dimension) of the input tensor.
For example, something like:
# server input data
userLoc = tf.placeholder(tf.float32, shape=[None, 2])

def calculate_dist(user_loc):
    distanceList = []
    for i in taxiData:
        taxiId = tf.constant(i[0], dtype=tf.string, shape=[])
        taxiLat = tf.constant(i[1], dtype=tf.float32, shape=[])
        taxiLon = tf.constant(i[2], dtype=tf.float32, shape=[])
        distanceValue = 6371*tf.acos(tf.cos(radian*user_loc[0])*
                        tf.cos(radian*taxiLat)*tf.cos(radian*taxiLon-
                        radian*126.8943311)+tf.sin(radian*37.4685225)*tf.sin(radian*taxiLat))
        tmpDistance = []
        tmpDistance.append(taxiId)
        tmpDistance.append(distanceValue)
        distanceList.append(tmpDistance)
    # result sort
    sId, distances = zip(*distanceList)
    indices = tf.nn.top_k(distances, k=len(distances)).indices
    return tf.gather(sId, indices[::-1])[0:5]

# dtype must be given because the mapped function returns strings
# while the input elements are float32
result = tf.map_fn(calculate_dist, userLoc, dtype=tf.string)
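With the batched userLoc placeholder, the output keeps the outer (batch) dimension of the input, so the serving signature can take any number of user locations at once. A small usage sketch; the coordinate values are arbitrary examples:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # two user locations in one batch -> output has shape [2, 5]
    ids = sess.run(result, feed_dict={userLoc: [[37.4685225, 126.8943311],
                                                [37.5, 127.0]]})
    print(ids)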

Understanding why results between Keras and TensorFlow are different

I am currently trying to do some work in both Keras and TensorFlow, and I stumbled upon a small thing I do not understand. In the code below, I am trying to predict the responses of a network either explicitly via a TensorFlow session, or by using the model's predict_on_batch function.
import os
import keras
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Dense, Dropout, Flatten, Input
from keras.models import Model

# Try to standardize the output
np.random.seed(1)
tf.set_random_seed(1)

# Building the model
inputs = Input(shape=(224, 224, 3))
base_model = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet',
                                            input_tensor=inputs, input_shape=(224, 224, 3))
x = base_model.get_layer("fc2").output
x = Dropout(0.5, name='model_fc_dropout')(x)
x = Dense(2048, activation='sigmoid', name='final_fc')(x)
x = Dropout(0.5, name='final_fc_dropout')(x)
predictions = Dense(1, activation='sigmoid', name='fcout')(x)
model = Model(outputs=predictions, inputs=inputs)

##################################################################
model.compile(loss='binary_crossentropy',
              optimizer=tf.train.MomentumOptimizer(learning_rate=5e-4, momentum=0.9),
              metrics=['accuracy'])

image_batch = np.random.random((64, 224, 224, 3))

# Outputs predicted by TF
outs = [predictions]
feed_dict = {inputs: image_batch, K.learning_phase(): 0}

init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    outputs = sess.run(outs, feed_dict)[0]
    print(outputs.flatten())

# Outputs predicted by Keras
outputs = model.predict_on_batch(image_batch)
print(outputs.flatten())
My issue is that I get two different results, even though I tried to remove all sources of randomness by setting the seeds to 1 and running the operations on CPU. Even then, I get the following results:
[ 0.26079229 0.26078743 0.26079154 0.26079673 0.26078942 0.26079443
0.26078886 0.26079088 0.26078972 0.26078728 0.26079121 0.26079452
0.26078513 0.26078424 0.26079014 0.26079312 0.26079521 0.26078743
0.26078558 0.26078537 0.26078674 0.26079136 0.26078632 0.26077667
0.26079312 0.26078999 0.26079065 0.26078704 0.26078928 0.26078624
0.26078892 0.26079202 0.26079065 0.26078689 0.26078963 0.26078749
0.26078817 0.2607986 0.26078528 0.26078412 0.26079187 0.26079246
0.26079226 0.26078457 0.26078099 0.26078072 0.26078376 0.26078475
0.26078326 0.26079389 0.26079792 0.26078579 0.2607882 0.2607961
0.26079237 0.26078218 0.26078638 0.26079753 0.2607787 0.26078618
0.26078096 0.26078594 0.26078215 0.26079002]
and
[ 0.25331706 0.25228402 0.2534174 0.25033095 0.24851511 0.25099936
0.25240892 0.25139931 0.24948661 0.25183493 0.25104815 0.25164133
0.25214729 0.25265765 0.25128496 0.25249782 0.25247478 0.25314394
0.25014618 0.25280923 0.2526398 0.25381723 0.25138992 0.25072744
0.25069866 0.25307226 0.25063521 0.25133523 0.25050756 0.2536433
0.25164688 0.25054023 0.25117773 0.25352773 0.25157067 0.25173825
0.25234801 0.25182116 0.25284401 0.25297374 0.25079012 0.25146705
0.25401884 0.25111189 0.25192681 0.25252578 0.25039044 0.2525287
0.25165257 0.25357804 0.25001243 0.2495154 0.2531895 0.25270832
0.25305843 0.25064403 0.25180396 0.25231308 0.25224048 0.25068772
0.25212681 0.24812476 0.25027585 0.25243458]
Does anybody have an idea what could be going on in the background that changes the results? (These results do not change if one runs them again.)
The difference gets even bigger if the network runs on a GPU (Titan X); e.g., the second output is:
[ 0.3302682 0.33054096 0.32677746 0.32830611 0.32972822 0.32807562
0.32850873 0.33161065 0.33009702 0.32811245 0.3285495 0.32966742
0.33050382 0.33156893 0.3300975 0.3298254 0.33350074 0.32991216
0.32990077 0.33203539 0.32692945 0.33036903 0.33102706 0.32648
0.32933888 0.33161271 0.32976636 0.33252293 0.32859167 0.33013415
0.33080408 0.33102706 0.32994759 0.33150592 0.32881773 0.33048317
0.33040857 0.32924038 0.32986534 0.33131596 0.3282761 0.3292698
0.32879189 0.33186096 0.32862625 0.33067161 0.329018 0.33022234
0.32904804 0.32891914 0.33122411 0.32900628 0.33088413 0.32931429
0.3268061 0.32924181 0.32940546 0.32860965 0.32828435 0.3310211
0.33098024 0.32997403 0.33025959 0.33133432]
whereas in the first one, the differences only occur in the 5th and later decimal places:
[ 0.26075357 0.26074868 0.26074538 0.26075155 0.260755 0.26073951
0.26074919 0.26073971 0.26074231 0.26075247 0.2607362 0.26075858
0.26074955 0.26074123 0.26074299 0.26074946 0.26074076 0.26075014
0.26074076 0.26075229 0.26075041 0.26074776 0.26075897 0.26073995
0.260746 0.26074466 0.26073912 0.26075709 0.26075712 0.26073799
0.2607322 0.26075566 0.26075059 0.26073873 0.26074558 0.26074558
0.26074359 0.26073721 0.26074392 0.26074731 0.26074862 0.26074174
0.26074126 0.26074588 0.26073804 0.26074919 0.26074269 0.26074606
0.26075307 0.2607446 0.26074025 0.26074648 0.26074952 0.26073608
0.26073566 0.26073873 0.26074576 0.26074475 0.26074636 0.26073411
0.2607542 0.26074755 0.2607449 0.2607407 ]
The results are different here because the initializations are different.
TF uses this init_op for variable initialization:
sess.run(init_op)
But Keras uses its own init_op inside its Model class, not the init_op defined in your code.
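A common way to make the two paths agree (a sketch, not part of the original answer) is to evaluate the TF fetch inside the session Keras already uses, instead of opening a fresh session and re-running the global initializer over the loaded weights:

from keras import backend as K

# reuse the session that already holds the Keras-initialized (ImageNet) weights;
# running tf.global_variables_initializer() in a new session overwrites them
sess = K.get_session()
tf_outputs = sess.run(predictions, feed_dict={inputs: image_batch,
                                              K.learning_phase(): 0})
keras_outputs = model.predict_on_batch(image_batch)
# the two outputs should now agree, up to floating-point non-determinism on GPU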