I train a model in Python with Keras and save it as an .h5 file, and then I use a script to convert the .h5 file to a .pb file. You can see my script:
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
from keras.models import load_model
from keras import backend as K
import os.path as osp
import os
import tensorflow as tf
model = load_model("/media/hsmnzaydn/8AD030E8D030DBDF/Projects/Machine Learning/Basic Keras/CancerDetected/modelim.h5")
nb_classes = 1 # The number of output nodes in the model
prefix_output_node_names_of_final_network = 'output_node'
K.set_learning_phase(0)
pred = [None]*nb_classes
pred_node_names = [None]*nb_classes
for i in range(nb_classes):
    pred_node_names[i] = prefix_output_node_names_of_final_network+str(i)
    pred[i] = tf.identity(model.output[i], name=pred_node_names[i])
print('output nodes names are: ', pred_node_names)
sess = K.get_session()
output_fld = 'tensorflow_model/'
if not os.path.isdir(output_fld):
    os.mkdir(output_fld)
output_graph_name = "./" + '.pb'
output_graph_suffix = '_inference'
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))
Then I move the .pb file into the TensorFlow root directory. I try a bazel command to convert the .pb file to a .lite file. I use the bazel command like this:
bazel-bin/tensorflow/contrib/lite/toco/toco --input_file=modelim.pb --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --output_file=modelim.lite --inference_type=FLOAT --input_type=FLOAT --input_arrays=dense_1_input --output_arrays=output_node0 --input_shapes=1,2
but I get this error:
2018-03-27 22:23:18.655997: W tensorflow/contrib/lite/toco/toco_cmdline_flags.cc:183] --input_type is deprecated. It was an ambiguous flag that set both --input_data_types and --inference_input_type. If you are trying to complement the input file with information about the type of input arrays, use --input_data_type. If you are trying to control the quantization/dequantization of real-numbers input arrays in the output file, use --inference_input_type.
2018-03-27 22:23:18.656633: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 17 operators, 27 arrays (0 quantized)
2018-03-27 22:23:18.656758: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 17 operators, 27 arrays (0 quantized)
2018-03-27 22:23:18.656837: F tensorflow/contrib/lite/toco/graph_transformations/propagate_fixed_sizes.cc:447] Check failed: matmul_repeats * weights_shape.dims(1) == input_overall_size (0 vs. 2)
Aborted
Does anyone know a solution?
I trained a neural network with TensorFlow and extracted the weights from the embedding layer to make an array of embeddings. I wrote them out as a .txt file, and now I can't read it with KeyedVectors.
Generate the file
import io
out_vectors = io.open('vecs.txt', 'w', encoding='utf-8')
out_vectors.write(f'{vocab_size} {embedding_dim}')
for word_num in range(1, vocab_size):
    word = reversed_word_index[word_num]
    embeddings = weights[word_num]
    line_embedding = '\n' + word + '\t' + '\t'.join([str(x) for x in embeddings])
    out_vectors.write(line_embedding)
out_vectors.close()
Read the file
from gensim.models import KeyedVectors
model_embedding = KeyedVectors.load_word2vec_format('vecs.txt')
Try adding the mmap argument to the load call; it should look like:
model_embedding = KeyedVectors.load("vecs.txt", mmap="r")
I am trying to train a TFLite model using just the person class from the COCO dataset.
I am using TFLite Model Maker for training and FiftyOne to process the dataset.
When I run the training .py file I get the error below.
root@85ac26b47f92:/external# python demofie.py
2022-11-01 21:02:01.059188: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: UNKNOWN ERROR (34)
2022-11-01 21:02:01.059234: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: 85ac26b47f92
2022-11-01 21:02:01.059242: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: 85ac26b47f92
2022-11-01 21:02:01.059324: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: NOT_FOUND: was unable to find libcuda.so DSO loaded into this program
2022-11-01 21:02:01.059381: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 470.141.3
2022-11-01 21:02:01.059821: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "demofie.py", line 20, in <module>
    train_data = object_detector.DataLoader.from_pascal_voc(images_dir='/external/train/data',annotations_dir='/external/train/labels', label_map=['person'],ignore_difficult_instances= False,num_shards = 100)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow_examples/lite/model_maker/core/data_util/object_detector_dataloader.py", line 217, in from_pascal_voc
    cache_writer.write_files(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow_examples/lite/model_maker/core/data_util/object_detector_dataloader_util.py", line 252, in write_files
    tf_example = create_pascal_tfrecord.dict_to_tf_example(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow_examples/lite/model_maker/third_party/efficientdet/dataset/create_pascal_tfrecord.py", line 162, in dict_to_tf_example
    if obj['difficult'] == 'Unspecified':
KeyError: 'difficult'
Below is the code that causes the error. Can anyone with better coding knowledge than me shed some light on any mistakes I may have made?
I have added the FiftyOne code below it (it runs without error).
import numpy as np
import os
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)
spec = model_spec.get('efficientdet_lite1')
train_data = object_detector.DataLoader.from_pascal_voc(
    images_dir='/external/train/data', annotations_dir='/external/train/labels',
    label_map=['person'], ignore_difficult_instances=False, num_shards=100)
validation_data = object_detector.DataLoader.from_pascal_voc(
    images_dir='/external/val/data', annotations_dir='/external/val/labels',
    label_map=['person'], ignore_difficult_instances=False, num_shards=100)
test_data = object_detector.DataLoader.from_pascal_voc(
    images_dir='/external/test/data', annotations_dir='/external/test/labels',
    label_map=['person'], ignore_difficult_instances=False, num_shards=100)
model = object_detector.create(train_data, model_spec=spec, batch_size=8, epochs=2000,
                               train_whole_model=True, validation_data=validation_data)
model.evaluate(test_data)
model.export(export_dir='/external/')
Dataset generation code
import fiftyone.zoo as foz
import fiftyone as fo
from fiftyone import ViewField as F
cocodataset_test = foz.load_zoo_dataset(
    "coco-2017",
    splits="test",
    label_types=["detections"],
    classes=["person"],
    only_matching=True,
    # max_samples=50,
)
cocodataset_validation = foz.load_zoo_dataset(
    "coco-2017",
    splits="validation",
    label_types=["detections"],
    classes=["person"],
    only_matching=True,
    # max_samples=50
)
cocodataset_train = foz.load_zoo_dataset(
    "coco-2017",
    splits="train",
    label_types=["detections"],
    classes=["person"],
    only_matching=True,
    # max_samples=50,
)
cocodataset_validation.export(
    '/external/val',
    fo.types.VOCDetectionDataset,
)
cocodataset_train.export(
    '/external/train/',
    fo.types.VOCDetectionDataset,
)
cocodataset_test.export(
    '/external/test/',
    fo.types.VOCDetectionDataset,
)
I save the graph to a .pb file. I get an error when I convert the .pb to .dlc. Anyone know why?
My code to build the model:
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.ops import variable_scope
X = tf.placeholder(tf.float32, shape=[None, 1], name="input");
with variable_scope.variable_scope("input"):
a = tf.Variable([[1]], name="a", dtype=tf.float32);
g = X * a
with variable_scope.variable_scope("output"):
b = tf.Variable([[0]], name="b", dtype=tf.float32);
ss = tf.add(g, b, name="output")
sess = tf.Session();
sess.run(tf.global_variables_initializer());
graph = convert_variables_to_constants(sess, sess.graph_def, ["output/output"])
tf.train.write_graph(graph, './linear/', 'graph.pb', as_text=False)
sess.close();
convert cmd:
snpe-tensorflow-to-dlc --graph graph_sc.pb -i input 1 --out_node output/output --allow_unconsumed_nodes
error message:
2017-10-26 01:55:15,919 - 390 - INFO - INFO_ALL_BUILDING_LAYER_W_NODES: Building layer (ElementWiseMul) with nodes: [u'input_1/mul']
~/snpe-sdk/snpe-1.6.0/lib/python/converters/tensorflow/layers/eltwise.py:108: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer input_1/mul: at least two inputs required, have 1; error_component=Model Validation; line_no=732; thread_id=140514161018688
output_name)
2017-10-26 01:55:15,920 - 390 - INFO - INFO_ALL_BUILDING_LAYER_W_NODES: Building layer (ElementWiseSum) with nodes: [u'output/output']
~/snpe-sdk/snpe-1.6.0/lib/python/converters/tensorflow/layers/eltwise.py:84: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer output/output: at least two inputs required, have 1; error_component=Model Validation; line_no=732; thread_id=140514161018688
output_name)
SNPE requires a 3D tensor as input. Try updating -i input 1 in your command to -i input 1,1,1.
The input_dim argument to snpe-tensorflow-to-dlc should describe a 3-dimensional input tensor, as in the example below:
snpe-tensorflow-to-dlc --graph $SNPE_ROOT/models/inception_v3/tensorflow/inception_v3_2016_08_28_frozen.pb \
    --input_dim input "1,299,299,3" --out_node "InceptionV3/Predictions/Reshape_1" --dlc inception_v3.dlc \
    --allow_unconsumed_nodes
For a more detailed reference on converting a TensorFlow model to DLC with the Neural Processing SDK, follow the link below:
https://developer.qualcomm.com/docs/snpe/model_conv_tensorflow.html
I am currently doing some work in both Keras and TensorFlow, and I stumbled upon a small thing I do not understand. If you look at the code below, I am trying to predict the responses of a network either via a TensorFlow session explicitly or by using the model's predict_on_batch function.
import os
import keras
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Dense, Dropout, Flatten, Input
from keras.models import Model
# Try to standardize output
np.random.seed(1)
tf.set_random_seed(1)
# Building the model
inputs = Input(shape=(224,224,3))
base_model = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet',
                                            input_tensor=inputs, input_shape=(224, 224, 3))
x = base_model.get_layer("fc2").output
x = Dropout(0.5, name='model_fc_dropout')(x)
x = Dense(2048, activation='sigmoid', name='final_fc')(x)
x = Dropout(0.5, name='final_fc_dropout')(x)
predictions = Dense(1, activation='sigmoid', name='fcout')(x)
model = Model(outputs=predictions, inputs=inputs)
##################################################################
model.compile(loss='binary_crossentropy',
              optimizer=tf.train.MomentumOptimizer(learning_rate=5e-4, momentum=0.9),
              metrics=['accuracy'])
image_batch = np.random.random((64,224,224,3))
# Outputs predicted by TF
outs = [predictions]
feed_dict={inputs:image_batch, K.learning_phase():0}
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    outputs = sess.run(outs, feed_dict)[0]
    print(outputs.flatten())
# Outputs predicted by Keras
outputs = model.predict_on_batch(image_batch)
print(outputs.flatten())
My issue is that I get two different results, even though I tried to remove all sources of randomness by setting the seeds to 1 and running the operations on the CPU. Even then, I get the following results:
[ 0.26079229 0.26078743 0.26079154 0.26079673 0.26078942 0.26079443
0.26078886 0.26079088 0.26078972 0.26078728 0.26079121 0.26079452
0.26078513 0.26078424 0.26079014 0.26079312 0.26079521 0.26078743
0.26078558 0.26078537 0.26078674 0.26079136 0.26078632 0.26077667
0.26079312 0.26078999 0.26079065 0.26078704 0.26078928 0.26078624
0.26078892 0.26079202 0.26079065 0.26078689 0.26078963 0.26078749
0.26078817 0.2607986 0.26078528 0.26078412 0.26079187 0.26079246
0.26079226 0.26078457 0.26078099 0.26078072 0.26078376 0.26078475
0.26078326 0.26079389 0.26079792 0.26078579 0.2607882 0.2607961
0.26079237 0.26078218 0.26078638 0.26079753 0.2607787 0.26078618
0.26078096 0.26078594 0.26078215 0.26079002]
and
[ 0.25331706 0.25228402 0.2534174 0.25033095 0.24851511 0.25099936
0.25240892 0.25139931 0.24948661 0.25183493 0.25104815 0.25164133
0.25214729 0.25265765 0.25128496 0.25249782 0.25247478 0.25314394
0.25014618 0.25280923 0.2526398 0.25381723 0.25138992 0.25072744
0.25069866 0.25307226 0.25063521 0.25133523 0.25050756 0.2536433
0.25164688 0.25054023 0.25117773 0.25352773 0.25157067 0.25173825
0.25234801 0.25182116 0.25284401 0.25297374 0.25079012 0.25146705
0.25401884 0.25111189 0.25192681 0.25252578 0.25039044 0.2525287
0.25165257 0.25357804 0.25001243 0.2495154 0.2531895 0.25270832
0.25305843 0.25064403 0.25180396 0.25231308 0.25224048 0.25068772
0.25212681 0.24812476 0.25027585 0.25243458]
Does anybody have an idea what could be going on in the background that could change the results? (These results do not change if one runs them again)
The difference gets even bigger if the network runs on a GPU (Titan X), e.g. the second output is:
[ 0.3302682 0.33054096 0.32677746 0.32830611 0.32972822 0.32807562
0.32850873 0.33161065 0.33009702 0.32811245 0.3285495 0.32966742
0.33050382 0.33156893 0.3300975 0.3298254 0.33350074 0.32991216
0.32990077 0.33203539 0.32692945 0.33036903 0.33102706 0.32648
0.32933888 0.33161271 0.32976636 0.33252293 0.32859167 0.33013415
0.33080408 0.33102706 0.32994759 0.33150592 0.32881773 0.33048317
0.33040857 0.32924038 0.32986534 0.33131596 0.3282761 0.3292698
0.32879189 0.33186096 0.32862625 0.33067161 0.329018 0.33022234
0.32904804 0.32891914 0.33122411 0.32900628 0.33088413 0.32931429
0.3268061 0.32924181 0.32940546 0.32860965 0.32828435 0.3310211
0.33098024 0.32997403 0.33025959 0.33133432]
whereas in the first one, the differences only occur in the 5th and later decimal places:
[ 0.26075357 0.26074868 0.26074538 0.26075155 0.260755 0.26073951
0.26074919 0.26073971 0.26074231 0.26075247 0.2607362 0.26075858
0.26074955 0.26074123 0.26074299 0.26074946 0.26074076 0.26075014
0.26074076 0.26075229 0.26075041 0.26074776 0.26075897 0.26073995
0.260746 0.26074466 0.26073912 0.26075709 0.26075712 0.26073799
0.2607322 0.26075566 0.26075059 0.26073873 0.26074558 0.26074558
0.26074359 0.26073721 0.26074392 0.26074731 0.26074862 0.26074174
0.26074126 0.26074588 0.26073804 0.26074919 0.26074269 0.26074606
0.26075307 0.2607446 0.26074025 0.26074648 0.26074952 0.26073608
0.26073566 0.26073873 0.26074576 0.26074475 0.26074636 0.26073411
0.2607542 0.26074755 0.2607449 0.2607407 ]
The results are different because the initializations are different. Your TF path initializes the variables with the init_op you defined, sess.run(init_op), inside a brand-new session, so every variable there (including the loaded VGG16 weights) is reset to fresh initial values. Keras, however, runs its own initialization and weight loading inside its model class, in its own session, not with the init_op defined in your code.
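A minimal sketch of the fix, reusing inputs, predictions, model and image_batch from the question above: evaluate the graph in the session Keras already initialized instead of a fresh one, and skip the extra initializer so the loaded weights are not overwritten.
from keras import backend as K

# Reuse Keras's own session; do NOT run tf.global_variables_initializer() here,
# otherwise every variable (including the loaded VGG16 weights) is re-initialized.
sess = K.get_session()
feed_dict = {inputs: image_batch, K.learning_phase(): 0}
outputs_tf = sess.run(predictions, feed_dict)

# This should now match the session result above up to floating-point noise
outputs_keras = model.predict_on_batch(image_batch)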
I use ssd_mobilenets in the Object Detection API to train my own model and get .ckpt files. It works well on my computer, but now I want to use the model on my phone, so I need to convert it to a .pb file. I do not know how to do it; can anyone help? By the way, the graph of ssd_mobilenets is complex and I cannot find which node is the output of the model. Does anyone know the name of the output?
Use export_inference_graph.py to convert the model checkpoint file into a .pb file.
python tensorflow_models/object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path architecture_used_while_training.config \
    --trained_checkpoint_prefix path_to_saved_ckpt/model.ckpt-NUMBER \
    --output_directory model/
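As a side note on the second part of the question (finding the output node names), a minimal sketch like the one below, with TF 1.x APIs and a placeholder path, loads the exported frozen graph and prints every node name so the outputs can be identified; for Object Detection API exports these are typically detection_boxes, detection_scores, detection_classes and num_detections.
import tensorflow as tf

# Placeholder path: the frozen graph written by export_inference_graph.py
frozen_graph_path = 'model/frozen_inference_graph.pb'

graph_def = tf.GraphDef()
with tf.gfile.GFile(frozen_graph_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print every node name; the output nodes usually appear near the end of the list
for node in graph_def.node:
    print(node.name)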
This is the 4th code cell in object_detection_tutorial.ipynb at this link: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
The cell clearly shows the .pb filename, which is /frozen_inference_graph.pb.
So you already have the .pb file; why do you want to convert?
Anyway, you can refer to this link for freezing the graph: https://github.com/jayshah19949596/Tensorboard-Visualization-Freezing-Graph
You need to use the tensorflow.python.tools.freeze_graph() function to convert your .ckpt file to a .pb file.
The code below shows how you do it:
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(input_graph_path,
                          input_saver_def_path,
                          input_binary,
                          input_checkpoint_path,
                          output_node_names,
                          restore_op_name,
                          filename_tensor_name,
                          output_graph_path,
                          clear_devices,
                          initializer_nodes)
input_graph_path : the path to the .pb file where you wrote your graph; this .pb file is not frozen. Use tf.train.write_graph() to write the graph.
input_saver_def_path : you can keep it as an empty string
input_binary : a boolean value; keep it False so that the generated file is not binary and stays human-readable
input_checkpoint_path : path to the .ckpt file
output_graph_path : the path where you want to write your frozen .pb file
clear_devices : boolean value; keep it False
output_node_names : the explicit tensor node names that you want to save
restore_op_name : string value that should be "save/restore_all"
filename_tensor_name = "save/Const:0"
initializer_nodes = ""