I am using TensorFlow 2.4.1 and have a working script that classifies audio files. It is pretty much copy/paste from the examples and works well. I now want to send an audio file to Model Server for prediction.
I am stuck on how to get the audio into the JSON message. I am new to TensorFlow and Python, so I am probably missing something basic.
Full code: https://gitlab.com/-/snippets/2089884
Here is where I try to use an existing dataset (of the wrong type) as the data:
data = json.dumps({"signature_name": "serving_default", "instances": train_ds.batch(3).tolist()})
In this case the error is:
Saved model:
Traceback (most recent call last):
File "./xmits_train", line 361, in <module>
data = json.dumps({"signature_name": "serving_default", "instances": train_ds.batch(3).tolist()})
AttributeError: 'BatchDataset' object has no attribute 'tolist'
What I don't see is how to get the BatchDataset into a structure that supports .tolist(). Of course I may need to start from a different structure as well.
In the script I have tried all the structures that hold the audio, and none of them can be used directly.
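For reference, a hedged sketch of pulling plain arrays out of the BatchDataset, assuming each element of train_ds is an (audio, label) pair as in the TensorFlow audio tutorial (adjust to your element structure):
import json

# Iterating a BatchDataset yields tensors; .numpy() converts them to
# arrays that support .tolist() and therefore JSON serialization.
audio_batch, label_batch = next(iter(train_ds.batch(3)))
data = json.dumps({"signature_name": "serving_default",
                   "instances": audio_batch.numpy().tolist()})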
I was able to utilize the test_audio data structure. To encode the first few examples into JSON:
data = json.dumps({"signature_name": "serving_default", "instances": test_audio[0:3].tolist()})
There may be ways to utilize the other data structures, but this addresses the question.
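For completeness, a sketch of the full round trip to Model Server's REST API; the model name xmits and port 8501 are assumptions, so match them to your serving configuration:
import json
import requests

data = json.dumps({"signature_name": "serving_default",
                   "instances": test_audio[0:3].tolist()})

# POST to TensorFlow Model Server; the URL layout is /v1/models/<name>:predict.
response = requests.post("http://localhost:8501/v1/models/xmits:predict",
                         data=data,
                         headers={"content-type": "application/json"})
predictions = json.loads(response.text)["predictions"]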
Related
Does anyone know how I can recover a list of Keras history objects that I saved to drive by pickling with the following code:
import pickle
with open("H:/hists", "wb") as fp: #Pickling
pickle.dump(hists, fp)
Currently I'm trying :
with open("H:/hists", "rb") as fp: # Unpickling
hists = pickle.load(fp)
but getting this error:
FileNotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ram://b3edea45-d0d4-442d-ab4f-b0c43a87d19e/variables/variables
You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'.
I believe this is because the Python kernel in which I saved the history objects has been terminated and a new one started.
I now think that the best way to save histories is to convert them to DataFrames or numpy arrays and save those, but that's not possible now since the histories are no longer in memory. It took about 5 hours to produce the histories, so I'm hoping it's possible to recover them.
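For future runs, a minimal sketch of the safer pattern: pickle only each History object's .history dict (plain lists of floats), which loads back without needing TensorFlow or the original model in memory:
import pickle

# hists is assumed to be the list of History objects returned by model.fit().
plain_hists = [h.history for h in hists]

with open("H:/hists", "wb") as fp:  # Pickling
    pickle.dump(plain_hists, fp)

with open("H:/hists", "rb") as fp:  # Unpickling
    plain_hists = pickle.load(fp)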
I have a TensorFlow "graph-model" consisting of a model.json and several .bin files. In JavaScript I am able to read those files using
const weights = browser.runtime.getURL("web_model/model.json");
tf.loadGraphModel(weights)
However, I would like to use this model in Python in order to process the results better.
When I try to load the model in python with
new_model = keras.models.load_model('./web_model/model.json')
I get the following error:
File "h5py/h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
I don't understand: since the JavaScript code is able to run the model, Python should be able to do the same. What am I doing wrong?
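keras.models.load_model expects an HDF5 file or a SavedModel directory, not a tfjs model.json, which is why h5py fails on the file signature. A hedged sketch of one route, which applies only if the model was exported as a tfjs layers model (a true graph model has no official reverse converter; the third-party tfjs-graph-converter package targets that case):
# pip install tensorflowjs
import tensorflowjs as tfjs

# Reads the tfjs layers-model JSON plus its .bin weight shards.
model = tfjs.converters.load_keras_model("./web_model/model.json")
model.summary()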
I have the following snippet on Colab:
def getMap7Data():
    filename = "/content/maze_7.pkl"
    infile = open(filename, "rb")
    maze = pickle.load(infile)
    infile.close()
    return maze
maze_7.pkl contains an object called maze, and it was produced by the previous person who worked on the file.
I can locally load the pickle and see its attributes. It's a very long list, and I don't know the exact structure. I'm going to use 10-15 attributes for now, but I might need more in the future.
Google Colab gives the following error:
Traceback (most recent call last):
File "/content/loadData.py", line 27, in getMap7Data
maze = pickle.load(infile)
ModuleNotFoundError: No module named 'maze'
Is there a way to load this pickle?
It's similar to the situation described here, and that question remains unanswered as well.
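One workaround to sketch, assuming the pickle merely references classes defined in a module named maze that you don't have: a custom Unpickler can substitute a bare stub class for anything requested from that module. This recovers the attribute data, not the original methods:
import pickle

class StubUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module == "maze":
            # Fabricate an empty stand-in class for the missing module.
            return type(name, (object,), {})
        return super().find_class(module, name)

with open("/content/maze_7.pkl", "rb") as infile:
    maze = StubUnpickler(infile).load()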
I am using tensorflow-gpu==1.10.0 and keras from tensorflow as tf.keras.
I am trying to adapt source code written by someone else to run on my network.
I saved my network using save_model and loaded it using load_model. When I use model.get_config(), I expect a dictionary, but I'm getting a list. The Keras documentation also says that get_config returns a dictionary (https://keras.io/models/about-keras-models/).
I tried to check whether the saving method (save_model vs. model.save) makes a difference in how the model is serialized, but both give me this error:
TypeError: list indices must be integers or slices, not str
My code block:
model_config = self.keras_model.get_config()
for layer in model_config['layers']:
    name = layer['name']
    if name in update_layers:
        layer['config']['filters'] = update_layers[name]['filters']
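A hedged workaround for the block above, assuming the model is Sequential: in the Keras 2.1.x line, Sequential.get_config() returned a plain list of layer configs rather than a dict with a 'layers' key, so the code can normalize both shapes first:
model_config = self.keras_model.get_config()

# Older Keras returns a list for Sequential models; newer versions
# return a dict like {'name': ..., 'layers': [...]}.
layers = model_config if isinstance(model_config, list) else model_config['layers']

for layer in layers:
    # The layer name may sit at the top level or inside 'config'.
    name = layer.get('name') or layer['config']['name']
    if name in update_layers:
        layer['config']['filters'] = update_layers[name]['filters']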
My pip freeze:
absl-py==0.6.1
astor==0.7.1
bitstring==3.1.5
coverage==4.5.1
cycler==0.10.0
decorator==4.3.0
Django==2.1.3
easydict==1.7
enum34==1.1.6
futures==3.1.1
gast==0.2.0
geopy==1.11.0
grpcio==1.16.1
h5py==2.7.1
image==1.5.15
ImageHash==3.7
imageio==2.5.0
imgaug==0.2.5
Keras==2.1.3
kiwisolver==1.1.0
lxml==4.1.1
Markdown==3.0.1
matplotlib==2.1.0
networkx==2.2
nose==1.3.7
numpy==1.14.1
olefile==0.46
opencv-python==3.3.0.10
pandas==0.20.3
Pillow==4.2.1
prometheus-client==0.4.2
protobuf==3.6.1
pyparsing==2.3.0
pyquaternion==0.9.2
python-dateutil==2.7.5
pytz==2018.7
PyWavelets==1.0.1
PyYAML==3.12
Rtree==0.8.3
scikit-image==0.13.1
scikit-learn==0.19.1
scipy==0.19.1
Shapely==1.6.4.post1
six==1.11.0
sk-video==1.1.8
sklearn-porter==0.6.2
tensorboard==1.10.0
tensorflow-gpu==1.10.0
termcolor==1.1.0
tqdm==4.19.4
utm==0.4.2
vtk==8.1.0
Werkzeug==0.14.1
xlrd==1.1.0
xmltodict==0.11.0
I am trying out ways to deploy a TensorFlow model on Android/iOS devices. So I did the following (see the sketch after this list):
1) use tf.saved_model.builder.SavedModelBuilder to get the model in a .pb file
2) use tf.saved_model.loader.load() to verify that I can restore the model
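For reference, a minimal sketch of those two steps in the TF 1.x API (the export path and tags here are illustrative):
import tensorflow as tf

export_dir = "./export/1"  # illustrative path

# 1) Save the graph and variables as a SavedModel (saved_model.pb + variables/).
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING])
builder.save()

# 2) Verify the model restores into a fresh graph.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)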
However, when I want to do further inspection of the model using import_pb_to_tensorboard.py, following the suggestions at
1) https://medium.com/@daj/how-to-inspect-a-pre-trained-tensorflow-model-5fd2ee79ced0
2) https://hackernoon.com/running-a-tensorflow-model-on-ios-and-android-ce89446c8143
I got this error:
File "/Users/rjtang/_hack/env.tensorflow_src/lib/python3.4/site-packages/google/protobuf/internal/python_message.py", line 1083, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
.....
File "/Users/rjtang/_hack/env.tensorflow_src/lib/python3.4/site-packages/google/protobuf/internal/decoder.py", line 612, in DecodeRepeatedField
if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
....
File "/Users/rjtang/_hack/env.tensorflow_src/lib/python3.4/site-packages/google/protobuf/internal/decoder.py", line 746, in DecodeMap
raise _DecodeError('Unexpected end-group tag.')
The code and the generated .pb files are here:
https://github.com/rjt10/hear_it/blob/master/urban_sound/saved_model.pb
https://github.com/rjt10/hear_it/blob/master/urban_sound/savedmodel_save.py
https://github.com/rjt10/hear_it/blob/master/urban_sound/savedmodel_load.py
The version of tensorflow that I use is built from source "HEAD detached at v1.4.1"
Well, I understand what's happening now. TensorFlow has at least three ways to save and load a model. The graph will be serialized as one of the following three protobuf objects:
GraphDef
MetaGraphDef
SavedModel
You just need to deserialize it with the matching proto type, as in https://github.com/rjt10/hear_it/blob/master/urban_sound/model_check.py
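For example, a minimal sketch of parsing a SavedModel file and pulling out its GraphDef:
from tensorflow.core.protobuf import saved_model_pb2

# saved_model.pb is a SavedModel message, not a bare GraphDef,
# so it must be parsed with the matching proto type.
sm = saved_model_pb2.SavedModel()
with open("saved_model.pb", "rb") as f:
    sm.ParseFromString(f.read())

# A SavedModel wraps one or more MetaGraphDefs; the GraphDef sits inside.
graph_def = sm.meta_graphs[0].graph_def
print(len(graph_def.node), "nodes in the graph")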
For Android, TensorFlowInferenceInterface() expects a GraphDef, https://github.com/tensorflow/tensorflow/blob/e2be6d4c4fc9f1b7f6040b51b23190c14202e797/tensorflow/contrib/android/java/org/tensorflow/contrib/android/TensorFlowInferenceInterface.java#L541
That explains why: each consumer expects a specific container, so parsing a SavedModel file as if it were a bare GraphDef fails with the decode error above.