TensorflowJS - React Native - Error predicting from tensor image

I'm trying to get a Keras model I converted to TensorFlow.js working in React Native, but I get an error when predicting on the image tensor.
I trained a custom TensorFlow Keras model and converted the Keras h5 model to a TensorFlow.js JSON model. In the React Native project, I use the Expo camera with TensorFlow to capture an image and predict the damage visible in the captured image.
Below are the versions used in my project and the commands and code used to convert the model and run prediction.
Versions used
React Native Project
"expo": ">=45.0.0-0 <46.0.0",
"expo-camera": "~12.2.0",
"expo-gl": "~11.3.0",
"expo-gl-cpp": "~11.3.0",
"#tensorflow/tfjs": "^4.0.0",
"#tensorflow/tfjs-react-native": "^0.8.0",
"react-native-fs": "^2.20.0",
Python
"tensorflow Version" : 2.10.0
Command used to convert the Keras h5 model to a TensorFlow.js JSON model:
tensorflowjs_converter --input_format=keras --weight_shard_size_bytes=419430400 --quantize_float16=* /path/to/model.h5 /path/to/output
Creating and using Layers model in RN
// Load layers model using model json and weights file
const models = await tf.loadLayersModel(bundleResourceIO(modelJSON, weights));
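For reference, here is a minimal sketch of how the model JSON and weight files are typically bundled and loaded with bundleResourceIO in @tensorflow/tfjs-react-native. The asset paths and the shard file name (group1-shard1of1.bin) are assumptions based on typical tensorflowjs_converter output, not the exact paths of this project:
import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

// Assumed locations of the converter output; adjust to your project layout.
const modelJSON = require('./assets/model/model.json');
const weights = require('./assets/model/group1-shard1of1.bin');

async function loadModel() {
  // Make sure the tfjs backend is initialized before loading the model.
  await tf.ready();
  return tf.loadLayersModel(bundleResourceIO(modelJSON, weights));
}
When bundling the .bin file with require, the bin extension usually also has to be added to Metro's assetExts in metro.config.js.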
Logs of capturing an image via the Expo camera and cameraWithTensors
LOG imageAsTensors: {"kept":false,"isDisposedInternal":false,"shape":[224,224,3],"dtype":"int32","size":150528,"strides":[672,3],"dataId":{"id":670},"id":980,"rankType":"3"}
LOG imageTensorReshaped: {"kept":false,"isDisposedInternal":false,"shape":[1,224,224,3],"dtype":"int32","size":150528,"strides":[150528,672,3],"dataId":{"id":670},"id":981,"rankType":"4","scopeId":408}
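For context, here is a minimal sketch of how such image tensors are commonly produced by cameraWithTensors and reshaped to a batch of one. The handler name and tensor sizes are assumptions, and the TensorCamera JSX (with its resizeWidth/resizeHeight props) is omitted:
import * as tf from '@tensorflow/tfjs';
import { cameraWithTensors } from '@tensorflow/tfjs-react-native';
import { Camera } from 'expo-camera';

const TensorCamera = cameraWithTensors(Camera);

// Passed as onReady={handleCameraStream} to <TensorCamera />; receives an iterator of image tensors.
function handleCameraStream(images) {
  const loop = async () => {
    const imageAsTensors = images.next().value;                 // e.g. shape [224, 224, 3], dtype int32
    if (imageAsTensors) {
      const imageTensorReshaped = imageAsTensors.expandDims(0); // shape [1, 224, 224, 3]
      // ...run the prediction shown below, then release the tensors...
      tf.dispose([imageAsTensors, imageTensorReshaped]);
    }
    requestAnimationFrame(loop);
  };
  loop();
}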
Predicting on the image tensor using the model
try {
  // predict against the model
  const output = await models.predict(imageTensorReshaped, { batchSize: 1 });
  return output.dataSync();
} catch (error) {
  console.log('Error predicting from tensor image', error);
}
The catch block logs the error below:
Error predicting from tensor image [TypeError: null is not an object (evaluating 'opHandler.clone')]
Expected: a prediction result array.

The issue is that the setOpHandler function is not being called in the tfjs-node files inside the package.
You can check out the corresponding issue on GitHub for a temporary fix using the provided patch files.

Related

TypeError: gl.texStorage2D is not a function. (In 'gl.texStorage2D(tex2d, 1, internalFormat, width, height)', 'gl.texStorage2D' is undefined)

Log & part of code:
Possible Unhandled Promise Rejection (id: 0):
TypeError: gl.texStorage2D is not a function. (In 'gl.texStorage2D(tex2d, 1, internalFormat, width, height)', 'gl.texStorage2D' is undefined)
// Assumed imports for this snippet, based on the dependencies listed below.
import * as FileSystem from 'expo-file-system';
import * as tf from '@tensorflow/tfjs';
import { decodeJpeg } from '@tensorflow/tfjs-react-native';
import * as cocossd from '@tensorflow-models/coco-ssd';

const file = `${FileSystem.documentDirectory}image-new-${imageIndex}.jpg`;
const imgB64 = await FileSystem.readAsStringAsync(file, {
  encoding: FileSystem.EncodingType.Base64,
});
tf.engine().startScope();
// Turn the base64 string into raw JPEG bytes, then decode it into an image tensor.
const imgBuffer = tf.util.encodeString(imgB64, "base64").buffer;
console.log("String encoded.");
const imageData = new Uint8Array(imgBuffer);
const tensor = decodeJpeg(imageData);
console.log("String decoded.");
// Load COCO-SSD and run object detection on the decoded tensor.
const model = await cocossd.load();
console.log("Model loaded.");
const predictions = await model.detect(tensor);
console.log("Prediction found.");
Dependencies:
"expo-gl": "~12.0.1",
"expo-gl-cpp": "^11.4.0",
"#tensorflow-models/coco-ssd": "^2.2.2",
"#tensorflow/tfjs": "4.1.0",
"#tensorflow/tfjs-react-native": "^0.8.0",
"#react-native-async-storage/async-storage": "~1.17.3",
"react-native-fs": "^2.20.0",
"expo-file-system": "~15.1.1",
Hello, I am trying to perform object detection using TensorFlow and COCO-SSD in a React Native app (an Expo dev build). This is the error I'm getting. Any pointers as to what might be going wrong? Both console.log("String encoded."); and console.log("String decoded."); do run, so I suspect it is something to do with the COCO-SSD model and the "expo-gl" and "expo-gl-cpp" packages, both of which are installed. Thank you.
EDIT: I get the same error running https://github.com/tensorflow/tfjs-examples/tree/master/react-native/image-classification/expo on an Android emulator. It works perfectly fine on an iPhone 13 Max, though.

DeepPavlov error loading the model from Tensorflow (from_tf=True)

I'm trying to load the ruBERT model into DeepPavlov as follows:
# config_path is a dict
config_path = {
    "chainer": {
        "in": ["x"],
        "in_y": ["y"],
        "out": ["y_pred_labels", "y_pred_probas"],
        "pipe": [
            ...
        ]
    }
}
model = build_model(config_path, download=False)
At the same time, I have all the files of the original ruBERT model locally. However, an error is thrown when building the model:
OSError: Error no file named pytorch_model.bin found in directory ruBERT_hFace2 but there is a file for TensorFlow weights. Use `from_tf=True` to load this model from those weights.
However, there is no clear explanation anywhere of how to pass this parameter through the build_model function.
How do I pass this parameter to build_model correctly?
UPDATE 1
At the moment, DeepPavlov 1.0.2 is installed.
The checkpoint of the model consists of the following files:
Currently there is no way to pass arbitrary parameters via build_model. If you need an additional parameter, you should adjust the configuration file accordingly. Alternatively, you can change the parsed configuration via Python code:
from deeppavlov import build_model, configs, evaluate_model
from deeppavlov.core.commands.utils import parse_config

config = parse_config("config.json")
...
model = build_model(config, download=True, install=True)
But first, please make sure that you are using the latest version of DeepPavlov. In addition, please take a look at our recent article on Medium. If you need further assistance, please provide more details.

`new NativeEventEmitter()` requires a non-null argument

I am facing this problem:
new NativeEventEmitter() requires a non-null argument
I am working with Expo Go (Expo version 45.0.0) on iOS. The problem occurs when I import the following two libraries:
import { utils } from '@react-native-firebase/app';
import vision from '@react-native-firebase/ml-vision';
I am using:
"#react-native-firebase/app": "^14.9.3",
"#react-native-firebase/ml-vision": "^7.4.13",

Can you change your learning rate while training in Tensorflow Object Detection API

I understand that it's probably better to lower your learning rate when it is converging.
My confusion is, can you just change the value in the config file after certain steps?
If yes, which config file should I change? The one generated in train folder or the one in the downloaded model folder?
Do I need to export to frozen graph first for the changes to take effect?
Thank you in advance for helping me!
You have to change the config file in the downloaded model folder. The config file in the train folder is just a copy of it.
To decay the learning rate during training you can write something like this in your config file:
optimizer {
  momentum_optimizer: {
    learning_rate: {
      manual_step_learning_rate {
        initial_learning_rate: 0.0002
        schedule {
          step: 900000
          learning_rate: .00002
        }
        schedule {
          step: 1200000
          learning_rate: .000002
        }
      }
    }
    momentum_optimizer_value: 0.9
  }
  use_moving_average: false
}
Have a look here for more example config files.
Exporting to a frozen graph freezes all the parameters of your model, so they cannot be trained anymore. That is why you only freeze the graph once you have finished training and want to use your model for inference.

Error when restoring model (Multiple OpKernel registrations match NodeDef)

I'm getting an error when attempting to restore a model from a checkpoint.
This is with the nightly Windows GPU build for python 3.5 on 2017-06-13.
InvalidArgumentError (see above for traceback):
Multiple OpKernel registrations match NodeDef 'Decoder/decoder/GatherTree = GatherTree[T=DT_INT32, _device="/device:CPU:0"](Decoder/decoder/TensorArrayStack_1/TensorArrayGatherV3, Decoder/decoder/TensorArrayStack_2/TensorArrayGatherV3, Decoder/decoder/while/Exit_18)': 'op: "GatherTree" device_type: "GPU" constraint { name: "T" allowed_values { list { type: DT_INT32 } } }' and 'op: "GatherTree" device_type: "GPU" constraint { name: "T" allowed_values { list { type: DT_INT32 } } }'
[[Node: Decoder/decoder/GatherTree = GatherTree[T=DT_INT32, _device="/device:CPU:0"](Decoder/decoder/TensorArrayStack_1/TensorArrayGatherV3, Decoder/decoder/TensorArrayStack_2/TensorArrayGatherV3, Decoder/decoder/while/Exit_18)]]
The model is using dynamic_decode with beam search, which otherwise works fine in training mode when not using beam search for decoding.
Any ideas on what this means or how to debug it?
I also faced the same issue a day ago. It turned out to be a bug in TensorFlow. It's resolved now, and BeamSearchDecoder should work with the latest build of TensorFlow.