How to use other tokenizers (NLTK, Jieba, etc.) in TensorFlow Serving - tensorflow

Recently, I have been using Estimator to train and deploy a TensorFlow model, but when I deploy the model (it was exported using the Estimator serving_fn, which includes a tf.py_func) with TensorFlow Serving, I get the error below.
I found an issue on GitHub saying that TensorFlow Serving can't support tf.py_func.
Can anyone help?
I want to implement a tokenization function using an external tokenizer (NLTK, Jieba).
The error:
Invalid argument: No OpKernel was registered to support Op 'PyFunc' used by {{node map/while/PyFunc}}with these attrs: [Tout=[DT_STRING], token="pyfunc_4", _output_shapes=[<unknown>], Tin=[DT_STRING]]
Registered devices: [CPU]
Registered kernels:
<no registered kernels>

Have you tried using the TensorFlow-native tokenizers? E.g., see https://www.tensorflow.org/beta/tutorials/tensorflow_text/intro#tokenization
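For reference, a minimal sketch of graph-native tokenization with the tensorflow_text package (this assumes tensorflow-text is installed alongside a matching TensorFlow version); because these tokenizers are ordinary TensorFlow ops, they can be exported in a SavedModel and served without tf.py_func:
import tensorflow as tf
import tensorflow_text as text  # pip install tensorflow-text

# WhitespaceTokenizer is a graph-compatible op, so it survives SavedModel export.
tokenizer = text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["everything not saved will be lost."])
print(tokens.to_list())  # [[b'everything', b'not', b'saved', b'will', b'be', b'lost.']]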

Related

Model Optimizer in Intel OpenVINO

I used
import tensorflow as tf
model = tf.keras.models.load_model('model.h5')
tf.saved_model.save(model, 'model')
to save my image classification model (TensorFlow version on Google Colab = 2.9.2, Intel OpenVINO version [Development Tools] = 2021.4.2 LTS).
---------------------------------------------------------------------------------------
C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\model_optimizer>python mo_tf.py --saved_model_dir C:\Users\dchoi\CNNProejct_Only_saved_English\saved_model --input_shape [1,32,320,240,3] --output_dir C:\Users\dchoi\CNNproject_only_output_English\output_model
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: C:\Users\dchoi\CNNproject_only_output_English\output_model
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,32,320,240,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
- Inference Engine found in: C:\Users\dchoi\AppData\Local\Programs\Python\Python38\lib\site-packages\openvino
Inference Engine version: 2021.4.0-3839-cd81789d294-releases/2021/4
Model Optimizer version: 2021.4.2-3974-e2a469a3450-releases/2021/4
[ WARNING ] Model Optimizer and Inference Engine versions do no match.
[ WARNING ] Consider building the Inference Engine Python API from sources or reinstall OpenVINO (TM) toolkit using "pip install openvino==2021.4"
2022-11-19 01:34:44.207311: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-11-19 01:34:44.207542: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
C:\Users\dchoi\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\autograph\impl\api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2022-11-19 01:34:46.961002: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-11-19 01:34:46.961949: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2022-11-19 01:34:46.962904: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2022-11-19 01:34:46.969471: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-SCBPOUA
2022-11-19 01:34:46.969727: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-SCBPOUA
2022-11-19 01:34:46.970663: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-19 01:34:46.971135: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[ FRAMEWORK ERROR ] Cannot load input model: SavedModel format load failure: NodeDef mentions attr 'validate_shape' not in Op<name=AssignVariableOp; signature=resource:resource, value:dtype -> ; attr=dtype:type; is_stateful=true>; NodeDef: {{node AssignNewValue}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
------------------------------------------------------------------------------------------
I am getting this kind of error even after running
install_prerequirement/install_prerequisites_tf2.bat
Any help would be appreciated.
Your error seems to indicate a mismatch in the TensorFlow version used to load the GraphDef file. In my replication, I was able to generate the Intermediate Representation (IR) files using TensorFlow 2.5.3. Here is the full Model Optimizer command used:
mo_tf.py --saved_model_dir <path_to_model\IMGC.h5_to_saved_model.pb> --input_shape [1,320,240,3] --output_dir <path_for_output_files>
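If regenerating the SavedModel is an option, here is a minimal sketch of the re-export step, assuming the failure comes from the TF 2.9 export and that an environment with TensorFlow 2.5.x is available (the file and directory names are placeholders):
# Run in an environment with TensorFlow 2.5.x, then point the Model Optimizer
# at the resulting SavedModel directory.
import tensorflow as tf

model = tf.keras.models.load_model('model.h5')   # placeholder path
tf.saved_model.save(model, 'saved_model_tf25')   # placeholder output directory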

Time-consuming TensorFlow CUDA driver check in AWS Lambda

I've been running an AWS Lambda with a mounted EFS, where I've installed TensorFlow 2.4. When I run the Lambda (and every Lambda that uses TensorFlow 2.4), it wastes a lot of time (about 4 minutes, sometimes more) on some TensorFlow setup checks, so I have to set a very wide timeout to work around the issue.
These are the prints that the Lambda produces:
2022-05-17 06:33:21.917336: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-05-17 06:33:21.921992: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib
2022-05-17 06:33:21.922025: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2022-05-17 06:33:21.922048: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (169.254.137.137): /proc/driver/nvidia/version does not exist
2022-05-17 06:33:21.922460: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-05-17 06:33:22.339905: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2022-05-17 06:33:22.340468: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2500010000 Hz
[WARNING] 2022-05-17T06:33:22.436Z c4500036-5b77-4808-a062-f8ae820b0317 AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x7f65bfb37280> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
Cause: unsupported operand type(s) for -: 'NoneType' and 'int'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
What I need is to avoid this wasted time and get a clean, fast run.
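One mitigation worth trying (an assumption on my part: that part of the delay comes from the CUDA driver and library probing) is to force CPU-only mode before TensorFlow is imported; this is a sketch, not a verified fix for the full delay:
import os

# Hide GPUs and quiet the startup logs before TensorFlow is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import tensorflow as tf

# Confirm that no GPU devices are visible to TensorFlow.
print(tf.config.list_physical_devices("GPU"))  # expected: []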

Error with TPUClusterResolver for Cloud TPU v3 Pod with TensorFlow 2.1

I'm trying to use my (preemptible) Cloud TPU v3-256 on my Google Cloud Compute Engine VM with TensorFlow 2.1, but it doesn't seem to be working, as TPUClusterResolver throws a "Could not lookup TPU metadata" error.
Using individual (non-preemptible) TPUs works fine as long as I use the grpc:// address rather than the TPU name. However, neither individual TPUs nor my TPU Pod work when using the TPU name; both throw this error.
Can someone help me fix this issue?
Code:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-name', zone='europe-west4-a', project='my-project') # The zone, project and TPU Name are correct
Output:
ValueError: Could not lookup TPU metadata from name 'my-tpu-name'. Please double
check the tpu argument in the TPUClusterResolver constructor.
Exception: Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=True
from the Google Compute Engine metadata service. Response: {'metadata-flavor': 'Google',
'date': 'Thu, 28 May 2020 17:42:35 GMT', 'content-type': 'text/html; charset=UTF-8',
'server': 'Metadata Server for VM', 'content-length': '1629', 'x-xss-protection': '0',
'x-frame-options': 'SAMEORIGIN', 'status': '404'}
I suspect it could be a mismatch in one of the following between the Compute Engine VM and the TPU: TensorFlow version, zone, or project.
If you create both the TPU and the GCE VM with the same TensorFlow version (2.1 or 2.2), in the same project and zone, you can just provide the TPU name to TPUClusterResolver and it should work fine:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-name')
You can omit the TPU name if you set the TPU_NAME environment variable (export TPU_NAME=my-tpu-name) on your VM.
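For completeness, a sketch of the usual TF 2.x TPU initialization around the resolver (note that in TF 2.1 the strategy class is tf.distribute.experimental.TPUStrategy rather than tf.distribute.TPUStrategy):
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-name')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)  # TF 2.1 class name

with strategy.scope():
    # Build and compile the model inside the strategy scope.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer='adam', loss='mse')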

Can't use tensorflow.keras.layers.CuDNNLSTM or keras.layers.CuDNNLSTM in my Colab hosted runtime

When I tried to use either tensorflow.keras.layers.CuDNNLSTM or keras.layers.CuDNNLSTM, I got the following error:
InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNN' used by {{node cu_dnnlstm/CudnnRNN}}with these attrs: [dropout=0, seed=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="lstm", is_training=true, seed2=0]
Registered devices: [CPU, XLA_CPU]
I am using the hosted runtime and presumed it supports GPU as well, but the error message above shows no GPU is registered. I'm not sure what the problem is; any clue would be appreciated.
You need to explicitly request a GPU-enabled runtime.
From the Runtime menu, select "Change runtime type", then select GPU under "Hardware accelerator".
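After switching the runtime type, a quick sanity check (not part of the original answer) to confirm the GPU is actually visible:
import tensorflow as tf

# Should list at least one GPU once the runtime type is set to GPU.
print(tf.config.list_physical_devices('GPU'))
print(tf.test.gpu_device_name())  # e.g. '/device:GPU:0'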

No OpKernel was registered to support Op 'LRNGrad' on Android

I developed a TensorFlow-based C++ application that runs successfully on Linux. Now I'm trying to develop an Android version, but I can't fix the following error: Invalid argument: No OpKernel was registered to support Op 'LRNGrad' with these attrs. Registered kernels:
[[Node: gradients/localresponsenorm1_grad/LRNGrad =
LRNGrad[T=DT_FLOAT, alpha=0.0001, beta=0.5, bias=2, depth_radius=5]
(gradients/maxpool1_grad/MaxPoolGrad, conv2d2, localresponsenorm1)]]
I've added all the kernels available for Android to the Android build via //tensorflow/core/kernels:android_all_ops in core/kernels/BUILD, and I even included lrn_op.cc in the build separately, but it has no effect.
My Linux build works fine. What should I do? Thanks.
Does it actually say <no registered kernels> in your output? I'm assuming this just got interpreted as HTML and rendered invisible.
Which library are you depending on for your Android app? tensorflow/core:android_tensorflow_lib should already contain this kernel. (android_all_ops is not actually used for any targets internal to TensorFlow, which is somewhat misleading.)