Unable to run TensorFlow Serving on Ubuntu using Docker

I followed all the steps listed here to use TensorFlow Serving:
https://www.tensorflow.org/tfx/serving/tensorboard
I used the command below to run it with Docker:
sudo docker run -it --rm -p 8500:8500 -p 8502:8502 -v /tmp/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu:/models/half_plus_two -v /tmp/tensorboard:/tmp/tensorboard -e MODEL_NAME=half_plus_two tensorflow/serving
But I keep getting the error below:
2022-12-19 15:51:44.300838: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: half_plus_two version: 123}
2022-12-19 15:51:44.300857: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: half_plus_two version: 123}
2022-12-19 15:51:44.300920: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: half_plus_two version: 123} failed: NOT_FOUND: Specified file path does not appear to contain a SavedModel bundle (should have a file called saved_model.pb) Specified file path: /models/half_plus_two/00000123
2022-12-19 15:52:44.301166: I tensorflow_serving/util/retrier.cc:33] Retrying of Loading servable: {name: half_plus_two version: 123} retry: 1
2022-12-19 15:52:44.301281: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: half_plus_two version: 123} failed: NOT_FOUND: Specified file path does not appear to contain a SavedModel bundle (should have a file called saved_model.pb) Specified file path: /models/half_plus_two/00000123
echo $MODELS_DIR
/tmp/serving/tensorflow_serving/servables/tensorflow/testdata
ls /tmp/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu/00000123/
variables
saved_model.pb
assets
In some of the answers I saw that I need a folder named 1 inside testdata/saved_model_half_plus_two_cpu, so I tried that and put the variables, saved_model.pb, and assets folders inside it, but that didn't work either.

Please follow the TensorFlow Serving with Docker tutorial to install TF Serving on Docker.
To verify the install, point it at the demo models and load them into the TF Serving container using the commands below.
#Location of demo models
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
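To confirm the demo model is being served, you can send the test request from the official tutorial (this assumes the container is running and port 8501 is mapped):
# Query the model's REST predict endpoint
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8501/v1/models/half_plus_two:predict
Since half_plus_two computes x/2 + 2, the expected response is { "predictions": [2.5, 3.0, 4.5] }.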
Once that is verified, run the commands below.
MODELS_DIR="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
docker run -it --rm -p 8500:8500 -p 8501:8501 \
-v "$MODELS_DIR/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-v "/tmp/tensorboard:/tmp/tensorboard" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving
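With both ports mapped, the REST model-status endpoint is a quick way to confirm the servable actually loaded (this endpoint is part of the standard TF Serving REST API):
# Should report version 123 in state AVAILABLE once loading succeeds
curl http://localhost:8501/v1/models/half_plus_two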

Related

hdfsBuilderConnect error while using TF Serving to load a model from HDFS

Here is my environment info:
TensorFlow Serving version: 1.14
OS: macOS 10.15.7
I want to load a model from HDFS using TF Serving.
When I build a tensorflow-serving:hadoop Docker image like this:
FROM tensorflow/serving:2.2.0
RUN apt update && apt install -y openjdk-8-jre
RUN mkdir /opt/hadoop-2.8.2
COPY /hadoop-2.8.2 /opt/hadoop-2.8.2
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV HADOOP_HDFS_HOME /opt/hadoop-2.8.2
ENV HADOOP_HOME /opt/hadoop-2.8.2
ENV LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:${JAVA_HOME}/jre/lib/amd64/server
# ENV PATH $PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
RUN echo '#!/bin/bash \n\n\
CLASSPATH=$(${HADOOP_HDFS_HOME}/bin/hadoop classpath --glob) \
tensorflow_model_server --port=8500 --rest_api_port=9000 \
--model_name=${MODEL_NAME} \
--model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
"$@"' > /usr/bin/tf_serving_entrypoint.sh \
&& chmod +x /usr/bin/tf_serving_entrypoint.sh
EXPOSE 8500
EXPOSE 9000
ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]
and then run:
docker run -p 9001:9000 --name tensorflow-serving-11 -e MODEL_NAME=tfrest -e MODEL_BASE_PATH=hdfs://ip:port/user/cess2_test/workspace/cess/models -t tensorflow_serving:1.14-hadoop-2.8.2
I hit the problem below. PS: I have already modified the Hadoop config in hadoop-2.8.2.
hdfsBuilderConnect(forceNewInstance=0, nn=ip:port, port=0, kerbTicketCachePath=(NULL), userName=(NULL))
error:(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Are there any suggestions on how to solve this problem?
Thanks
I solved this problem by adding the absolute Hadoop path to the classpath.
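For reference, a minimal sketch of what that fix can look like in the entrypoint script; hadoop classpath --glob expands the wildcard classpath entries into absolute jar paths, which is what libhdfs needs at runtime (exact paths depend on your Hadoop layout):
#!/bin/bash
# Expand wildcard classpath entries into absolute jar paths for libhdfs
export CLASSPATH=$(${HADOOP_HDFS_HOME}/bin/hadoop classpath --glob)
tensorflow_model_server --port=8500 --rest_api_port=9000 \
--model_name=${MODEL_NAME} \
--model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} "$@"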

Tensorflow Serving: no versions of servable half_plus_two found under base path /models

I'm doing TensorFlow Serving with Docker (see here for docs). The server runs on our infra. I've succeeded at requesting my model when the command to run the container is something like:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}
A curl request to the server returns the expected answer. The problem occurs when I try to use the model_config_file parameter. Command:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_config_file=/serving/models.conf
Config file is:
model_config_list: {
  config: {
    name: "half_plus_two",
    base_path: "/models/",
    model_platform: "tensorflow"
  }
}
When I run the container with this command, I get the error:
No versions of servable half_plus_two found under base path /models/
(I've also tried removing the trailing slash on the base_path, without success.) I've seen this post on SO that reminds us to put a version directory under the model dir, and I have one. My /models dir is:
models
|
- half_plus_two
  |
  - 1
    |
    - saved_model.pb
    - variables
    - assets
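One thing to check against this layout: in the config file, base_path has to point at the model directory itself, not its parent, because TF Serving looks for numeric version subdirectories directly under base_path. A sketch of a config matching the tree above:
model_config_list: {
  config: {
    name: "half_plus_two",
    base_path: "/models/half_plus_two",
    model_platform: "tensorflow"
  }
}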
Can someone help?
Had the same thing on Windows 10. I finally:
1. Noticed that I had forgotten to clone the tensorflow/serving repository to my local machine.
2. Ran it from an Ubuntu WSL 2 console, since from the Windows command line I couldn't convince Docker to correctly map the container's /models/half_plus_two to my local path (the -v option in the following command):
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &

How to make tensorflow-serving example work

I am trying out the TensorFlow Serving example from the tutorial page.
At the third step:
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
I get the following error message:
2020-07-19 11:54:52.858203: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: /models/half_plus_two; Permission denied
This is continuously repeated. I have installed the demo model as mentioned in the tutorial.
git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
Can someone please help with what I am missing? I am just starting out on the serving part.
Thanks
Krishnan
The problem could be with your -v parameter where you bind the path.
Try this (change the source parameter):
docker run -p 8501:8501 --mount type=bind,\
source=/path/to/yourmodels/,\
target=/models/half_plus_two/1 \
-e MODEL_NAME=half_plus_two -t tensorflow/serving
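Equivalently, with the -v shorthand used elsewhere in these threads (the source path is illustrative):
docker run -p 8501:8501 \
-v "/path/to/yourmodels/:/models/half_plus_two/1" \
-e MODEL_NAME=half_plus_two -t tensorflow/serving
Note that mounting directly onto the version directory /models/half_plus_two/1 is appropriate when the source folder itself contains saved_model.pb and variables/, rather than a numbered version subdirectory.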

Invalid argument --model_config_file_poll_wait_seconds

I'm trying to start tensorflow-serving with the following two options, as in the documentation:
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
The container does not start because it does not recognize the argument --model_config_file_poll_wait_seconds.
unknown argument: --model_config_file_poll_wait_seconds=60
usage: tensorflow_model_server
I'm on the latest Docker image, 1.14.0, and the line is taken straight from the documentation:
https://www.tensorflow.org/tfx/serving/serving_config
Does this argument even work?
Many thanks.
It seems https://www.tensorflow.org/tfx/serving/serving_config is talking about code that has not been released as a new version yet, which is odd. I will ask about that.
That page is generated from this source:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md, which mentions the --model_config_file_poll_wait_seconds flag.
However, the same document for 1.14.0 has no mention of the flag:
https://github.com/tensorflow/serving/blob/1.14.0/tensorflow_serving/g3doc/serving_config.md
Try using the nightly TensorFlow Serving image and see if it works:
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving:nightly \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
Just tried it: TensorFlow Serving 2.1.0 supports it, while 1.14.0 doesn't.
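So besides nightly, pinning any released image from 2.1.0 onward should also work:
# Same command, but with an image tag known to support the flag
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving:2.1.0 \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60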

Facing error while running Docker on Tensorflow serving image

I have trained a TensorFlow object detection model. I am trying to make a REST request using the TensorFlow Serving image on Docker (following the instructions from https://github.com/tensorflow/serving).
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/"
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/my_model:/models/work_place_safety" \
-e MODEL_NAME=work_place_safety \
tensorflow/serving &
I am facing the error message below:
$ C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied:
The source path "C:/Users/Desktop/models/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_work_place_safety;C"
doesn't exist and is not known to Docker.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I wonder why it's including ";C" at the end of the source path and throwing an error.
Any help is much appreciated.
Thanks
Resolved the issue by adding a / before $ in Git Bash:
docker run -t --rm -p 8501:8501 \
-v /$TESTDATA/my_model:/models/my_model \
-e MODEL_NAME=my_model \
tensorflow/serving &
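Once the container starts cleanly, the REST status endpoint is a quick sanity check that the model loaded (model name taken from the command above):
curl http://localhost:8501/v1/models/my_model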
What is the value of my_model? Is it saved_model_work_place_safety?
Are you sure that your saved object detection model is in the folder saved_model_work_place_safety, and that this folder is in the path $(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/?
If it is not inside testdata, you should give the correct path where saved_model_work_place_safety is present.
The folder structure should be something like this:
saved_model_work_place_safety
|
- 00000123 (or 1556272508, or 1)
  |
  - saved_model.pb
  - variables
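To confirm a version directory really contains a loadable SavedModel, the saved_model_cli tool that ships with TensorFlow can inspect it (the path shown is illustrative):
# Prints the MetaGraph tags and signature defs if the directory is a valid SavedModel
saved_model_cli show --dir saved_model_work_place_safety/00000123 --all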