Tensorflow Serving: no versions of servable half_plus_two found under base path /models - tensorflow

I'm doing TensorFlow Serving with Docker (see here for docs). The server runs on our infra here. I've succeeded at requesting my model when the command to run the container is something like:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}
A curl request to the server returns the expected answer. Problem occurs when I try to use the model_config_file parameter. Command:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_config_file=/serving/models.conf
Config file is:
model_config_list: {
  config: {
    name: "half_plus_two",
    base_path: "/models/",
    model_platform: "tensorflow"
  }
}
When I run the container with this command, I get the error:
No versions of servable half_plus_two found under base path /models/
(I've also tried removing the trailing slash on the base_path, with no more success.) I've seen a post on SO that reminds us to put a version folder under the model dir, and I have one. My /models dir is:
models
|
- half_plus_two
|
- 1
|
- saved_model.pb
- variables
- assets
Can someone help?

Had the same thing on Windows 10. I finally:
Noticed that I forgot to clone the tensorflow/serving repository to my local machine
Ran it from an Ubuntu WSL 2 console, because from the Windows command line I couldn't convince Docker to correctly map the container's /models/half_plus_two to my local path (the -v option in the following command):
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
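As a side note on the model_config_file variant from the question: TensorFlow Serving expects base_path to point at the model's own directory (the one containing the numeric version folders), not at the parent /models directory. A models.conf matching the directory layout shown above would therefore look roughly like this:
model_config_list: {
  config: {
    name: "half_plus_two",
    base_path: "/models/half_plus_two",
    model_platform: "tensorflow"
  }
}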

Related

Unable to run tensorflow serving on Ubuntu using docker

I followed all the steps listed here to use TensorFlow Serving:
https://www.tensorflow.org/tfx/serving/tensorboard
I used the command below to run it with Docker:
sudo docker run -it --rm -p 8500:8500 -p 8502:8502 \
-v /tmp/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu:/models/half_plus_two \
-v /tmp/tensorboard:/tmp/tensorboard \
-e MODEL_NAME=half_plus_two \
tensorflow/serving
But I keep getting the error below:
2022-12-19 15:51:44.300838: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: half_plus_two version: 123}
2022-12-19 15:51:44.300857: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: half_plus_two version: 123}
2022-12-19 15:51:44.300920: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: half_plus_two version: 123} failed: NOT_FOUND: Specified file path does not appear to contain a SavedModel bundle (should have a file called saved_model.pb) Specified file path: /models/half_plus_two/00000123
2022-12-19 15:52:44.301166: I tensorflow_serving/util/retrier.cc:33] Retrying of Loading servable: {name: half_plus_two version: 123} retry: 1
2022-12-19 15:52:44.301281: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: half_plus_two version: 123} failed: NOT_FOUND: Specified file path does not appear to contain a SavedModel bundle (should have a file called saved_model.pb) Specified file path: /models/half_plus_two/00000123
echo $MODELS_DIR
/tmp/serving/tensorflow_serving/servables/tensorflow/testdata
ls /tmp/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu/00000123/
variables
saved_model.pb
assets
In some of the answers I saw that I need a folder named 1 inside testdata/saved_model_half_plus_two_cpu. I tried that and put the variables, saved_model.pb and assets folders inside it, but that didn't work either.
Please follow the TensorFlow Serving with Docker tutorial to install TF Serving on Docker.
To verify the install, provide the path to the demo models and load them into the TF Serving container using the commands below.
#Location of demo models
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
Once verified, run the commands below.
MODELS_DIR="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
docker run -it --rm -p 8500:8500 -p 8501:8501 \
-v "$MODELS_DIR/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-v "/tmp/tensorboard:/tmp/tensorboard" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving
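Once the container is up, you can confirm the model is being served with the REST call from the same tutorial; it should return the predictions shown below:
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8501/v1/models/half_plus_two:predict
{ "predictions": [2.5, 3.0, 4.5] }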

GraphDB Docker Container Fails to Run: adoptopenjdk/openjdk12:alpine

When using the standard Dockerfile available here, GraphDB fails to start with the following output:
Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME
Looking into it, the Dockerfile uses adoptopenjdk/openjdk11:alpine, which was recently updated to Alpine 3.14.
If I switch to an older Docker image (or use adoptopenjdk/openjdk12:alpine), then GraphDB starts without a problem.
How can I fix this while still using the latest version of adoptopenjdk/openjdk11:alpine?
Below is the Dockerfile:
FROM adoptopenjdk/openjdk11:alpine
# Build time arguments
ARG version=9.1.1
ARG edition=ee
ENV GRAPHDB_PARENT_DIR=/opt/graphdb
ENV GRAPHDB_HOME=${GRAPHDB_PARENT_DIR}/home
ENV GRAPHDB_INSTALL_DIR=${GRAPHDB_PARENT_DIR}/dist
WORKDIR /tmp
RUN apk add --no-cache bash curl util-linux procps net-tools busybox-extras wget less && \
curl -fsSL "http://maven.ontotext.com/content/groups/all-onto/com/ontotext/graphdb/graphdb-${edition}/${version}/graphdb-${edition}-${version}-dist.zip" > \
graphdb-${edition}-${version}.zip && \
bash -c 'md5sum -c - <<<"$(curl -fsSL http://maven.ontotext.com/content/groups/all-onto/com/ontotext/graphdb/graphdb-${edition}/${version}/graphdb-${edition}-${version}-dist.zip.md5) graphdb-${edition}-${version}.zip"' && \
mkdir -p ${GRAPHDB_PARENT_DIR} && \
cd ${GRAPHDB_PARENT_DIR} && \
unzip /tmp/graphdb-${edition}-${version}.zip && \
rm /tmp/graphdb-${edition}-${version}.zip && \
mv graphdb-${edition}-${version} dist && \
mkdir -p ${GRAPHDB_HOME}
ENV PATH=${GRAPHDB_INSTALL_DIR}/bin:$PATH
CMD ["-Dgraphdb.home=/opt/graphdb/home"]
ENTRYPOINT ["/opt/graphdb/dist/bin/graphdb"]
EXPOSE 7200
The issue comes from an update in the base image. A few weeks ago AdoptOpenJDK switched to Alpine 3.14, which has some issues with older container runtimes (runc). The issue can be seen in the release notes: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0
Updating your Docker will fix the issue. However, if you don't wish to update your Docker, there's a workaround.
Some additional info:
The cause of the issue is that containers based on Alpine 3.14 running under older Docker versions have problems with the test flag "-x", so if [ -x /opt/java/openjdk/bin/java ] returns false even though Java is there and is executable.
You can work around this for now by:
Pull the GraphDB distribution
Unzip it
Open "setvars.in.sh" in the bin folder
Find and remove the if block around line 32
if [ ! -x "$JAVA" ]; then
echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
exit 1
fi
Zip it again and provide it in the Dockerfile without pulling it from maven.ontotext.com
Passing it to the Dockerfile is done with 'ADD', as sketched below.
You can check the GraphDB free version's Dockerfile for a reference on how to pass the zip file to the Dockerfile https://github.com/Ontotext-AD/graphdb-docker/blob/master/free-edition/Dockerfile
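A minimal sketch of what the edited Dockerfile could look like, assuming the patched distribution has been re-zipped next to the Dockerfile as graphdb-ee-9.1.1-dist.zip (the file name and version here are just placeholders):
FROM adoptopenjdk/openjdk11:alpine
ENV GRAPHDB_PARENT_DIR=/opt/graphdb
ENV GRAPHDB_HOME=${GRAPHDB_PARENT_DIR}/home
ENV GRAPHDB_INSTALL_DIR=${GRAPHDB_PARENT_DIR}/dist
WORKDIR /tmp
# Copy the locally patched distribution instead of pulling it from maven.ontotext.com
ADD graphdb-ee-9.1.1-dist.zip /tmp/
RUN apk add --no-cache bash curl util-linux procps net-tools busybox-extras wget less && \
mkdir -p ${GRAPHDB_PARENT_DIR} && \
cd ${GRAPHDB_PARENT_DIR} && \
unzip /tmp/graphdb-ee-9.1.1-dist.zip && \
rm /tmp/graphdb-ee-9.1.1-dist.zip && \
mv graphdb-ee-9.1.1 dist && \
mkdir -p ${GRAPHDB_HOME}
ENV PATH=${GRAPHDB_INSTALL_DIR}/bin:$PATH
CMD ["-Dgraphdb.home=/opt/graphdb/home"]
ENTRYPOINT ["/opt/graphdb/dist/bin/graphdb"]
EXPOSE 7200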

Locally test AWS Lambda container with .NET 5 web api and Lambda RIE

I'm following the instructions to locally test lambda container https://docs.aws.amazon.com/lambda/latest/dg/images-test.html
but I am unable to do so.
I've created a sample project to reproduce it https://gitlab.com/sunnyatticsoftware/sandbox/lambda-dotnet5-webapi (see the README for step by step on its generation)
Basically I am using an Amazon dotnet template that generates an AWS Lambda function as a .NET 5 web api using containers.
It's all good with the project. The Dockerfile is:
FROM public.ecr.aws/lambda/dotnet:5.0
WORKDIR /var/task
COPY "bin/Release/net5.0/publish" .
Now I want to test it locally using the Amazon Lambda Runtime Interface Emulator (RIE) and these are the steps I follow:
Build project with dotnet build -c Release
Publish artifacts with dotnet publish -c Release
Build docker image with docker build -t lambda-dotnet .
Download the RIE with
mkdir -p ~/.aws-lambda-rie && curl -Lo ~/.aws-lambda-rie/aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && chmod +x ~/.aws-lambda-rie/aws-lambda-rie
I can see the emulator downloaded properly
ls -la ~/.aws-lambda-rie/aws-lambda-rie
-rw-r--r-- 1 diego.martin 1049089 8155136 Feb 22 14:32 /c/Users/diego.martin/.aws-lambda-rie/aws-lambda-rie
Run the emulator, passing the Lambda image:
docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 --entrypoint /aws-lambda/aws-lambda-rie lambda-dotnet:latest
Here is where I get the error:
12997dddc6e50aca3020527be30a1479eee9ceef412ab5009b99e9eb8cf1fa67
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "C:/Users/diego.martin/AppData/Local/Programs/Git/aws-lambda/aws-lambda-rie": stat C:/Users/diego.martin/AppData/Local/Programs/Git/aws-lambda/aws-lambda-rie: no such file or directory: unknown.
What am I missing? I am not specifying any entrypoint because I don't have any.
PS: The last step would be to send some lambda event to my container's function with
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
The Lambda Docker images for dotnet already include the RIE, so the following is enough (see the repo for further details):
To build the image:
docker build -t lambda-dotnet:latest .
To run it:
docker run -p 9000:8080 lambda-dotnet "LambdaDotNet5::LambdaDotNet5.LambdaEntryPoint::FunctionHandlerAsync"
And then to test it, I can use curl from a different terminal:
curl -vX POST http://localhost:9000/2015-03-31/functions/function/invocations -d @test_request.json --header "Content-Type: application/json"
In the test_request.json file I can put the JSON for the event I want to send to the Lambda.
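The exact shape of test_request.json depends on the event source the entry point expects; for an API Gateway proxy-style handler, a minimal hand-written event (all values here are hypothetical) could look like:
{
  "httpMethod": "GET",
  "path": "/",
  "headers": { "Accept": "application/json" },
  "body": null,
  "isBase64Encoded": false
}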

Invalid argument --model_config_file_poll_wait_seconds

I'm trying to start tensorflow-serving with the following two options, as shown in the documentation:
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
The container does not start because it does not recognize the argument --model_config_file_poll_wait_seconds.
unknown argument: --model_config_file_poll_wait_seconds=60
usage: tensorflow_model_server
I'm on the latest Docker image, 1.14.0, and the line is taken straight from the documentation:
https://www.tensorflow.org/tfx/serving/serving_config
Does this argument even work?
Many thanks.
It seems https://www.tensorflow.org/tfx/serving/serving_config is talking about code that has not been released as a new version yet, which is odd. I will ask about that.
That page is generated from this source:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md, which mentions the --model_config_file_poll_wait_seconds flag.
However, the same document for 1.14.0 has no mention of the flag:
https://github.com/tensorflow/serving/blob/1.14.0/tensorflow_serving/g3doc/serving_config.md
Try using the nightly tensorflow serving image and see if it works.
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving:nightly \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
Just tried. TensorFlow Serving 2.1.0 supports it, while 1.14.0 doesn't.
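If you'd rather not run the nightly image, pinning a release tag that includes the flag should also work (2.1.0, per the comment above):
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving:2.1.0 \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60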

Facing error while running Docker on Tensorflow serving image

I have trained a TensorFlow object detection model. I am trying to make a REST request using the TensorFlow Serving image on Docker (following the instructions from https://github.com/tensorflow/serving).
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/"
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/my_model:/models/work_place_safety" \
-e MODEL_NAME=work_place_safety \
tensorflow/serving &
I am facing the error message below:
$ C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied:
The source path "C:/Users/Desktop/models/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_work_place_safety;C"
doesn't exist and is not known to Docker.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I wonder why it's including ";C" at the end of the source path and throwing an error.
Any Help is much appreciated.
Thanks
Resolved the issue by adding a / before $ in Git Bash:
docker run -t --rm -p 8501:8501 \
-v /$TESTDATA/my_model:/models/my_model \
-e MODEL_NAME=my_model \
tensorflow/serving &
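The ";C" suffix comes from Git Bash (MSYS) automatically rewriting POSIX-style paths in the -v argument into Windows paths. Besides prefixing a /, another common workaround on Git for Windows is to disable that conversion for the command, roughly like this:
MSYS_NO_PATHCONV=1 docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/my_model:/models/my_model" \
-e MODEL_NAME=my_model \
tensorflow/serving &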
What is the value of my_model? Is it saved_model_work_place_safety?
Are you sure that your saved object detection model is in the folder saved_model_work_place_safety, and that this folder is under the path $(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/?
If it is not inside testdata, you should give the correct path where saved_model_work_place_safety is present.
Folder structure should be something like this:
saved_model_work_place_safety
|
- 00000123 (or 1556272508, or 1)
|
- saved_model.pb
- variables