I'm pretty new to TensorFlow Serving, and I followed the basic instructions from: https://tensorflow.google.cn/serving/serving_basic
All the previous steps are OK, but when I run:
docker run -p 8500:8500 --mount type=bind,source=/tmp/mnist,target=/models/mnist -e MODEL_NAME=mnist -t tensorflow/serving &
It gave me:
flag provided but not defined: --mount
Can someone help me with it?
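For what it's worth, the --mount flag for docker run only exists on newer Docker engines (17.06 and later), so on an older install the equivalent -v bind mount is the usual fallback. A sketch using the same paths as above:
docker run -p 8500:8500 \
  -v /tmp/mnist:/models/mnist \
  -e MODEL_NAME=mnist -t tensorflow/serving &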
I am trying out the TensorFlow Serving example from the tutorial page. At the third step:
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
I get the following error message:
2020-07-19 11:54:52.858203: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: /models/half_plus_two; Permission denied
This is continuously repeated. I have installed the demo model as mentioned in the tutorial.
git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
Can someone please help me figure out what I am missing? I am just starting off on the serving part.
Thanks
Krishnan
The problem could be with your -v parameter, where you are binding the path. Try changing the source parameter:
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/yourmodels/,target=/models/half_plus_two/1 \
  -e MODEL_NAME=half_plus_two -t tensorflow/serving
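If the source path is correct, it also helps to confirm on the host that the directory is readable and contains a numeric version subdirectory. A quick check, using the paths from the question (the version directory name is whatever ships with the testdata, e.g. 00000123):
ls -ld "$TESTDATA/saved_model_half_plus_two_cpu"
ls "$TESTDATA/saved_model_half_plus_two_cpu"   # expect a numeric version directory such as 00000123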
I am using Django + Celery + Redis (celery==4.4.0). Locally it is working fine, but when I dockerize it, I am getting the above error.
I am using the following commands to run it locally as well as inside the container.
CMDs:
celery -A nrn worker -l info
docker run -d -p 6379:6379 redis
flower -A nrn --port=5555
Any help is highly appreciated.
settings.py:
import os

CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_BROKER_URL = os.environ.get('redis', 'redis://127.0.0.1:6379/')
Take a look at the documentation. It's a warning, though, not an error (see the code). Running Celery under root is an error only when you allow pickle serialization, which is not enabled by default (see here).
However, it's still best practice to run Celery with lower privileges. In Docker (with a Debian-based image), I choose to run Celery under nobody:nogroup. I use this Dockerfile:
FROM python:3.6

# Don't write .pyc files and keep stdout/stderr unbuffered inside the container
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /srv/celery

# Copy the application code, the requirements, and the entrypoint script
COPY ./app app
COPY ./requirements.txt /tmp/requirements.txt
COPY ./celery.sh celery.sh

RUN pip install --no-cache-dir \
    -r /tmp/requirements.txt

# Log and state directories used by the worker (see celery.sh)
VOLUME ["/var/log/celery", "/var/run/celery"]

CMD ["./celery.sh"]
where celery.sh looks as follows:
#!/usr/bin/env bash
# Create the runtime and log directories and hand them to the unprivileged user
mkdir -p /var/run/celery /var/log/celery
chown -R nobody:nogroup /var/run/celery /var/log/celery

# Replace the shell with the worker process, dropped to nobody:nogroup
exec celery --app=app worker \
    --loglevel=INFO --logfile=/var/log/celery/worker-example.log \
    --statedb=/var/run/celery/worker-example#%h.state \
    --hostname=worker-example#%h \
    --queues=celery.example -O fair \
    --uid=nobody --gid=nogroup
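A rough usage sketch for this setup (the image, container, and network names are illustrative, and the broker environment variable depends on how your own settings.py reads it; the question's settings use an env var literally named 'redis'):
# Build the worker image from the Dockerfile above
docker build -t celery-worker .

# Run a Redis broker and the worker on a shared user-defined network
docker network create celery-net
docker run -d --name redis --network celery-net redis
docker run -d --name worker --network celery-net \
  -e redis=redis://redis:6379/ \
  celery-worker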
I'm trying to start tensorflow-serving with the following two options, as shown in the documentation:
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
The container does not start because it does not recognize the argument --model_config_file_poll_wait_seconds.
unknown argument: --model_config_file_poll_wait_seconds=60
usage: tensorflow_model_server
I'm on the latest Docker image, 1.14.0, and the line is taken straight from the documentation:
https://www.tensorflow.org/tfx/serving/serving_config
Does this argument even work?
Many thanks.
It seems https://www.tensorflow.org/tfx/serving/serving_config is talking about code that has not been released as a new version yet, which is odd. I will ask about that.
That page is generated from this source:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md, and it mentions the --model_config_file_poll_wait_seconds flag.
However, the same document for 1.14.0 has no mention of the flag:
https://github.com/tensorflow/serving/blob/1.14.0/tensorflow_serving/g3doc/serving_config.md
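A quick way to check which flags the server binary in a given image actually supports is to ask it to print its usage (a hedged diagnostic: unknown flags also make it dump the full flag list, as in the error above):
# Prints the usage/flag list of the tensorflow_model_server baked into the 1.14.0 image
docker run --rm tensorflow/serving:1.14.0 --help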
Try using the nightly tensorflow serving image and see if it works.
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving:nightly \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
Just tried it. TensorFlow Serving 2.1.0 supports it, while 1.14.0 doesn't.
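For reference, the file passed via --model_config_file is a ModelServerConfig in protobuf text format. A minimal sketch that writes one (the model name and base path are illustrative and should match what is mounted under /models):
cat > "$(pwd)/models/models.config" <<'EOF'
model_config_list {
  config {
    name: 'half_plus_two'
    base_path: '/models/half_plus_two'
    model_platform: 'tensorflow'
  }
}
EOF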
I have trained a TensorFlow object detection model. I am trying to make a REST request using the tensorflow/serving image on Docker (following the instructions from https://github.com/tensorflow/serving).
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/"
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/my_model:/models/work_place_safety" \
-e MODEL_NAME=work_place_safety \
tensorflow/serving &
I am facing the error message below:
$ C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied:
The source path "C:/Users/Desktop/models/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_work_place_safety;C"
doesn't exist and is not known to Docker.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I wonder why it's including ";C" at the end of the source path and throwing an error.
Any help is much appreciated.
Thanks
Resolved the issue by adding a / before $ in Git Bash:
docker run -t --rm -p 8501:8501 \
-v /$TESTDATA/my_model:/models/my_model \
-e MODEL_NAME=my_model \
tensorflow/serving &
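Another option in Git Bash is to disable MSYS path conversion for the command instead of prefixing the path (MSYS_NO_PATHCONV is a Git-for-Windows setting, so treat this as an assumption about your shell):
MSYS_NO_PATHCONV=1 docker run -t --rm -p 8501:8501 \
  -v "$TESTDATA/my_model:/models/my_model" \
  -e MODEL_NAME=my_model \
  tensorflow/serving &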
What is the value of my_model? Is it saved_model_work_place_safety?
Are you sure that your saved object detection model is in the folder saved_model_work_place_safety, and that the folder is in the path $(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/?
If it is not inside testdata, you should specify the correct path where saved_model_work_place_safety is present.
The folder structure should be something like this:
saved_model_work_place_safety => 00000123 or 1556272508 or 1 => saved_model.pb file and the variables folder.
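For illustration, the layout on disk would look roughly like this (the version number is arbitrary; the file names follow the standard SavedModel convention):
saved_model_work_place_safety/
└── 1/
    ├── saved_model.pb
    └── variables/
        ├── variables.data-00000-of-00001
        └── variables.index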
After downloading the openshift/node Docker image, the container fails to run:
$ docker logs 64e3eeb60cbc
/usr/local/bin/origin-node-run.sh: line 15: HOST_ETC: unbound variable
This is on Windows 7 with Docker Quickstart Terminal. I ran it with:
docker run -d openshift/node
I probably need to set HOST_ETC on the command line or elsewhere, but I can find no documentation on using this Docker image, so I would like some guidance on what to fix here, and on any other settings that might be required but are undocumented.
Thanks for any expert advice here.
The official documentation says to start the container this way:
$ sudo docker run -d --name "origin" \
--privileged --pid=host --net=host \
-v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
openshift/origin start