I've created a folder "Model_en" and the path to my saved model is:
"--model_base_path=/Users/tarunkumar/Documents/tensor_models/Model_en/1/"
also, my model name is:
"--model_name=Model_en"
After running the command:
"tensorflow_model_server --rest_api_port=8501 --model_name=Model_en --model_base_path=/Users/tarunkumar/Documents/tensor_models/Model_en/1/"
I'm getting this error:
"bash: tensorflow_model_server: command not found"
The installation may not have happened properly. To be safe, first remove any corrupt packages that may have been installed:
sudo apt-get remove tensorflow-model-server
Then add the TensorFlow Serving distribution URI as a package source:
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
You should get OK as output in the terminal.
Install and update TensorFlow ModelServer:
sudo apt-get update && sudo apt-get install tensorflow-model-server
Once installed, you can upgrade to a newer version of tensorflow-model-server with:
sudo apt-get upgrade tensorflow-model-server
The binary can now be invoked using the command tensorflow_model_server.
You may get this output:
Failed to start server. Error: Invalid argument: Both server_options.model_base_path and server_options.model_config_file are empty!
This means the installation was successful, and you can run the command to start the server.
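One more thing worth checking before starting the server: --model_base_path should point to the directory that contains the numbered version folders, not to the version folder itself. A minimal sketch for this question's setup (paths taken from the question; the curl check assumes the server is running locally):
# Point model_base_path at the folder that holds "1/", not at "1/" itself
tensorflow_model_server --rest_api_port=8501 --model_name=Model_en \
  --model_base_path=/Users/tarunkumar/Documents/tensor_models/Model_en
# Check the model status over the REST API (should report state AVAILABLE)
curl http://localhost:8501/v1/models/Model_en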
This may help as well; at least it worked for me. (Note that this installs the Python client API, not the tensorflow_model_server binary itself.)
pip install tensorflow-serving-api
You have to install it first. Here is how it is done on Debian/Ubuntu:
Installation
Add TensorFlow Serving distribution URI as a package source (one-time setup):
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
Install and update TensorFlow ModelServer:
sudo apt-get update && sudo apt-get install tensorflow-model-server
Once installed, the binary can be invoked using the command tensorflow_model_server.
You can upgrade to a newer version of tensorflow-model-server with:
sudo apt-get upgrade tensorflow-model-server
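As a quick sanity check after installation, a minimal sketch (assuming the package installed the binary onto your PATH):
# Confirm the binary is on the PATH
which tensorflow_model_server
# Running it with no arguments should print the "model_base_path ... empty"
# error quoted in the other answer, which confirms the binary itself works
tensorflow_model_server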
Related
I am trying to install TensorFlow Serving on my CentOS 8 machine. Installing with a Docker image is not an option for me on CentOS, so I am trying to install it with pip. These are the commands for installing tensorflow-model-server:
pip3 install tensorflow-serving-api==1.15
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install tensorflow-model-server
The problem is that I need version 1.15.0, and I couldn't find how to modify the links to install the 1.15 version. Any help with modifying the links, or ideas for installing "tensorflow/serving" on CentOS 8, will be appreciated :)
I found the links:
wget 'http://storage.googleapis.com/tensorflow-serving-apt/pool/tensorflow-model-server-1.15.0/t/tensorflow-model-server/tensorflow-model-server_1.15.0_all.deb'
dpkg -i tensorflow-model-server_1.15.0_all.deb
pip3 install tensorflow-serving-api==1.15
With these commands, it works :)
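If dpkg is not available on CentOS, one alternative is to unpack the .deb manually. This is a sketch under assumptions: ar and tar are standard tools, but the exact name of the data archive and the install path inside this particular .deb are assumptions, so inspect the extracted files first:
# A .deb is an ar archive containing control and data tarballs
ar x tensorflow-model-server_1.15.0_all.deb
# The data archive is usually data.tar.xz (or data.tar.gz); check with ls
tar -xf data.tar.xz
# The server binary should then be under ./usr/bin
sudo cp usr/bin/tensorflow_model_server /usr/local/bin/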
I installed NodeJS (via Linux terminal) but it doesn't seem to come bundled with NPM:
:~$ sudo apt-get install -y nodejs
:~$ node -v
v10.23.1
:~$ npm -v
-bash: npm: command not found
I have an Acer Chromebook R 13 with an ARM processor.
Installing NodeJS
sudo apt-get install curl gnupg -y
curl -sL https://deb.nodesource.com/setup_13.x | sudo -E bash -
sudo apt-get install -y nodejs
In my case I didn't have the install script for npm, so I got it externally:
curl -L https://npmjs.org/install.sh | sudo sh
nvm is recommended on the npmjs.com install page, and nvm's installation instructions use an install script.
Open the Terminal app on Chromebook.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install node
npm is now ready.
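In new shells you may need to load nvm before node and npm are on the PATH. A minimal check, assuming the install script added its standard lines to your shell profile:
# Load nvm in the current shell (the installer normally appends this to ~/.bashrc)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
# Verify both binaries
node -v
npm -v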
Since I was trying to answer in the comments, I will attempt to give a complete answer here.
You first need to acquire the correct package for your architecture. As you have noted, you are using the Acer Chromebook R13. This uses the MediaTek MT8173C processor, which implements the ARMv8 instruction set.
Following these commands should get you up and running.
# First, download the proper package for your architecture
mkdir myNodeJS
cd myNodeJS
wget https://nodejs.org/dist/v14.15.5/node-v14.15.5-linux-arm64.tar.xz
tar -xf node-v14.15.5-linux-arm64.tar.xz
cd node-v14.15.5-linux-arm64
# Copy the node binary onto the PATH
cd bin
sudo cp node /usr/local/bin
cd ..
# Run npm's bundled install script (note: relative path inside the extracted tarball)
cd lib/node_modules/npm/scripts
./install.sh
That should be it. If you have problems, you can resort to the first link that I sent for install instructions.
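To confirm the manual install worked, a quick check (assuming /usr/local/bin is on your PATH):
which node     # should print /usr/local/bin/node
node -v        # should print v14.15.5
npm -v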
I'm trying to install "wkhtmltopdf":
sudo apt install ./wkhtmltox_0.12.6-1.bionic_amd64.deb
When I run it, I get this error:
E: Unsupported file ./wkhtmltox_0.12.6-1.bionic_amd64.deb given on commandline
Can anyone show me how to fix this?
Use
sudo dpkg -i wkhtmltox_0.12.6-1.bionic_amd64.deb
sudo apt install -f
The second command installs any dependencies that were missing when you ran the first one.
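Once it installs cleanly, a minimal usage check (the URL and output filename here are just placeholders):
# Render a page to PDF to confirm the install works
wkhtmltopdf https://example.com example.pdf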
Since today, I have been experiencing the following issue with Travis CI: the required package specified in the before_install section of .travis.yml:
before_install:
- sudo apt-get update -qq
- sudo apt-get install -qq rabbitmq-server
cannot be installed, breaking the build.
Response as shown in the Travis console log:
$ sudo apt-get update -qq
W: http://ppa.launchpad.net/couchdb/stable/ubuntu/dists/trusty/Release.gpg: Signature by key 15866BAFD9BCC4F3C1E0DFC7D69548E1C17EAB57 uses weak digest algorithm (SHA1)
$ sudo apt-get install -qq rabbitmq-server
E: Unable to correct problems, you have held broken packages.
The command "sudo apt-get install -qq rabbitmq-server" failed and exited with 100 during .
Your build has been stopped.
How would I overcome this problem? I have tried to replicate the issue locally, but it seems not to be reproducible.
I searched Google with this search string - "failed and exited with 100" apt install - and this is the first hit:
https://github.com/travis-ci/travis-ci/issues/7998
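Independently of that thread, it can help to surface the full apt error, since -qq hides the details. A debugging sketch for before_install (plain apt commands, nothing Travis-specific):
# Drop -qq so apt prints which dependencies are broken or held
sudo apt-get update
apt-cache policy rabbitmq-server   # shows candidate versions and their sources
sudo apt-get install rabbitmq-server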
I am working through the Tensorflow serving_basic example at:
https://tensorflow.github.io/serving/serving_basic
Setup
Following: https://tensorflow.github.io/serving/setup#prerequisites
Within a Docker container based on ubuntu:latest, I have installed:
bazel:
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install bazel
sudo apt-get upgrade bazel
grpcio:
pip install grpcio
all packages:
sudo apt-get update && sudo apt-get install -y build-essential curl libcurl3-dev git libfreetype6-dev libpng12-dev libzmq3-dev pkg-config python-dev python-numpy python-pip software-properties-common swig zip zlib1g-dev
tensorflow serving:
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
cd tensorflow
./configure
cd ..
I've built the source with bazel and all tests ran successfully:
bazel build tensorflow_serving/...
bazel test tensorflow_serving/...
I can successfully export the mnist model with:
bazel-bin/tensorflow_serving/example/mnist_export /tmp/mnist_model
And I can serve the exported model with:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model/
The problem
When I test the server and try to connect a client to the model server with:
bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000
I see this output:
root@dc3ea7993fa9:~/serving# bazel-bin/tensorflow_serving/example/mnist_client --num_tests=2 --server=localhost:9000
Extracting /tmp/train-images-idx3-ubyte.gz
Extracting /tmp/train-labels-idx1-ubyte.gz
Extracting /tmp/t10k-images-idx3-ubyte.gz
Extracting /tmp/t10k-labels-idx1-ubyte.gz
AbortionError(code=StatusCode.NOT_FOUND, details="FeedInputs: unable to find feed output images")
AbortionError(code=StatusCode.NOT_FOUND, details="FeedInputs: unable to find feed output images")
Inference error rate is: 100.0%
The "--use_saved_model" model flag is set to default "true"; use the --use_saved_model=false when starting the server. This should work:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --use_saved_model=false --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model/
I mentioned this on the TensorFlow GitHub, and the solution was to remove the original model that had been created. If you're running into this, run
rm -rf /tmp/mnist_model
and rebuild it.
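For completeness, the rebuild just repeats the export and serve steps from earlier in this question (commands taken from the setup above):
rm -rf /tmp/mnist_model
bazel-bin/tensorflow_serving/example/mnist_export /tmp/mnist_model
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model/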