How to install TensorFlow Serving offline

My server is on my company network, which is disconnected from the internet. I can only download the required files on another office computer and upload them to the server.
I cloned the repository:
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
cd tensorflow
./configure
cd ..
Configuration completes fine with the default settings.
But when I try to build the repository with Bazel:
```
[root@centos7 /data/mig_predictor/pre_installation/serving]# bazel build -c opt ... --package_path /data/mig_predictor/pre_installation .........
ERROR: error loading package 'serving/tensorflow/tensorflow/tools/pip_package': Extension file not found. Unable to load package for '//tensorflow/core:platform/default/build_config_root.bzl': BUILD file not found on package path.
INFO: Elapsed time: 0.481s
```
How can I point the build at a local repository of the pre-downloaded packages?
Thank you!
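
A minimal sketch of one way around this, assuming the failure comes from uninitialized git submodules (the same failure mode addressed in the last answer on this page): populate everything on an internet-connected machine, then carry the whole tree over to the offline server.
```
# On the internet-connected machine: clone serving together with its submodules.
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
# Make sure every nested submodule is actually populated before packaging.
git submodule update --init --recursive
cd ..
# Pack the fully populated tree and move the archive to the offline server.
tar czf serving-with-submodules.tar.gz serving

# On the offline server: unpack and build from this local tree, so the
# submodule sources no longer need to be fetched over the network.
tar xzf serving-with-submodules.tar.gz
cd serving
```
If Bazel still tries to fetch external packages declared in the WORKSPACE file, one option (untested here) is to run bazel fetch for the same targets on the connected machine and copy the resulting Bazel cache across as well.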

Related

lpdf luarocks dependency installation external dependency error

I want to use the lpdf dependency with LuaRocks. When I try to install it using the rockspec file, the following error occurs.
lpdf - https://luarocks.org/modules/tomasguisasola/lpdf/20130702.51-1
Error: Failed installing dependency: https://luarocks.org/lpdf-20130702.51-1.src.rock
- Could not find header file for PDFLIB
No file pdflib.h in /usr/local/include
No file pdflib.h in /usr/include
No file pdflib.h in /include
You may have to install PDFLIB in your system and/or pass PDFLIB_DIR or PDFLIB_INCDIR
to the luarocks command.
Example: luarocks install lpdf PDFLIB_DIR=/usr/local
Makefile:110: recipe for target 'install' failed
make: *** [install] Error 1
I want to install it in a Docker container. Can anyone share an idea on this?
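
A minimal sketch of the container build step, assuming a PDFlib distribution has already been downloaded; the archive name and its internal layout are placeholders, and the luarocks invocation is the one suggested by the error message itself:
```
# Inside a Docker RUN step (or an interactive shell in the container):
# unpack the PDFlib distribution and make its header and library visible.
# PDFlib-X.Y.Z-Linux is a placeholder for whichever distribution you have.
tar xzf PDFlib-X.Y.Z-Linux.tar.gz
cp PDFlib-X.Y.Z-Linux/include/pdflib.h /usr/local/include/
cp PDFlib-X.Y.Z-Linux/lib/libpdf.* /usr/local/lib/

# With pdflib.h under /usr/local/include, the example from the error applies:
luarocks install lpdf PDFLIB_DIR=/usr/local
# or, if the header lives somewhere else:
# luarocks install lpdf PDFLIB_INCDIR=/path/to/pdflib/include
```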

jupyterlab-plotly build npm extensions failed to install on Linux

This is to help those who face a similar issue. My builds were failing when trying to install the jupyterlab-plotly extension. My JupyterLab version is 1.2.6. The log was as follows:
[LabBuildApp] Building in /home/***/anaconda3/share/jupyter/lab
[LabBuildApp] Yarn configuration loaded.
[LabBuildApp] Node v6.13.1
[LabBuildApp] Building jupyterlab assets (build:prod:minimize)
[LabBuildApp] > node /home/***/anaconda3/lib/python3.7/site-packages/jupyterlab/staging/yarn.js install --non-interactive
[LabBuildApp] yarn install v1.15.2
[1/5] Validating package.json...
[2/5] Resolving packages...
warning jupyterlab-plotly > plotly.js > regl-splom > left-pad@1.3.0: use String.prototype.padStart()
warning jupyterlab-plotly > plotly.js > point-cluster > bubleify > buble > os-homedir@2.0.0: This is not needed anymore. Use `require('os').homedir()` instead.
[3/5] Fetching packages...
error ws@7.2.1: The engine "node" is incompatible with this module. Expected version ">=8.3.0". Got "6.13.1"
error Found incompatible module
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
[LabBuildApp] npm dependencies failed to install
[LabBuildApp] Traceback (most recent call last):
[LabBuildApp] File "/home/***/anaconda3/lib/python3.7/site-packages/jupyterlab/debuglog.py", line 47, in debug_logging
yield
[LabBuildApp] File "/home/***/anaconda3/lib/python3.7/site-packages/jupyterlab/labapp.py", line 98, in start
command=command, app_options=app_options)
[LabBuildApp] File "/home/***/anaconda3/lib/python3.7/site-packages/jupyterlab/commands.py", line 459, in build
command=command, clean_staging=clean_staging)
[LabBuildApp] File "/home/***/anaconda3/lib/python3.7/site-packages/jupyterlab/commands.py", line 660, in build
raise RuntimeError(msg)
[LabBuildApp] RuntimeError: npm dependencies failed to install
[LabBuildApp] Exiting application: JupyterLab
The solution is in the answers below.
The issue, as indicated by the log file, seemed to be that the node in my anaconda environment was outdated.
$ type node
node is hashed (/home/***/anaconda3/bin/node)
$ node --version
v6.13.1
Looking at the nodejs on my machine:
$ type nodejs
nodejs is hashed (/usr/bin/nodejs)
$ nodejs --version
v10.15.2
To get around this issue I did the following:
Navigated to node's parent directory
Made a backup of node just in case
Made a symlink to nodejs here named as "node"
Ran the build
Enabled the jupyterlab-plotly extension
Restarted the Jupyter Lab server
Commands were as follows:
cd /home/***/anaconda3/bin/
cp node node_bak
rm node
sudo ln -s /usr/bin/nodejs /home/***/anaconda3/bin/node
jupyter lab clean
jupyter lab build
After some time, the build concluded successfully.
I enabled the jupyterlab-plotly extension from the built-in extension manager, and I restarted the server.
My pretty plots started rendering as intended after this. :) Hope this saves you some time.
Note: replace *** with the paths on your machine
I had the same error and simply updated the nodejs package in my environment.
conda update nodejs
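A quick way to confirm the update took effect and then rebuild the assets, reusing the same clean/build commands from the answer above:
```
conda update nodejs
node --version        # should now report a version newer than v6.x
jupyter lab clean
jupyter lab build
```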

How to Clone Repository using GitPython

I am new to Python and Git. I found the GitPython library for running Git commands from Python. I am trying to clone an already created private repository on Google Cloud to a local directory on my Mac. My code is as follows:
repo = Repo.clone_from('https://source.developers.google.com/p/my-project/r/my-project--data', 'my-local-dir', no_checkout=True)
And I am getting following error:
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128) cmdline: git clone --no-checkout -v https://source.developers.google.com/p/my-project/r/my-project-data /my-local-dir stderr: 'Cloning into '/my-local-dir'... fatal: could not read Username for 'https://source.developers.google.com': Device not configured
Please help.
Thanks in advance.
I was trying to do the same thing, then I discovered pygit2, and with it I was able to clone a repository in two lines.
How? First, make sure you have installed pygit2 in your Python environment. I did that using the following command:
pip install pygit2
Here are the two lines I mentioned above:
import pygit2
pygit2.clone_repository("https://github.com/libgit2/pygit2", "pygit2")

Distributed tensorflow fails with "BUILD file not found on package"

When attempting to build the core/distributed_runtime module using:
$ bazel build -c opt //tensorflow/core/distributed_runtime/rpc:grpc_tensorflow_server
We get the following error:
ERROR: error loading package 'tensorflow/core/distributed_runtime/rpc':
Extension file not found. Unable to load package for
'//google/protobuf:protobuf.bzl': BUILD file not found on package path.
INFO: Elapsed time: 0.097s
Are there additional steps required (and not mentioned in the README.md)?
This sounds like a git submodule issue—and it would affect building any part of TensorFlow from source. To recover, run the following command in your git repository:
$ git submodule update --init --recursive
(There are many other ways to do the same thing: see this question for some suggestions.)
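One of those other ways, if you are starting from a fresh clone, is to initialize the submodules at clone time, as in the first question on this page (the URL here assumes the main tensorflow repository):
```
# Clone and initialize all submodules in one step.
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
```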

Hadoop MapReduce 1.0.2 eclipse-plugin build fails...I don't get it

I have tried to build the Hadoop MapReduce eclipse-plugin from source, but get the following error.
SRC_BASE_DIR/hadoop-common/hadoop-mapreduce-project/build/ivy/lib/Hadoop/common
does not exist.
I cloned the Hadoop source from the Apache Git repo and managed to build the actual Hadoop binaries using the following commands:
cd SRC_BASE_DIR/hadoop-common
mvn clean install
This was successful, so next I changed directory:
cd SRC_BASE_DIR/hadoop-common/hadoop-mapreduce-project/src/contrib/eclipse-plugin
I appended the eclipse.home property to the build.properties file...
echo "eclipse.home=/opt/eclipse" >> build.properties
then tried to build the plugin...
ant jar
But I still get the error outlined above.
What am I missing?
OK I was missing a step.
In the folder
SRC_BASE_DIR/hadoop-common/hadoop-mapreduce-project
I ran the following command
mvn -DskipTests install
which was successful. Then, in the folder
SRC_BASE_DIR/hadoop-common/hadoop-mapreduce-project/src/contrib/eclipse-plugin
I ran the command
ant jar
and this time it was successful and created the JAR file hadoop-1.0.2-eclipse-plugin.jar.
Now, I just hope that the plugin works!
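For reference, the full working sequence condensed into one block, using the same paths as above:
```
# build.properties already contains eclipse.home=/opt/eclipse (set in the question above).
cd SRC_BASE_DIR/hadoop-common/hadoop-mapreduce-project
mvn -DskipTests install
cd src/contrib/eclipse-plugin
ant jar
```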