Installing tensorboard built from source - tensorflow

This is about TensorBoard built from source, not the pip-installed one.
I was able to build it successfully:
$ git clone https://github.com/tensorflow/tensorboard.git
$ cd tensorboard/
$ bazel build //tensorboard
tensorflow/tensorboard$ bazel build //tensorboard
Starting local Bazel server and connecting to it...
......................................
: (log messages here)
Target //tensorboard:tensorboard up-to-date:
bazel-bin/tensorboard/tensorboard
INFO: Elapsed time: 326.553s, Critical Path: 187.92s
INFO: 619 processes: 456 linux-sandbox, 12 local, 151 worker.
INFO: Build completed successfully, 1268 total actions
I can then run it as documented in tensorboard/README.md, and it works:
$ ./bazel-bin/tensorboard/tensorboard --logdir path/to/logs
The problem is, I'd like to run it as if it had been installed via pip, like this:
$ tensorboard --logdir path/to/logs
But as far as I can tell, no script is provided to create a .whl file so that we can pip-install it locally, unlike TensorFlow, which provides one like this:
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.8.0-py2-none-any.whl
So... can anybody show how to do that? Writing a packaging script would solve this, but one should already exist somewhere, since TensorBoard is distributed via pip anyway. :)
My workaround so far is not clean enough:
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard ~/bin
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard.runfiles ~/bin
I appreciate your suggestions, thanks!
Update July-21:
Thanks to W JC, I found the instructions are already there in tensorboard/pip_package/BUILD.
# rm -rf /tmp/tensorboard
# bazel run //tensorboard/pip_package:build_pip_package
# pip install -U /tmp/tensorboard/*py2*.pip
Unfortunately it throws an error in my environment, which I guess is a local issue, maybe because I'm using Anaconda. But the problem is basically resolved: it should work as long as it is run in a supported environment.

It seems there is a script in tensorboard/pip_package that tries to build wheels.

bazel run //tensorboard/pip_package:build_pip_package ./ did generate the wheel, but in the folder that bazel-bin points to. In my case, it was generated at ~/.cache/bazel/_bazel_peijia/b64ba42719633ff75eec6880decefcd3/execroot/org_tensorflow_tensorboard/bazel-out/k8-fastbuild/bin/tensorboard/pip_package/build_pip_package.runfiles/org_tensorflow_tensorboard/tensorboard-2.10.0a0-py3-none-any.whl
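If you just want to pip-install whatever came out of that build without hard-coding the cache path, a small sketch along these lines works (assuming the wheel keeps the tensorboard-*.whl naming shown above and that bazel-bin points at the current build):
$ # follow bazel's symlinks into the runfiles tree and grab the first matching wheel
$ WHEEL=$(find -L bazel-bin/tensorboard/pip_package -name 'tensorboard-*.whl' | head -n 1)
$ pip install -U "$WHEEL"
After that, tensorboard --logdir path/to/logs resolves from the environment's bin directory, which is what the question was asking for.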

Related

Tensorflow Serving Compiling Failure For CPU AVX AVX2

I used the method in the official TFX documentation to compile the TF Serving devel image in a Dockerfile. The OS is macOS, with an Intel CPU.
Here is the docker build code for it:
#!/bin/bash
USER=$1
TAG=$2
TF_SERVING_VERSION_GIT_BRANCH="2.4.1"
git clone --branch="${TF_SERVING_VERSION_GIT_BRANCH}" https://github.com/tensorflow/serving
TF_SERVING_BUILD_OPTIONS="--copt=-mavx --local_ram_resources=4096"
cd serving && \
docker build --pull -t $USER/tensorflow-serving-devel:$TAG \
--build-arg TF_SERVING_VERSION_GIT_BRANCH="${TF_SERVING_VERSION_GIT_BRANCH}" \
--build-arg TF_SERVING_BUILD_OPTIONS="${TF_SERVING_BUILD_OPTIONS}" \
-f tensorflow_serving/tools/docker/Dockerfile.devel .
Then I run the shell script (it takes more than 3 hours) and get the following failure:
Actually I cannot see the details, because the log output from Docker is clipped by the builder.
Has anyone met a similar problem and can help with this topic?
Thanks a lot in advance!
These instruction sets are not available on all machines, especially with older processors.
If you'd like to apply generally recommended optimizations, including utilizing platform-specific instruction sets for your processor, you can add --config=nativeopt to Bazel build commands when building TensorFlow Serving.
tools/run_in_docker.sh bazel build --config=nativeopt tensorflow_serving/...
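If you are using the build script from the question, one way to pick this up (a sketch; it reuses the question's variables and assumes Dockerfile.devel keeps passing TF_SERVING_BUILD_OPTIONS through to Bazel, as it does in that script) is to drop the hard-coded -mavx flag and request the recommended optimizations instead:
# build for the host CPU instead of forcing AVX
TF_SERVING_BUILD_OPTIONS="--config=nativeopt --local_ram_resources=4096"
docker build --pull -t $USER/tensorflow-serving-devel:$TAG \
  --build-arg TF_SERVING_VERSION_GIT_BRANCH="${TF_SERVING_VERSION_GIT_BRANCH}" \
  --build-arg TF_SERVING_BUILD_OPTIONS="${TF_SERVING_BUILD_OPTIONS}" \
  -f tensorflow_serving/tools/docker/Dockerfile.devel .
Keep in mind that --config=nativeopt targets the CPU of the machine doing the build, so the resulting image is only guaranteed to run on that machine or one with the same instruction sets.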

Problems running ImageDataBunch in Deepnote

I'm having trouble running this line of code in Deepnote; does anyone know why?
data = ImageDataBunch.from_folder(path, train="train", valid ="test",ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
The error says:
NameError: name 'ImageDataBunch' is not defined
And I have previously imported the fastai library, so I don't get it!
The FastAI setup in Deepnote is not that straightforward. It's best to use a custom environment where you set things up in a Dockerfile, and everything works afterwards in the notebook. I am not sure whether ImageDataBunch works the same way in FastAI v1 and v2, but here are the details for v1.
This is a Dockerfile which sets up the FastAI environment via conda:
# This is Dockerfile
FROM deepnote/python:3.9
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
RUN bash ~/miniconda.sh -b -p $HOME/miniconda
ENV PATH $HOME/miniconda/bin:$PATH
ENV PYTHONPATH $HOME/miniconda
RUN $HOME/miniconda/bin/conda install python=3.9 ipykernel -y
RUN $HOME/miniconda/bin/conda install -c fastai -c pytorch fastai -y
RUN $HOME/miniconda/bin/python -m ipykernel install --user --name=conda
ENV DEFAULT_KERNEL_NAME "conda"
After that, you can test the fastai imports in the notebook:
import fastai
from fastai.vision import *
print(fastai.__version__)
ImageDataBunch
And if you download and unpack this sample MNIST dataset, you should be able to load the data like you suggested:
data = ImageDataBunch.from_folder(path, train="train", valid ="test",ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
Feel free to check out or clone my Deepnote project to continue working on this.

Multiple issues trying to install PyBOSSA

I am trying to set up PyBOSSA on an AWS EC2 instance running Ubuntu 18.04 LTS. I am following the official instructions and have encountered three errors so far.
sudo apt-get install -y git postgresql postgresql-all postgresql-server-dev-all libpq-dev python-psycopg2 libsasl2-dev libldap2-dev libssl-dev python-virtualenv python-dev build-essential libjpeg-dev libssl-dev libffi-dev dbus libdbus-1-dev libdbus-glib-1-dev libldap2-dev libsasl2-dev python-pip python3-pip redis-server
cd ~
git clone --recursive https://github.com/Scifabric/pybossa
cd pybossa
virtualenv -p python3 env (I'm using Python3 explicitly as my system also has Python 2.7 installed).
source env/bin/activate
pip install -U pip
pip install -r ~/pybossa/requirements.txt
At this point, I start getting error messages... I have copied the stdout and stderr into a file, which I have uploaded here.
I'm not sure if the errors there are what caused my later errors, but I pushed on through the instructions anyway in the hope it would work...
cp settings_local.py.tmpl settings_local.py
cp alembic.ini.template alembic.ini
redis-server contrib/sentinel.conf --sentinel
I noted that the Redis server version was 4.0.9 (the instructions say it needs to be v2.6 or greater).
The output from starting the Redis server was as follows:
30284:X 30 Mar 03:09:22.004 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
30284:X 30 Mar 03:09:22.004 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=30284, just started
30284:X 30 Mar 03:09:22.004 # Configuration loaded
...I gather that's ok...
rqscheduler --host 127.0.0.1
This command wasn't installed on my system. I tried to use apt to install it, but there was nothing there. I also tried apt install rq rqscheduler rq-scheduler - nothing found. I then googled, found the website for rq-scheduler, and found that I could install it by running pip install rq-scheduler.
That installed correctly. Nonetheless, running the command rqscheduler --host 127.0.0.1 in the terminal still failed: rqscheduler: command not found.
Knowing that it was a Python package, I wondered if maybe I needed to prepend python3 onto the start of the command: python3 rqscheduler --host 127.0.0.1. Response: python3: can't open file 'rqscheduler': [Errno 2] No such file or directory.
I also tried pip3 install rq-scheduler (which installed fine) and then running the command, but encountered the same error.
I would appreciate knowing how to get that running, but for the purpose of this test I skipped setting up Redis and the scheduler, and continued with the PyBOSSA instructions:
sudo su postgres
createuser -d -P pybossa
(Password set)
createdb pybossa -O pybossa
exit
python3 cli.py db_create
...and then I got this error:
File "cli.py", line 162
'''SELECT id, created FROM task_run WHERE created LIKE ('\x%')''')
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 54-55: truncated \xXX escape
I instead tried python cli.py db_create, just in case it'd work, and got a different error:
python cli.py db_create
ValueError: invalid \x escape
So I'm seeing three separate issues:
Installing the PyBOSSA-required Python packages.
The issue with the rqscheduler command.
The error when starting the PyBOSSA server.
What do these errors mean?
1) For the installation, try this:
virtualenv env
source env/bin/activate
sudo apt install python3-pip
pip3 install -r requirements.txt
This ended with no errors for me.
2) Try:
pip install rq-scheduler==0.9.1
or
pip3 install rq-scheduler==0.9.1
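The rqscheduler: command not found part is mostly a PATH question: pip puts the console script into the bin/ directory of whichever Python environment ran the install. A sketch of the idea, reusing the virtualenv created above:
source env/bin/activate
pip install rq-scheduler==0.9.1
which rqscheduler            # should now point at .../env/bin/rqscheduler
rqscheduler --host 127.0.0.1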
3) The \ character needs to be escaped (as \\) in Python.
So you can alter line 162 of cli.py (using a text editor) from:
'''SELECT id, created FROM task_run WHERE created LIKE ('\x%')''')
To:
'''SELECT id, created FROM task_run WHERE created LIKE ('\\x%')''')
But it would be better for this to be fixed by the devs on GitHub...
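A quick way to see the escaping rule in action from a shell (just an illustration):
$ python3 -c 'print("\x%")'
  ...
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: truncated \xXX escape
$ python3 -c 'print("\\x%")'
\x%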
CONCLUSION
According to the official documentation,
PYBOSSA for python 3
We've finally migrated PYBOSSA to python 3. We're not going to merge into master until we test it in production a bit more, so please, help us by testing it. All you have to do is basically, check out the python3 branch (migrate-python3) and run it. Then, any bug, issue you find, you just report it and we will be happy to help you.
The PYBOSSA Python 3 version is freshly migrated, so it is not yet very stable... I expect it would be better to use the PYBOSSA Python 2.7 branch and follow the documentation exactly.
And according to the official GitHub account, they offer paid support (?...)
Get professional support
You can hire us to help you with your PYBOSSA project or server (specially for python 2.7). Go to our website, and contact us.
The issue has now been fixed for the master branch (https://github.com/Scifabric/pybossa/pull/1986). You can fetch the new code and use it.

Can't install nautilus-dropbox on Centos 8

I am trying to install Dropbox on CentOS 8, but the terminal gives strange errors. I tried different commands, same error.
First I downloaded the *.rpm file from the Dropbox website, and I am currently trying to install it.
Commands I tried:
rpm -ivh nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
yum localinstall nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
Error:
Last metadata expiration check: 0:18:27 ago on Thu 12 Mar 2020 03:46:17 PM EET
Error:
Problem: conflicting requests
nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@localhost Downloads]
I also tried --skip-broken and --nobest - but no luck.
I also tried sudo yum install libgnome, but it gives an error:
Last metadata expiration check: 9:51:39 ago on Thu 12 Mar 2020 02:42:06 PM UTC.
No match for argument: libgnome
Error: Unable to find a match: libgnome
I have:
[adminuser@localhost ~]$ cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
I tried to google this error, but no luck. Could you please give me a hint on how I could overcome this?
Thank you
This is a bug in packaging. Contact Dropbox support and report it as a bug.
Technical details (just in case you are a Dropbox employee):
When building an RPM, any macro you use gets expanded. Try it yourself:
$ rpm --eval '%{_bindir}'
/usr/bin
However, when the macro is not defined, you get the original text back:
$ rpm --eval '%{some_bullshit}'
%{some_bullshit}
So the macro gnome_version should likely contain some version, but this macro was not defined.
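You can see the unexpanded macro directly in the package's dependency list (a quick check against the rpm file downloaded in the question):
$ rpm -qp --requires nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm | grep libgnome
libgnome >= %{gnome_version}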
nothing provides libgnome
"libgnome" is about libgnome-2 → https://linux.dropbox.com/fedora/ → I.e. Fedora only packages. CentOS 8 has no libgnome* available.
https://www.dropbox.com/install-linux → Compile from source → CentOS 8
# dnf install nautilus-devel-3.28.1-10.el8.x86_64 python3-docutils
tar xvf nautilus-dropbox-2020.03.04.tar.bz2
cd nautilus-dropbox-2020.03.04/
./configure && make
# make install
Result : nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm https://drive.google.com/file/d/1AcxlVdbWOzQvcoVOFYCiaVny9MzgC-Ea/view?usp=sharing
# rpm -Uvh nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm : No issues.
First, realize that the command shown on the install page is for the headless installation. It will probably work, but my preference is to use Dropbox with Nautilus integration.
These instructions assume an installation of Dropbox with Nautilus integration.
We need to compile the installer from source.
a. Download the latest package
wget https://linux.dropbox.com/packages/nautilus-dropbox-2020.03.04.tar.bz2
b. Extract tarball
tar xjf ./nautilus-dropbox-2020.03.04.tar.bz2
c. Try to compile
cd nautilus-dropbox-2020.03.04; ./configure;
Then you get an error:
Error:
Problem: conflicting requests
- nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Now we need to install nautilus-devel and python3-docutils
NOTE: You will get configure: error: couldn't find docutils if you forget python3-docutils.
This command will enable the PowerTools repository and install what is needed:
dnf --enablerepo=PowerTools install nautilus-devel python3-docutils
Now you can run ./configure && sudo make install
That's it. Go to the start menu, type "Dropbox", and it will start the installer.
Restore a local backup of Dropbox (optional)
If you have a local backup, turn off the network after you see the Dropbox folder created. Then copy all your files into that folder and turn the network back on after the copy.
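For the copy itself, something like this does the job (a sketch; adjust the source path to wherever your backup of the old Dropbox folder lives):
# network off, ~/Dropbox already created by the installer
rsync -a --progress /path/to/backup/Dropbox/ ~/Dropbox/
# turn the network back on once the copy has finished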
This solution worked for me running CentOS Linux release 8.2.2004 (Core).

installing TensorFlow on the IBM power8

I have access to a large IBM Power8 machine, and would like to install TensorFlow on it. Naturally, I tried the quick pip install, but it failed:
sudo pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
Storing debug log for failure in /home/pv/.pip/pip.log
Unfortunately, pip.log contains little useful info.
/usr/bin/pip run on Sat Feb 6 17:29:34 2016
tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
Exception information:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
InstallRequirement.from_line(name, None))
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 168, in from_line
raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename)
UnsupportedWheel: tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
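In hindsight the wheel name already explains the failure: it targets linux_x86_64, while the Power8 box reports a different architecture, so pip refuses it. A quick check, shown purely as an illustration:
$ uname -m
ppc64le
$ python -c "import platform; print(platform.machine())"
ppc64le
So a prebuilt x86_64 wheel was never going to be accepted here.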
The next thing I tried was to build TensorFlow from source. To no avail: all my attempts ended with some cannot execute binary file: Exec format error message, e.g.:
/usr/local/bin/bazel: line 86: /usr/local/lib/bazel/bin/bazel-real: cannot execute binary file: Exec format error
So then I tried to compile Bazel from source, which also resulted in a similar hard error.
me@machine:~/bazel-0.1.5$ ./compile.sh
INFO: You can skip this first step by providing a path to the bazel binary as second argument:
INFO: ./compile.sh compile /path/to/bazel
🍃 Building Bazel from scratch.
Compiling Java stubs for protocol buffers...
third_party/protobuf/protoc-linux-x86_32.exe -Isrc/main/protobuf/ --java_out=/tmp/bazel.T9C83cNa/src src/main/protobuf/android_studio_ide_info.proto
scripts/bootstrap/buildenv.sh: line 63: third_party/protobuf/protoc-linux-x86_32.exe: cannot execute binary file: Exec format error
pv@sardonis:~/bazel-0.1.5$ ^C
However, I found this link http://www.cnblogs.com/rodenpark/p/5007744.html that explains how to build the Protobuf compiler from source on the Power8 machine. This worked, and after the modifications described in his other post http://www.cnblogs.com/rodenpark/p/5007846.html I managed to at least get the compilation process started. But now it crashes with a ton of errors, each of which seems minor on its own, but the sheer number of them makes it look really hopeless; I posted them at http://pastebin.com/KjkseaGx for reference.
So... I'm running out of inspiration. What can I do to make TensorFlow work on the Power8 machine?
Install bazel 0.2.0-ppc
tf@ubuntu16:~$ git clone https://github.com/ibmsoe/bazel
tf@ubuntu16:~/bazel$ git checkout v0.2.0-ppc
tf@ubuntu16:~/bazel$ ./compile.sh
Install tensorflow
tf@ubuntu16:~$ git clone --recurse-submodules https://github.com/tensorflow/tensorflow
tf@ubuntu16:~/tensorflow$ git checkout v0.10.0rc0
tf@ubuntu16:~/tensorflow$ git commit -m"v0.10.0rc0"
tf@ubuntu16:~/tensorflow$ git cherry-pick ce70f6cf842a46296119337247c24d307e279fa0
tf@ubuntu16:~/tensorflow$ git cherry-pick f1acb3bd828a73b15670fc8019f06a5cd51bd564
tf@ubuntu16:~/tensorflow$ git cherry-pick 9b6215a691a2eebaadb8253bd0cf706f2309a0b8
tf@ubuntu16:~/tensorflow$ ./configure
tf@ubuntu16:~/tensorflow$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
Here you'll encounter an error, something like this:
ERROR: /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/external/farmhash_archive/BUILD:5:1: Executing genrule @farmhash_archive//:configure failed: bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
/home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260 /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow
/tmp/tmp.XdCPQefJyZ /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260 /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow
You'll have to edit config.guess as below to insert a stanza for ppc64le:
tf@ubuntu16:~/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260$ vi config.guess
*:BSD/OS:*:*)
echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE}
exit ;;
+ ppc64le:Linux:*:*)
+ echo powerpc64le-unknown-linux-gnu
+ exit ;;
*:FreeBSD:*:*)
case ${UNAME_MACHINE} in
tf@ubuntu16:~/tensorflow$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
tf@ubuntu16:~/tensorflow$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
tf@ubuntu16:~/tensorflow$ sudo pip install /tmp/tensorflow_pkg/tensorflow*.whl
tf@ubuntu16:~/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfi
tf@ubuntu16:~/tensorflow$ mkdir _python_build
tf@ubuntu16:~/tensorflow$ cd _python_build
tf@ubuntu16:~/tensorflow/_python_build$ ln -s ~/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/tensorflow/* .
tf@ubuntu16:~/tensorflow/_python_build$ ln -s ~/tensorflow/tools/* .
tf@ubuntu16:~/tensorflow/_python_build$ python setup.py develop
Using miniconda:
Installing miniconda:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-ppc64le.sh -O miniconda.sh
bash miniconda.sh
Accept the condition and allow conda to be added to PATH
rm miniconda.sh
echo export IBM_POWERAI_LICENSE_ACCEPT=yes >> ~/.bashrc
source ~/.bashrc
This should add (base) to the terminal prompt. Add the correct channel as first priority:
conda config --add default_channels https://repo.anaconda.com/pkgs/main
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/
Create an environment (it is good practice not to install packages in base):
conda create -n ai python=3.7
conda activate ai
conda install --strict-channel-priority tensorflow-gpu
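Once the install finishes, a quick sanity check from inside the ai environment (just a sketch):
python -c "import tensorflow as tf; print(tf.__version__)"
If that prints a version without errors, the conda-provided TensorFlow build for Power is working.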
For more information on miniconda on IBM Power 8 and Anaconda: IBM Source & Anaconda Source