Errno 13 with the TensorFlow Object Detection API

I have been trying to find a solution for the errno 13 permission error. I have already tried adding the path in Windows and changing the permissions on the file in Windows, but nothing works. Thanks for any help in advance!
import os
import wget

if os.name == 'nt':
    url = "https://github.com/protocolbuffers/protobuf/releases/download/v3.15.6/protoc-3.15.6-win64.zip"
    wget.download(url)
    # move the archive into PROTOC_PATH, unzip it, and put its bin/ folder on PATH
    !move protoc-3.15.6-win64.zip {paths['PROTOC_PATH']}
    !cd {paths['PROTOC_PATH']} && tar -xf protoc-3.15.6-win64.zip
    os.environ['PATH'] += os.pathsep + os.path.abspath(os.path.join(paths['PROTOC_PATH'], 'bin'))
    # compile the protos and install the object_detection package
    !cd Tensorflow/models/research && protoc object_detection/protos/*.proto --python_out=. && copy object_detection\\packages\\tf2\\setup.py setup.py && python setup.py build && python setup.py install
    !cd Tensorflow/models/research/slim && pip install -e .
The cell runs fine until the line that runs protoc and setup.py install (the second-to-last command), which gives me this error:
errno13
Just to be clear, I am the administrator and have updated the PATH in the system environment to point at the target location, but it still gives me this error.
If you need more information: I am following the guide at https://www.youtube.com/watch?v=yqkISICHH-U
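A small diagnostic sketch, in case it helps (this is only an assumption about where the errno 13 comes from: either the unzipped protoc binary or the protected site-packages directory that python setup.py install writes into; it reuses the paths dictionary from the cell above):
import os, sysconfig

# errno 13 is "permission denied", so check the two places the failing line
# touches: the protoc binary that was just unzipped, and the site-packages
# directory that "python setup.py install" writes into.
protoc = os.path.join(paths['PROTOC_PATH'], 'bin', 'protoc.exe')
print(protoc, 'exists:', os.path.exists(protoc), 'executable:', os.access(protoc, os.X_OK))

site_packages = sysconfig.get_paths()['purelib']
print(site_packages, 'writable:', os.access(site_packages, os.W_OK))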

Related

Problems running ImageDataBunch in Deepnote

I'm having trouble running this line of code in Deepnote; does anyone know why?
data = ImageDataBunch.from_folder(path, train="train", valid="test", ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
The error says:
NameError: name 'ImageDataBunch' is not defined
And I had previously imported the fastai library, so I don't get it!
The FastAI setup in Deepnote is not that straightforward. It's best to use a custom environment where you set stuff up in a Dockerfile and everything works afterwards in the notebook. I am not sure if the ImageDataBunch or whatever you're trying to do works the same way in FastAI v1 and v2, but here are the details for v1.
This is a Dockerfile which sets up the FastAI environment via conda:
# This is Dockerfile
FROM deepnote/python:3.9
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
RUN bash ~/miniconda.sh -b -p $HOME/miniconda
ENV PATH $HOME/miniconda/bin:$PATH
ENV PYTHONPATH $HOME/miniconda
RUN $HOME/miniconda/bin/conda install python=3.9 ipykernel -y
RUN $HOME/miniconda/bin/conda install -c fastai -c pytorch fastai -y
RUN $HOME/miniconda/bin/python -m ipykernel install --user --name=conda
ENV DEFAULT_KERNEL_NAME "conda"
After that, you can test the fastai imports in the notebook:
import fastai
from fastai.vision import *
print(fastai.__version__)
ImageDataBunch
And if you download and unpack this sample MNIST dataset, you should be able to load the data like you suggested:
data = ImageDataBunch.from_folder(path, train="train", valid="test", ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
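As a quick sanity check (a sketch assuming the fastai v1 API used above and the data object created by the previous line), you can list the detected classes and preview a batch:
print(data.classes)                      # the class folders found under train/
data.show_batch(rows=3, figsize=(6, 6))  # renders a grid of sample images in the notebook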
Feel free to check out or clone my Deepnote project to continue working on this.

Installing tensorboard built from source

This is about a TensorBoard that is built from source, not about the pip-installed one.
I could successfully build it.
$ git clone https://github.com/tensorflow/tensorboard.git
$ cd tensorboard/
$ bazel build //tensorboard
tensorflow/tensorboard$ bazel build //tensorboard
Starting local Bazel server and connecting to it...
......................................
: (log messages here)
Target //tensorboard:tensorboard up-to-date:
bazel-bin/tensorboard/tensorboard
INFO: Elapsed time: 326.553s, Critical Path: 187.92s
INFO: 619 processes: 456 linux-sandbox, 12 local, 151 worker.
INFO: Build completed successfully, 1268 total actions
Then, yes, I can run it as documented in tensorboard/README.md, and it works.
$ ./bazel-bin/tensorboard/tensorboard --logdir path/to/logs
The problem is, I'd like to run it as if installed via pip like this:
$ tensorboard --logdir path/to/logs
But as far as I can tell, no script is provided to create a .whl file so that we can pip-install it locally, unlike tensorflow, which provides one like this:
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.8.0-py2-none-any.whl
So... can anybody show me how to do that? Writing a packaging script would solve this, but one should already exist somewhere, since tensorboard is distributed via pip anyway. :)
My workaround so far is not clean enough:
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard ~/bin
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard.runfiles ~/bin
I appreciate your suggestions, thanks!
Update July-21:
Thanks to W JC, I found that the instructions are already there in tensorboard/pip_package/BUILD.
# rm -rf /tmp/tensorboard
# bazel run //tensorboard/pip_package:build_pip_package
# pip install -U /tmp/tensorboard/*py2*.pip
Unfortunately, it shows an error in my environment; I guess it's a local issue, maybe because I'm using Anaconda.
But basically the problem is resolved: it should work as long as it is run in a supported environment.
It seems there is a script in /tensorboard/pip_package that tries to build the wheel.
bazel run //tensorboard/pip_package:build_pip_package ./ did generate the wheel, but in the folder that bazel-bin points to. In my case, it was generated at ~/.cache/bazel/_bazel_peijia/b64ba42719633ff75eec6880decefcd3/execroot/org_tensorflow_tensorboard/bazel-out/k8-fastbuild/bin/tensorboard/pip_package/build_pip_package.runfiles/org_tensorflow_tensorboard/tensorboard-2.10.0a0-py3-none-any.whl
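For what it's worth, here is a small Python sketch that locates the wheel through the bazel-bin symlink and installs it into the current environment. It assumes the output layout described above (run from the tensorboard checkout, after the bazel run command); the glob pattern is my own guess at the path, not something the project documents.
import glob, subprocess, sys

# build_pip_package drops the wheel inside its runfiles tree, which is
# reachable through the bazel-bin symlink in the repository root.
wheels = glob.glob('bazel-bin/tensorboard/pip_package/**/tensorboard-*.whl', recursive=True)
if wheels:
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-U', wheels[0]])
else:
    print('no wheel found; run "bazel run //tensorboard/pip_package:build_pip_package" first')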

Installation of SQL Relay fails

I just created a docker container and tried to install SQL Relay inside it.
I've checked the prerequisites here and followed the installation documents here.
However, at the end of make install of sqlrelay, I saw an error like this:
update-rc.d: /etc/init.d/sqlrelay: file does not exist
update-rc.d: /etc/init.d/sqlrcachemanager: file does not exist
make[1]: *** [install] Error 1
make[1]: Leaving directory `/sqlrelay-0.66.0/init'
make: *** [install-init] Error 2
What might be wrong with my installation?
Here's the docker file I used to start my installation:
FROM ubuntu:trusty
RUN apt-get update && \
apt-get install libxml2-dev libpcre3 libpcre3-dev libmysqld-dev -y
RUN apt-get install mysql-server libmysqlclient-dev -y
# sql relay prerequisites
RUN apt-get install g++ make perl php5-dev python-dev ruby-dev \
tcl-dev openjdk-7-jdk erlang-dev nodejs-dev node-gyp mono-devel \
libmariadbclient-dev libpq-dev firebird-dev libfbclient2 libsqlite3-dev \
unixodbc-dev freetds-dev mdbtools-dev -y
COPY rudiments-0.56.0.tar.gz /
COPY sqlrelay-0.66.0.tar.gz /
EXPOSE 80
Here are the outputs of ./configure, make, and make install inside the sqlrelay-0.66.0 folder:
configure_log
make_log
make_install_log
If you need more information about my installation process, just let me know and I can provide it.
I think you should use ADD instead of COPY for lines such as
COPY rudiments-0.56.0.tar.gz /
Your COPY just copies the .tar.gz files but does not unpack them, as ADD would:
If the <src> parameter of ADD is an archive in a recognised compression format, it will be unpacked
This is extracted from
What is the difference between the `COPY` and `ADD` commands in a Dockerfile?
I have recently hit the same issue. What I found was that the init Makefile was incorrectly detecting the use of systemctl on Ubuntu Trusty and putting the scripts there; later on, the install would try to find the scripts in init.d and fail.
The solution is to edit the Makefile: sqlrelay-X.X.X/init/Makefile
Replace:
install:
if ( test -d "/lib/systemd/system" ); \
With:
install:
if ( test -d "/lib/systemd/system_x" ); \
Make a similar change to the uninstall target later in the Makefile, and it will then install correctly on Ubuntu.
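If you prefer to script that edit, here is a hypothetical little patcher equivalent to the manual change described above; it disables the systemd detection in both the install and uninstall targets (the path is an assumption based on the version used in this question, so adjust it to your sqlrelay version):
from pathlib import Path

# Point the systemd test at a directory that does not exist, so the Makefile
# falls back to installing the init.d scripts on Ubuntu Trusty.
makefile = Path('sqlrelay-0.66.0/init/Makefile')
text = makefile.read_text()
makefile.write_text(text.replace('test -d "/lib/systemd/system"',
                                 'test -d "/lib/systemd/system_x"'))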

Installing TensorFlow on the IBM Power8

I have access to a large IBM Power8 machine, and would like to install TensorFlow on it. Naturally, I tried the quick pip install, but it failed:
sudo pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
Storing debug log for failure in /home/pv/.pip/pip.log
Unfortunately, pip.log contains little useful info.
/usr/bin/pip run on Sat Feb 6 17:29:34 2016
tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
Exception information:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
InstallRequirement.from_line(name, None))
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 168, in from_line
raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename)
UnsupportedWheel: tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
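The reason pip rejects the wheel is that the linux_x86_64 platform tag in the filename is not among the tags a ppc64le interpreter supports. A quick way to see the tags your interpreter does accept (a sketch using the standalone packaging library, which you may need to pip-install; older pips exposed similar information through their pep425tags module):
from packaging.tags import sys_tags

# Print the first few tags this interpreter supports; on a Power8 box they
# end in linux_ppc64le rather than linux_x86_64, hence the rejection.
for tag in list(sys_tags())[:5]:
    print(tag)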
The next thing I tried was to build TensorFlow from source, to no avail: all my attempts ended with a "cannot execute binary file: Exec format error" message, e.g.:
/usr/local/bin/bazel: line 86: /usr/local/lib/bazel/bin/bazel-real: cannot execute binary file: Exec format error
So then I tried to compile Bazel from source, which also resulted in a similar hard error.
me@machine:~/bazel-0.1.5$ ./compile.sh
INFO: You can skip this first step by providing a path to the bazel binary as second argument:
INFO: ./compile.sh compile /path/to/bazel
🍃 Building Bazel from scratch.
Compiling Java stubs for protocol buffers...
third_party/protobuf/protoc-linux-x86_32.exe -Isrc/main/protobuf/ --java_out=/tmp/bazel.T9C83cNa/src src/main/protobuf/android_studio_ide_info.proto
scripts/bootstrap/buildenv.sh: line 63: third_party/protobuf/protoc-linux-x86_32.exe: cannot execute binary file: Exec format error
pv@sardonis:~/bazel-0.1.5$ ^C
However, I found this link, http://www.cnblogs.com/rodenpark/p/5007744.html, which explains how to build the Protobuf compiler from source on the Power8 machine. This worked, and after the modifications described in his other post, http://www.cnblogs.com/rodenpark/p/5007846.html, I managed to at least get the compilation process started. But now it crashes with a ton of errors, each of which seems minor on its own, but the sheer number of them makes it look hopeless; I posted them at http://pastebin.com/KjkseaGx for reference.
So... I'm running out of inspiration. What can I do to make TensorFlow work on the Power8 machine?
Install bazel 0.2.0-ppc
tf@ubuntu16:~$ git clone https://github.com/ibmsoe/bazel
tf@ubuntu16:~/bazel$ git checkout v0.2.0-ppc
tf@ubuntu16:~/bazel$ ./compile.sh
Install tensorflow
tf@ubuntu16:~$ git clone --recurse-submodules https://github.com/tensorflow/tensorflow
tf@ubuntu16:~/tensorflow$ git checkout v0.10.0rc0
tf@ubuntu16:~/tensorflow$ git commit -m"v0.10.0rc0"
tf@ubuntu16:~/tensorflow$ git cherry-pick ce70f6cf842a46296119337247c24d307e279fa0
tf@ubuntu16:~/tensorflow$ git cherry-pick f1acb3bd828a73b15670fc8019f06a5cd51bd564
tf@ubuntu16:~/tensorflow$ git cherry-pick 9b6215a691a2eebaadb8253bd0cf706f2309a0b8
tf@ubuntu16:~/tensorflow$ ./configure
tf@ubuntu16:~/tensorflow$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
Here you'll encounter an error, something like this
ERROR: /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/external/farmhash_archive/BUILD:5:1: Executing genrule @farmhash_archive//:configure failed: bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
/home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260 /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow
/tmp/tmp.XdCPQefJyZ /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260 /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow
You'll have to edit config.guess as below to insert a stanza for ppc64le
tf@ubuntu16:~/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260$ vi config.guess
*:BSD/OS:*:*)
echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE}
exit ;;
+ ppc64le:Linux:*:*)
+ echo powerpc64le-unknown-linux-gnu
+ exit ;;
*:FreeBSD:*:*)
case ${UNAME_MACHINE} in
tf@ubuntu16:~/tensorflow$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
tf@ubuntu16:~/tensorflow$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
tf@ubuntu16:~/tensorflow$ sudo pip install /tmp/tensorflow_pkg/tensorflow*.whl
tf@ubuntu16:~/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfi
tf@ubuntu16:~/tensorflow$ mkdir _python_build
tf@ubuntu16:~/tensorflow$ cd _python_build
tf@ubuntu16:~/tensorflow/_python_build$ ln -s ~/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/tensorflow/* .
tf@ubuntu16:~/tensorflow/_python_build$ ln -s ~/tensorflow/tensorflow/tools/pip_package/* .
tf@ubuntu16:~/tensorflow/_python_build$ python setup.py develop
Using miniconda:
Installing miniconda:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-ppc64le.sh -O miniconda.sh
bash miniconda.sh
Accept the license terms and allow conda to be added to your PATH
rm miniconda.sh
echo export IBM_POWERAI_LICENSE_ACCEPT=yes >> ~/.bashrc
source ~/.bashrc
This should add (base) to the terminal prompt. Add the correct channel as first priority:
conda config --add default_channels https://repo.anaconda.com/pkgs/main
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/
Create an environment (it is good practice not to install packages into base):
conda create -n ai python=3.7
conda activate ai
conda install --strict-channel-priority tensorflow-gpu
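Once the conda install finishes, a minimal sanity check (a sketch assuming a TF 2.x build from the IBM channel; on an older 1.x build use tf.test.is_gpu_available() instead of list_physical_devices):
import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))  # should list the machine's GPUs, if any are visible to CUDA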
For more information on miniconda on IBM Power 8 and Anaconda: IBM Source & Anaconda Source

Can I remove the directory after git clone and make install?

I wrote myself a little script to install OpenCV under Ubuntu 14.04. Can I remove the 3party directory after make install has sorted the libs into the system directories, or are there dependencies? (Remove not only MYBUILD but the complete 3party directory.)
echo "\nInstall OpenCV?...<any key>\n"
read inp1; # $inp1
mkdir 3party;
cd 3party;
git clone https://github.com/Itseez/opencv.git
cd opencv;
mkdir MYBUILD;
cd MYBUILD;
#sudo mkdir -p /usr/local/lib/opencv;
cmake -L -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local .. ;
echo"check if path is ok?...<any key> or abort";
read inp1; # $inp1
make;
#sudo mkdir -p /usr/local/lib/opencv;
make install;
cd ../../..;
chmod -R 777 3party;
echo "\nDone.\nPlease exit...<any key>";
EDIT: I tagged it cmake because the configuration step is performed with this build tool; the tutorial on the OpenCV website also states it. Please correct me if I'm wrong.
Building OpenCV from Source Using CMake, Using the Command Line
Normally, after installation of any package, its source and binary directories can be safely removed. OpenCV follows this convention too.
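One way to convince yourself before deleting the 3party tree is to check that what you use at runtime resolves to the installed copy under /usr/local rather than to the build directory (a sketch that assumes the Python bindings were enabled during the cmake step; for a C++-only build you would instead check the paths reported by pkg-config opencv --libs):
import cv2

# If this prints a path under /usr/local, the installed copy is being used
# and the 3party build tree is not needed at runtime.
print(cv2.__version__, cv2.__file__)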