keras failed to import pydot - tensorflow

I'm trying to run the Pix2Pix tutorial for Tensorflow. I'm using the official docker container for this. This is how I start my container:
docker run --gpus all -it -p 8888:8888 --rm -v $PWD:/tf -w /tmp tensorflow/tensorflow:latest-gpu-py3-jupyter
I'm not able to get past this cell:
generator = Generator()
tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)
# output -> Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work.
I have also tried installing pydot and graphviz with pip and with apt-get. Even with these libraries installed I get the same error.

I had the same problem and followed this link.
In short:
Run these commands at the command prompt:
pip install pydot
pip install graphviz
From the Graphviz website, download and install the Graphviz software.
Note: during installation, check the "add to system path" option so the bin folder is added to your PATH variable; otherwise you have to do it manually. Then restart Windows.
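The same recipe can be adapted to the asker's Linux container: install the Graphviz system package with apt and the Python packages with pip, then restart the notebook kernel, since Keras caches the pydot import when it first loads. A minimal sketch, assuming a Debian-based image such as the official TensorFlow one:
# run inside the container, not on the host:
apt-get update && apt-get install -y graphviz   # the Graphviz binaries that pydot shells out to
pip install pydot graphviz                      # the Python-side packages
python -c "import pydot; pydot.Dot().create()"  # raises if the dot binary is missing
Then restart the kernel and re-run the plot_model cell.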

Related

Jenkins build not running after a successful build the first time

So I am setting up a Jenkins job to run my Pytest project on GitHub; I did the configuration for my job successfully.
This is what my custom Python builder command looks like:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --upgrade pip -r requirements.txt
export CHROMEDRIVER_PATH="/Users/Nprashanth/drivers/chromedriver"
pytest -m "regression" tests/regression/regression_test.py --metadata "Scanner" HG20320005 --metadata "S/W Release" 8.5.1 --metadata "S/W Build" 6.0.1.2212091223 --headless
Please find below my console output, where it gets stuck and does nothing.
The first time I ran this build it went through all my tests successfully; I am having this issue the second time I run the build. Even if I make a new project with a new config I have the same issue.
I also tried deleting the Jenkins virtual environment directory that gets created and running the build again, but I still have the same issue.

Install Numpy Requirement in a Dockerfile. Results in error

I am attempting to install a numpy dependency inside a Docker container (my code heavily uses it). On building the container, the numpy library simply does not install and the build fails. This is on Raspbian Buster/Stretch. It does, however, work when building the container on macOS.
I suspect some kind of Python-related issue, but cannot for the life of me figure out how to make it work.
I should point out that removing pip install numpy from the requirements file and putting it in its own RUN statement in the Dockerfile does not solve the issue.
The Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
The requirements.txt contains all the project requirements, among which is numpy.
Step 6/15 : RUN pip install numpy==1.14.3
---> Running in 266a2132b078
Collecting numpy==1.14.3
Downloading https://files.pythonhosted.org/packages/b0/2b/497c2bb7c660b2606d4a96e2035e92554429e139c6c71cdff67af66b58d2/numpy-1.14.3.zip (4.9MB)
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py): started
Building wheel for numpy (setup.py): still running...
Building wheel for numpy (setup.py): still running...
EDIT:
After the comment by skybunk, the suggestion to head to the official docs, and some more debugging on my part, the solution wound up being pretty simple. Thanks skybunk, to you goes all the glory. Yay.
Solution:
Use Alpine, install the system packages needed to build Python packages, and upgrade pip before doing a pip install of the requirements.
This is my edited Dockerfile - working obviously...
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
    python3 python3-dev gcc \
    gfortran musl-dev \
    libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
To use Numpy on python3 here, we first head over to the official documentation to find what dependencies are required to build Numpy.
Mainly these 5 packages + their dependencies must be installed:
Python3 - 70 MB
Python3-dev - 25 MB
gfortran - 20 MB
gcc - 70 MB
musl-dev - 10 MB (used for tracking unexpected behaviour/debugging)
A POC setup would look something like this:
Dockerfile:
FROM gliderlabs/alpine
ADD repositories.txt /etc/apk/repositories
RUN apk add --no-cache --update \
    python3 python3-dev gcc \
    gfortran musl-dev
ADD requirements-pip.txt .
RUN pip3 install --upgrade pip setuptools && \
    pip3 install -r requirements-pip.txt
ADD . /app
WORKDIR /app
ENV PYTHONPATH=/app/
ENTRYPOINT python3 testscript.py
repositories.txt
http://dl-5.alpinelinux.org/alpine/v3.4/main
requirements-pip.txt
numpy
testscript.py
import numpy as np

def random_array(a, b):
    return np.random.random((a, b))

a = random_array(2, 2)
b = random_array(2, 2)
print(np.dot(a, b))
To run this, clone the alpine repo and build it using "docker build -t gliderlabs/alpine ."
Build and Run your Dockerfile
docker build -t minidocker .
docker run minidocker
The output should be something like this:
[[ 0.03573961 0.45351115]
[ 0.28302967 0.62914049]]
Here's the git link, if you want to test it out
From the error logs, it does not seem that the failure comes from numpy, but you can install numpy before the requirements.txt and verify whether it works.
FROM python:3.6
RUN pip install numpy==1.14.3
Build
docker build -t numpy .
Run and Test
docker run numpy bash -c "echo import numpy as np > test.py ; python test.py"
So you will see no error on the import.
Or you can try numpy as an Alpine package:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
Or, better, post the requirements.txt.
I had a lot of trouble with this issue using FROM python:3.9-buster and pandas.
My requirements.txt had python-dev-tools, numpy, and pandas, along with other packages.
I always got a chain of build errors when attempting to build.
Following hints by Adiii in this thread, I did some debugging and found out that this actually works and builds a perfectly running container:
RUN pip3 install NumPy==1.18.0
RUN pip3 install python-dev-tools
RUN pip3 install pandas
RUN pip3 install -r requirements.txt
So, giving the pip3 install of pandas its own RUN layer solved the problem!
Another method is to install from the 'slim' distribution of Python (based on Debian):
FROM python:slim
RUN pip install numpy
Resulting image size: 123 MB
This results in a smaller image than that of Alpine:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
Resulting image size: 187 MB
Plus it gives better support for other wheel (.whl) libraries, since slim is based on glibc (against which the prebuilt wheels are compiled) while Alpine uses musl (incompatible with those wheels), so on Alpine all packages have to be either added via apk or compiled from source.
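A quick way to see the libc difference for yourself, using the same image tags as above (the exact output varies by Python version):
docker run --rm python:slim python -c "import platform; print(platform.libc_ver())"
# typically reports ('glibc', '2.xx'): prebuilt manylinux wheels install as-is
docker run --rm python:3-alpine3.9 python -c "import platform; print(platform.libc_ver())"
# typically reports no glibc on musl, so pip falls back to compiling from source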

How can we run Robot Framework scripts on a remote Windows server without internet

At my company, I have been asked to configure Robot Framework scripts to run on a remote Windows server which has intranet access but no internet access.
I need information on setting up the configuration and installing all the required libraries and tools, plus the SSH and DB configuration, to run my Robot Framework test cases.
It would be very helpful if I could get some information on this, as I could not find any helpful reference.
Disclaimer - haven't actually done it, so it might fail (or - might work :)
On a machine having internet access, install the same version of python and pip you are going to use on the target machine.
Create a virtual environment, & activate it:
c:\python3\python.exe -m venv robot-venv
robot-venv\scripts\activate
Install all packages you are going to need - I don't know what you're using, but robotframework and robotframework-seleniumlibrary are safe bets:
pip install robotframework
pip install robotframework-seleniumlibrary
# etc, the rest you'll be using
Create a requirements file of what you have installed - this is the crucial step, generating the list of all libraries you'll be using:
pip freeze > requirements.txt
The file will have the packages you've just installed, with their versions; for example:
robotframework==3.1
robotframework-seleniumlibrary==3.2.0
# and the others you installed
So now you need to download these, for transferring to the "offline" machine; the command is:
pip download -r requirements.txt
And now you have the packages as tar.gz files; take them, plus the requirements.txt, and transfer to the target machine (the one with only intranet access).
Create & activate a virtual environment there (the same commands as before). Once done, install the packages from the local copies:
pip install --no-index --find-links C:/the_dir_with_the_files/ -r requirements.txt
It is crucial that the python and pip versions on the two machines are the same.
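A quick way to confirm: run these on both machines and compare the output.
python --version
pip --version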
The simplest way is to DOWNLOAD the SOURCE files from the internet first, then copy these files into your intranet network. I'm also running Robot Framework in my intranet network, in my VM.
Follow these links:
https://github.com/robotframework/robotframework/blob/master/INSTALL.rst#installing-from-source
https://pypi.org/project/robotframework/
By the way, you need to install Python first and set the Python path in your environment variables. The stable Python version for Robot Framework is Python 2.7; as for Robot Framework itself, just use the latest version.
1) First, ensure you have the same version of Python installed on both PCs, with the environment variables set:
PYTHONPATH
C:\Python27\;C:\Python27\Scripts;C:\Python27\Lib\site-packages
PATH
allExistingPathVariables;%PYTHONPATH%;
2) Check that you have a newer version of pip installed if you are using Python 2. Python 3 seems to have everything already. Personally I use:
python -m pip install --upgrade pip-19.1.1-py2.py3-none-any.whl
3) Open a cmd prompt
NB: If your company is anything like mine, you will need to set your proxy each time you open a command prompt, as per steps 4 and 5. NOTE: the CMD prompt does not use the proxy already set in your browser.
4) set http_proxy=http://UserName:Password@proxy.nameOrIP.com.au:8080 (where UserName is your username and Password is your current Windows password)
5) set https_proxy=https://UserName:Password@proxy.nameOrIP.com.au:8080
6) cd C:\Python27\compiledLibraries (this can be any folder you want)
7) Run lib_download.bat to download and update all the libraries, and any internal dependencies they have, from PyPI.org.
8) Copy the whole downloadedLibrariesWithDependencies folder, with the new/updated libraries, to the offline PC.
9) Open a cmd prompt on the offline PC.
10) cd C:\Python27\compiledLibraries (this can be any folder you want)
11) Run the lib_install.bat file.
Then all the libraries you keep adding to the lib_ files get updated.
Contents of the .bat files should be something like:
lib_download.bat
REM This File contains list of all Libraries that are required for Exec Robot Tests
REM Please Update your library with pip install command
mkdir downloadedLibrariesWithDependencies
cd downloadedLibrariesWithDependencies
mkdir robotframework
pip download robotframework -d "robotframework"
mkdir python-dateutil
pip download python-dateutil -d "python-dateutil"
mkdir wheel
pip download wheel -d "wheel"
mkdir pylint
pip download pylint -d "pylint"
mkdir pytest
pip download pylint -d "pytest"
mkdir pywin32
pip download pywin32 -d "pywin32"
mkdir autopep8
pip download autopep8 -d "autopep8"
lib_install.bat
REM This File contains list of all Libraries that are required for Exec Robot Tests
REM Please Update your library with pip install command
cd downloadedLibrariesWithDependencies
cd robotframework
pip install --upgrade robotframework -f ./ --no-index
cd ..\python-dateutil
pip install --upgrade python-dateutil -f ./ --no-index
cd ..\wheel
pip install --upgrade wheel -f ./ --no-index
cd ..\pylint
pip install --upgrade pylint -f ./ --no-index
cd ..\pytest
pip install --upgrade pytest -f ./ --no-index
cd ..\pywin32
pip install --upgrade pywin32 -f ./ --no-index
cd ..\autopep8
pip install --upgrade autopep8 -f ./ --no-index
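After lib_install.bat finishes on the offline PC, a quick sanity check that the core package landed:
python -m robot --version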

Is there a way to use Python 3.5 instead of 3.6?

I need to install a library that is only compatible with Python 3.5. Is there a way to change the Python version in Colaboratory from 3.6 to 3.5?
The only way to vary the Python 3 version is to connect to a local runtime.
You cannot directly change the environment for the notebook.
After hours of exploration, I found a solution:
Initialize an ngrok server in the Colaboratory notebook.
Connect to the ngrok server from a local terminal using SSH (or use any editor which supports SSH connections).
Install the required Python version using the terminal.
Install virtualenv.
Create a virtual environment by specifying the Python version installed.
Activate the environment.
Work in that environment from the terminal directly.
Check out "Free!! GPUs on your local machine", which gives a detailed description of how to follow these steps.
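A rough sketch of the terminal-side steps above (the ngrok host/port are placeholders printed by the notebook, and the python3.5 package name assumes it is available from the VM's repositories):
ssh root@0.tcp.ngrok.io -p 12345       # placeholder host/port from the ngrok output
apt-get install -y python3.5           # assumed package name
pip install virtualenv
virtualenv -p python3.5 py35env        # create the env with the requested version
source py35env/bin/activate
python --version                       # should now report 3.5.x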
There is a way to use any Python version you want, 3.5 or 3.8 in this example, without having to run a kernel locally or go through an ngrok proxy.
Download the Colab notebook and open it in a text editor to change the kernel specification to:
"kernelspec": {
"name": "py38",
"display_name": "Python 3.8"
}
This is the same trick as the one used with JavaScript, Java, and Golang.
Then upload the edited notebook to Google Drive and open it in Google Colab. Colab cannot find the py38 kernel, so it falls back to the normal python3 kernel.
You then need to install Python 3.8, the google-colab package, and the ipykernel under the name you defined above ("py38"):
!wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.2-Linux-x86_64.sh
!chmod +x mini.sh
!bash ./mini.sh -b -f -p /usr/local
!conda install -q -y jupyter
!conda install -q -y google-colab -c conda-forge
!python -m ipykernel install --name "py38" --user
Reload the page, and voilà, you can check that the version is correct:
import sys
print("User Current Version:-", sys.version)
A working example can be found there.

TensorFlow Object Detection installation error tensorflow/models/research/

As the title says I have a problem with installing TensorFlow Object Detection.
My system:
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 17.04
Release: 17.04
Codename: zesty
and architecture:
uname -i
x86_64
These are the steps I took exactly.
First I verified my python installation:
python -V
Python 2.7.13
And my pip installation:
pip -V
pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)
After that I set the URL to the latest TensorFlow version.
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linu/cpu/tensorflow-1.4.0-cp27-none-linux_x86_64.whl
And then I installed tensorflow.
sudo pip install tensorflow
After this I verified the installation:
python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
And got Hello, TensorFlow! as response.
Now comes the trouble...
I tried following this guide:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md
Ran:
sudo apt-get install protobuf-compiler python-pil python-lxml
sudo pip install jupyter
sudo pip install matplotlib
And those commands all executed successfully.
The next step gave me problems, though.
The guide does not say what the directory tensorflow/models/research/ is (whether it's created automatically or should be created by the user, and in that case where?).
So I googled a bit and found this one: https://github.com/tensorflow/models/issues/2253
stating that I should just create it... but doing so made the next command, executed from that newly created directory,
protoc object_detection/protos/*.proto --python_out=.
fail with the error object_detection/protos/*.proto: No such file or directory.
I created the directory in tester@tester-vm:~/Documents$, so the full directory path became tester@tester-vm:~/Documents/tensorflow/models/research$.
I'm guessing that I shouldn't create the directory by myself anyway but would love some tips!
Assuming you checked out the models repo (git clone https://github.com/tensorflow/models.git), the tensorflow/models/research/ directory is the research directory in this repo. Basically, this directory: https://github.com/tensorflow/models/tree/master/research
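Putting the steps together, a minimal sketch (the PYTHONPATH line follows the linked installation guide; the final import assumes the TensorFlow install from earlier):
git clone https://github.com/tensorflow/models.git
cd models/research
protoc object_detection/protos/*.proto --python_out=.   # the .proto files now exist here
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim          # as per the installation guide
python -c "import object_detection"                     # quick sanity check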