OSMnx in Google Colab - osmnx

For my purposes I require OSMnx in Google Colab.
Has anyone done this before? I use the following commands:
!wget https://repo.anaconda.com/archive/Anaconda3-2019.07-Linux-x86_64.sh && bash Anaconda3-2019.07-Linux-x86_64.sh -bfp /usr/local
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages')
!conda config --prepend channels conda-forge
The command:
!conda info --envs
shows that the environment is created successfully.
When I run the command:
!conda activate ox
The error is displayed:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
The command
!conda init bash
has no effect.
Thanks for the help

You can use these commands:
!apt-get -qq install -y libspatialindex-dev && pip install -q -U osmnx
import osmnx as ox
ox.config(use_cache=True, log_console=True)
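As a quick smoke test after installation (a minimal sketch; the place name is just an example), you can try downloading a small street network:
import osmnx as ox
G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
print(len(G.nodes), len(G.edges))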

!pip install geopandas==0.10.0
!pip install matplotlib==3.4
!pip install networkx==2.6
!pip install numpy==1.21
!pip install pandas==1.3
!pip install pyproj==3.2
!pip install requests==2.26
!pip install Rtree==0.9
!pip install Shapely==1.7
!pip install osmnx
I installed the respective packages based on the requirements listed at https://github.com/gboeing/osmnx/blob/main/requirements.txt. It has worked in my application so far; hope it works for you too.
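Equivalently, the same pins can go in a single command (just a convenience, not a requirement):
!pip install geopandas==0.10.0 matplotlib==3.4 networkx==2.6 numpy==1.21 pandas==1.3 pyproj==3.2 requests==2.26 Rtree==0.9 Shapely==1.7 osmnx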
Alternatively, similar to another answer, you can use the code below, found in https://stackoverflow.com/a/65378540/18403512:
!apt install libspatialindex-dev
!pip install osmnx

The answer would be similar to running OSMnx in any Docker container or on an external server.
I tried it and almost got there; maybe someone can help make it complete.
So let's start with the basic osmnx installation:
conda config --prepend channels conda-forge
conda create -n ox --strict-channel-priority osmnx
Then, let's look at how this can be done on a remote Docker host, e.g. Travis CI (working sample .travis.yml from one of my repos):
- bash miniconda.sh -b -p $HOME/miniconda
- source "$HOME/miniconda/etc/profile.d/conda.sh"
- hash -r
- conda config --set always_yes yes --set changeps1 no
- conda update -q conda
# Useful for debugging any issues with conda
- conda info -a
- conda config --prepend channels conda-forge
- conda create -n ox --strict-channel-priority osmnx
- conda activate ox
Then we can take a look at how to get conda into Colab, using this snippet:
%%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
which then finally boils down to this almost-working notebook, based on this post.
What is not working is switching between environments: !conda env list returns ox as one of the environments, yet activating it fails:
!conda activate ox
raises:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
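One possible workaround (a sketch, assuming Miniconda was installed to /usr/local as in the snippet above): each ! command spawns a fresh shell, so conda activate cannot persist across cells. Instead, source conda's shell hook and activate inside a single %%bash cell:
%%bash
# activation does not survive separate ! commands,
# so activate and use the env within this one cell
source /usr/local/etc/profile.d/conda.sh
conda activate ox
python -c "import osmnx; print(osmnx.__version__)"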

Related

How can I update Google Colab's Python version?

The current default version of Python running on Google Colab is 3.7, but I need 3.9 for my notebooks to work.
How can I update Google Colab's Python version to 3.9 (or greater)?
In Google Colab you have a Debian-based Linux, and you can do whatever you can do on any Debian system. Upgrading Python is as easy as upgrading it on your own Linux machine.
Detect the current python version in Colab:
!python --version
#Python 3.8.16
Install the new Python version
Let's first install Python 3.9 and make it the default:
#install python 3.9
!sudo apt-get update -y
!sudo apt-get install python3.9
#change alternatives
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 2
#check python version
!python --version
#3.9.16
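At this point the shell and the notebook kernel can disagree (a quick check; the version numbers shown are illustrative):
!python3 --version   # the shell now reports the new 3.9
import sys
print(sys.version)   # the kernel still reports the old 3.8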
Port the Colab kernel to the newly installed Python
As mentioned in the comments, the above commands just add a new Python version to your Google Colab and update the default Python for command-line usage. But the notebook kernel (what sys.version reports) is still running on the previous Python version. The following commands need to be executed as well, to update the kernel.
# install pip for new python
!sudo apt-get install python3.9-distutils
!wget https://bootstrap.pypa.io/get-pip.py
!python get-pip.py
# credit for these last two commands belongs to #Erik
# install colab's dependencies
!python -m pip install ipython ipython_genutils ipykernel jupyter_console prompt_toolkit httplib2 astor
# link to the old google package
!ln -s /usr/local/lib/python3.8/dist-packages/google \
/usr/local/lib/python3.9/dist-packages/google
Now you can restart the runtime and check the sys version. Note that with the new Python version you have to install every package, such as pandas, tensorflow, etc., from scratch.
Also, note that you can see a list of installed Python versions and switch between them at any time with this command
(if nothing changed after installation, use this command to select the Python version manually):
!sudo update-alternatives --config python3
#after running, enter the row number of the python version you want.
It's also possible to update the kernel without going through ngrok or conda, with some creative package installation.
Raha's answer, suggesting making a link between the default google package and the newly installed Python version, is the trick that makes this work, because, at least with Python 3.9, the version of pandas (0.24.0) that the google package requires fails to build.
Here's the code I used to install and switch my Colab kernel to Python 3.9:
#install python 3.9 and dev utils
#you may not need all the dev libraries, but I haven't tested which aren't necessary.
!sudo apt-get update -y
!sudo apt-get install python3.9 python3.9-dev python3.9-distutils libpython3.9-dev
#change alternatives
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 2
#Check that it points at the right location
!python3 --version
# install pip
!curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
!python3 get-pip.py --force-reinstall
#install colab's dependencies
!python3 -m pip install ipython ipython_genutils ipykernel jupyter_console prompt_toolkit httplib2 astor
# link to the old google package
!ln -s /usr/local/lib/python3.8/dist-packages/google \
/usr/local/lib/python3.9/dist-packages/google
# There has got to be a better way to do this...but there's a bad import in some of the colab files
# IPython no longer exposes traitlets like this, it's a separate package now
!sed -i "s/from IPython.utils import traitlets as _traitlets/import traitlets as _traitlets/" /usr/local/lib/python3.9/dist-packages/google/colab/*.py
!sed -i "s/from IPython.utils import traitlets/import traitlets/" /usr/local/lib/python3.9/dist-packages/google/colab/*.py
If Google updates from Python 3.8, you'll have to change the path to the default package.
Then go to the Runtime menu and select Restart runtime. It should reconnect and choose the updated version of Python as the default kernel. You can check that it worked with:
#check python version
import sys
print(sys.version)
!python3 --version
!python --version
To use another Python version in Google Colab, you need to:
1. Install Anaconda.
2. Add a (fake) google.colab library.
3. Start JupyterLab.
4. Access it with ngrok.
# install Anaconda3
!wget -qO ac.sh https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh
!bash ./ac.sh -b
# a fake google.colab library
!ln -s /usr/local/lib/python3.6/dist-packages/google \
/root/anaconda3/lib/python3.8/site-packages/google
# start jupyterlab, which now has Python3 = 3.8
!nohup /root/anaconda3/bin/jupyter-lab --ip=0.0.0.0&
# access through ngrok, click the link
!pip install pyngrok -q
from pyngrok import ngrok
print(ngrok.connect(8888))
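Note that recent ngrok versions require an auth token (this depends on your account setup; the token below is a hypothetical placeholder, obtainable from the ngrok dashboard):
from pyngrok import ngrok
ngrok.set_auth_token("YOUR_NGROK_TOKEN")  # hypothetical placeholder, not a real token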
You can also use:
# Install the python version
!apt-get install python3.9
# Select the version
!python3.9 setup.py
Another way is to use a virtual environment with your desired Python version:
virtualenv env --python=python3.9
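A sketch of how that environment could then be used from Colab cells (assuming python3.9 is already installed; the package at the end is just an example):
!virtualenv env --python=python3.9
!env/bin/python --version
!env/bin/pip install requests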
Update 24.12.2022 - Unfortunately, the method does not work anymore.
This worked for me (copied from GitHub); I successfully installed Python 3.10.
#The code below installs 3.10 (assuming you now have 3.8) and restarts the environment, so you can run your cells.
import sys  # for version check
import os   # for restart routine
if '3.10' in sys.version:
    print('You already have 3.10')
else:
    # install python 3.10 and dev utils
    # you may not need all the dev libraries, but I haven't tested which aren't necessary.
    !sudo apt-get update -y
    !sudo apt-get install python3.10 python3.10-dev python3.10-distutils libpython3.10-dev
    !sudo apt-get install python3.10-venv binfmt-support # recommended in the install logs of the command above
    # change alternatives
    !sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
    !sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 2
    # install pip (download get-pip.py first so the next line can reuse it)
    !curl -sS https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    !python3 get-pip.py --force-reinstall
    # install colab's dependencies
    !python3 -m pip install setuptools ipython ipython_genutils ipykernel jupyter_console prompt_toolkit httplib2 astor
    # minor cleanup
    !sudo apt autoremove
    # link to the old google package
    !ln -s /usr/local/lib/python3.8/dist-packages/google /usr/local/lib/python3.10/dist-packages/google
    # this is just to verify that the 3.10 folder was indeed created
    !ls /usr/local/lib/python3.10/
    # restart the environment so you don't have to do it manually
    os.kill(os.getpid(), 9)
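After the runtime restarts, you can confirm the switch in a fresh cell:
import sys
print(sys.version)  # should now report 3.10.x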
In addition to Kaveh's answer, I added the following code. (This Colab's Python version was 3.8, and I tried to downgrade to Python 3.7.)
!pip install google-colab==1.0.0
# install colab's dependencies
!python -m pip install ipython==7.9.0 ipython_genutils==0.2.0 ipykernel==5.3.4 jupyter_console==6.1.0 prompt_toolkit==2.0.10 httplib2==0.17.4 astor==0.8.1 traitlets==5.7.1 google==2.0.3
This way, I solved the crashing runtime error.
Simple as that:
!wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py39_4.9.2-Linux-x86_64.sh
!chmod +x mini.sh
!bash ./mini.sh -b -f -p /usr/local
!conda install -q -y jupyter
!conda install -q -y google-colab -c conda-forge
!python -m ipykernel install --name "py39" --user
Source: https://colab.research.google.com/drive/1m47aWKayWTwqJG--x94zJMXolCEcfyPS?usp=sharing#scrollTo=r3sLiMIs8If3

Install NumPy requirement in a Dockerfile results in error

I am attempting to install a numpy dependency inside a Docker container (my code heavily uses it). On building the container, the numpy library simply does not install and the build fails. This is on raspbian-buster/stretch. It does, however, work when building the container on macOS.
I suspect some kind of Python-related issue, but cannot for the life of me figure out how to make it work.
I should point out that removing pip install numpy from the requirements file and using it in its own RUN statement in the Dockerfile does not solve the issue.
The Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
The requirements.txt contains all the project requirements, among which is numpy.
Step 6/15 : RUN pip install numpy==1.14.3
---> Running in 266a2132b078
Collecting numpy==1.14.3
Downloading https://files.pythonhosted.org/packages/b0/2b/497c2bb7c660b2606d4a96e2035e92554429e139c6c71cdff67af66b58d2/numpy-1.14.3.zip (4.9MB)
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py): started
Building wheel for numpy (setup.py): still running...
Building wheel for numpy (setup.py): still running...
EDIT:
So after the comment by skybunk, the suggestion to head to the official docs, and some more debugging on my part, the solution wound up being pretty simple. Thanks skybunk, to you goes all the glory. Yay.
Solution:
Use Alpine, install the Python package build dependencies, and upgrade pip before doing a pip install of the requirements.
This is my edited Dockerfile - working, obviously...
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev \
libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
To use NumPy on Python 3 here, we first head over to the official documentation to find which dependencies are required to build NumPy.
Mainly these 5 packages + their dependencies must be installed:
Python3 - 70 mb
Python3-dev - 25 mb
gfortran - 20 mb
gcc - 70 mb
musl-dev - 10 mb (used for tracking unexpected behaviour/debugging)
A POC setup would look something like this -
Dockerfile:
FROM gliderlabs/alpine
ADD repositories.txt /etc/apk/repositories
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev
ADD requirements-pip.txt .
RUN pip3 install --upgrade pip setuptools && \
pip3 install -r requirements-pip.txt
ADD . /app
WORKDIR /app
ENV PYTHONPATH=/app/
ENTRYPOINT python3 testscript.py
repositories.txt
http://dl-5.alpinelinux.org/alpine/v3.4/main
requirements-pip.txt
numpy
testscript.py
import numpy as np
def random_array(a, b):
return np.random.random((a, b))
a = random_array(2,2)
b = random_array(2,2)
print(np.dot(a,b))
To run this, clone gliderlabs/alpine and build it using "docker build -t gliderlabs/alpine ."
Build and Run your Dockerfile
docker build -t minidocker .
docker run minidocker
The output should be something like this:
[[ 0.03573961 0.45351115]
[ 0.28302967 0.62914049]]
Here's the git link, if you want to test it out
From the error logs, it does not seem that the failure comes from numpy, but you can install numpy before the requirements.txt and verify that it works:
FROM python:3.6
RUN pip install numpy==1.14.3
Build
docker build -t numpy .
Run and Test
docker run numpy bash -c "echo import numpy as np > test.py ; python test.py"
So you will see no error on the import.
Or you can try numpy as an Alpine package:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
Or better, post the requirements.txt.
I had a lot of trouble with this issue using FROM python:3.9-buster and pandas.
My requirements.txt had python-dev-tools, numpy, and pandas, along with other packages.
I always got failing pandas/numpy builds when attempting to build the image (the error output and the preceding log excerpts from the original post are not reproduced here).
Following hints by Adiii in this thread, I did some debugging and found that this actually works and builds a perfectly running container:
RUN pip3 install NumPy==1.18.0
RUN pip3 install python-dev-tools
RUN pip3 install pandas
RUN pip3 install -r requirements.txt
So, giving the pip3 install of pandas its own RUN layer solved the problem!
Another method is to install from the 'slim' distribution of Python (based on Debian):
FROM python:slim
RUN pip install numpy
123 MB
This results in a smaller image than that of Alpine:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
187 MB
Plus it gives better support for other whl libraries, since slim is based on glibc (against which all the wheels are built) while Alpine uses musl (incompatible with those wheels), so all packages have to be either apk-added or compiled from source.

Install "Ifcopenshell" in google-colaboratory

I've tried: import ifcopenshell
After that I tried: !pip install -q ifcopenshell
And later: !apt-get -qq install -y ifcopenshell
I had an error in all three cases:
Could not find a version that satisfies the requirement ifcopenshell (from versions: )
No matching distribution found for ifcopenshell
How can I install "ifcopenshell" in Google Colaboratory?
Thanks in advance
Use conda:
!wget -c https://repo.continuum.io/archive/Anaconda3-5.1.0-Linux-x86_64.sh
!chmod +x Anaconda3-5.1.0-Linux-x86_64.sh
!bash ./Anaconda3-5.1.0-Linux-x86_64.sh -b -f -p /usr/local
!conda install -c conda-forge -c oce -c dlr-sc -c ifcopenshell ifcopenshell
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages/')
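Afterwards, a quick import check (a minimal sketch; ifcopenshell.version is assumed to be available as the package's version string):
import ifcopenshell
print(ifcopenshell.version)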
I believe we must leave it as Linux, as we are cloning a repo inside Google Colab.
Just open a notebook in Google Colab, copy and paste the exact lines previously suggested, and run.

conda install -c conda-forge tensorflow just stuck in Solving environment

I am trying to run this statement on macOS:
conda install -c conda-forge tensorflow
It just gets stuck at
Solving environment:
and never finishes.
$ conda --version
conda 4.5.12
Nothing worked until I ran this in the conda terminal:
conda upgrade conda
Note that this was for poppler (conda install -c conda-forge poppler).
On Win10 I waited about 5-6 minutes, but it depends on the number of installed Python packages and your internet connection.
Also, you can install it via Anaconda Navigator.
One can also resolve the "Solving environment" issue by using the mamba package manager.
I installed tensorflow-gpu==2.6.2 on Linux (CentOS Stream 8) using the following commands:
conda create --name deeplearning python=3.8
conda activate deeplearning
conda install -c conda-forge mamba
mamba install -c conda-forge tensorflow-gpu
To check that the GPU is used successfully, simply run either of these commands:
python -c "import tensorflow as tf;print('\n\n\n====================== \n GPU Devices: ',tf.config.list_physical_devices('GPU'), '\n======================')"
python -c "import tensorflow as tf;print('\n\n\n====================== \n', tf.reduce_sum(tf.random.normal([1000, 1000])), '\n======================' )"
References
Conda Forge blog post
mamba install instead of conda install
The same error happened to me. I tried to install tensorboard from the Anaconda prompt, but it got stuck on solving the environment. So I added these paths to my environment variables:
C:\Anaconda3
C:\Anaconda3\Library\mingw-w64\bin
C:\Anaconda3\Library\usr\bin
C:\Anaconda3\Library\bin
C:\Anaconda3\Scripts
and it worked well.
Follow the instruction by nekomatic.
I left it running for 1 hour and yes, it finally finished.
But now I got conflicts:
Solving environment: failed
UnsatisfiableError: The following specifications were found to be in conflict:
- anaconda==2018.12=py37_0 -> bleach==3.0.2=py37_0
- anaconda==2018.12=py37_0 -> html5lib==1.0.1=py37_0
- anaconda==2018.12=py37_0 -> numexpr==2.6.8=py37h7413580_0
- anaconda==2018.12=py37_0 -> scikit-learn==0.20.1=py37h27c97d8_0
- tensorflow
Use "conda info <package>" to see the dependencies for each package.

Import tensorflow error: no module named tensorflow in Google Cloud

I remotely connected to my gcloud VM (Compute Engine) using SSH through the gcloud SDK shell and PuTTY.
I created a sample Python script as per the quickstart:
https://cloud.google.com/tpu/docs/quickstart
When trying to run the script I get the error: no module named tensorflow.
I have both Python 2.7.14 and 3.5.4 installed locally. I can run Python scripts locally but not in the gcloud shell.
Any help is greatly appreciated.
Thanks
TensorFlow packages have to be installed if you want to use them.
First you have to install pip if you haven't done so already:
sudo apt-get update
sudo apt-get -y upgrade \
&& sudo apt-get install -y python-pip python-dev
Once you have pip installed, install the TensorFlow package:
sudo pip install tensorflow
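To verify the installation (a quick sanity check):
python -c "import tensorflow as tf; print(tf.__version__)"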
You can follow a step-by-step tutorial on how to set up a VM instance with TensorFlow in Google Cloud here.