Install Numpy Requirement in a Dockerfile. Results in error - numpy

I am attempting to install a numpy dependency inside a Docker container (my code heavily uses it). On building the container, the numpy library simply does not install and the build fails. This is on Raspbian buster/stretch. The build does, however, work on macOS.
I suspect some kind of Python-related issue, but cannot for the life of me figure out how to make it work.
I should point out that moving numpy out of the requirements file and installing it in its own RUN statement in the Dockerfile does not solve the issue.
The Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
The requirements.txt contains all the project requirements, among which is numpy.
Step 6/15 : RUN pip install numpy==1.14.3
---> Running in 266a2132b078
Collecting numpy==1.14.3
Downloading https://files.pythonhosted.org/packages/b0/2b/497c2bb7c660b2606d4a96e2035e92554429e139c6c71cdff67af66b58d2/numpy-1.14.3.zip (4.9MB)
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py): started
Building wheel for numpy (setup.py): still running...
Building wheel for numpy (setup.py): still running...
EDIT:
So after the comment by skybunk, the suggestion to head to the official docs, and some more debugging on my part, the solution wound up being pretty simple. Thanks skybunk, to you goes all the glory. Yay.
Solution:
Use Alpine, install the system packages needed to build the Python dependencies, and upgrade pip before doing a pip install of the requirements.
This is my edited Dockerfile - working obviously...
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev \
libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .

To use Numpy on python3 here, we first head over to the official documentation to find what dependencies are required to build Numpy.
Mainly these 5 packages + their dependencies must be installed:
Python3 - 70 mb
Python3-dev - 25 mb
gfortran - 20 mb
gcc - 70 mb
musl-dev - 10 mb (used for tracking unexpected behaviour/debugging)
A POC setup would look something like this:
Dockerfile:
FROM gliderlabs/alpine
ADD repositories.txt /etc/apk/repositories
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev
ADD requirements-pip.txt .
RUN pip3 install --upgrade pip setuptools && \
pip3 install -r requirements-pip.txt
ADD . /app
WORKDIR /app
ENV PYTHONPATH=/app/
ENTRYPOINT python3 testscript.py
repositories.txt
http://dl-5.alpinelinux.org/alpine/v3.4/main
requirements-pip.txt
numpy
testscript.py
import numpy as np
def random_array(a, b):
return np.random.random((a, b))
a = random_array(2,2)
b = random_array(2,2)
print(np.dot(a,b))
To run this - clone alpine, build it using "docker build -t gliderlabs/alpine ."
Build and Run your Dockerfile
docker build -t minidocker .
docker run minidocker
Output should be something like this-
[[ 0.03573961 0.45351115]
[ 0.28302967 0.62914049]]
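A more deterministic sanity check than the random-matrix script above, sketched under the assumption that numpy installed correctly: multiplying by the identity matrix should return the original values.

```python
import numpy as np

a = np.arange(4).reshape(2, 2)   # [[0, 1], [2, 3]]
identity = np.eye(2)             # 2x2 identity matrix
product = np.dot(a, identity)    # identity leaves `a` unchanged (as floats)
print(product.tolist())          # [[0.0, 1.0], [2.0, 3.0]]
```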
Here's the git link, if you want to test it out

From the error logs, it does not seem that the problem comes from numpy, but you can install numpy before the requirements.txt and verify whether it works.
FROM python:3.6
RUN pip install numpy==1.14.3
Build
docker build -t numpy .
Run and Test
docker run numpy bash -c "echo import numpy as np > test.py ; python test.py"
So you will see no error on import.
Or you can try numpy as an Alpine package:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
Or better, post the requirements.txt.

I had a lot of trouble with this issue using FROM python:3.9-buster and pandas.
My requirements.txt had the python-dev-tools, numpy and pandas, along with other packages.
I always got a chain of build errors when attempting to build.
Following hints by Adiii in this thread, I did some debugging and found out that this actually works and builds a perfectly running container:
RUN pip3 install NumPy==1.18.0
RUN pip3 install python-dev-tools
RUN pip3 install pandas
RUN pip3 install -r requirements.txt
So, giving a specific RUN layer to the pip3 installing pandas solved the problem!

Another method is to install from the 'slim' distribution of python (based on debian):
FROM python:slim
CMD pip install numpy
123 MB
This results in a smaller image than that of alpine:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
187 MB
Plus it gives better support for other .whl libraries, since slim is based on glibc (against which all the wheels are built) while Alpine uses musl (incompatible with those wheels), so on Alpine all packages have to be either apk-added or compiled from source.

Related

How to get a model for a non-released spaCy version (v3)?

I want to try out the develop branch of spaCy in order to test the features of v3. I built it successfully, but get the following message when trying to download a model:
'No compatible models found for v3.0.0 of spaCy'
What can I do? How are contributors supposed to get models for a non-released version?
This is the Dockerfile I used:
FROM python:3.8
RUN apt-get update && apt-get install -y \
git \
make \
&& rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip
RUN git clone https://github.com/explosion/spaCy /spaCy
WORKDIR /spaCy
RUN git checkout develop
ENV PYTHONPATH=.
RUN pip install -r requirements.txt
RUN python setup.py build_ext --inplace
RUN python setup.py install
RUN python -m spacy download en_core_web_sm

keras failed to import pydot

I'm trying to run the Pix2Pix tutorial for Tensorflow. I'm using the official docker container for this. This is how I start my container:
docker run --gpus all -it -p 8888:8888 --rm -v $PWD:/tf -w /tmp tensorflow/tensorflow:latest-gpu-py3-jupyter
I'm not able to get past this cell:
generator = Generator()
tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)
# output -> Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work.
I have also tried installing pydot and graphviz using pip and also apt-get. Even though these libraries are installed, I get the same error.
I had the same problem and followed this link.
In short, run these commands at the command prompt:
pip install pydot
pip install graphviz
From the website, download and install the Graphviz software.
Note: during installation, check the "add to system path" option to add the bin folder to the PATH variable; otherwise you will have to do it manually. Then restart Windows.
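The underlying issue is usually that the Graphviz dot executable is missing from the PATH even when the pip packages are present. A quick check from Python, sketched with only the standard library:

```python
import shutil

# pydot shells out to the Graphviz `dot` binary; the pip packages alone are not enough
dot_path = shutil.which("dot")
if dot_path is None:
    print("Graphviz 'dot' not found on PATH; install the Graphviz software itself")
else:
    print("Found dot at", dot_path)
```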

osmNX in Google Colab

For my purposes I require osmNX in Google Colab
Has anyone done this before? I use the following commands:
!wget https://repo.anaconda.com/archive/Anaconda3-2019.07-Linux-x86_64.sh && bash Anaconda3-2019.07-Linux-x86_64.sh -bfp /usr/local
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages')
!conda config --prepend channels conda-forge
The command:
!conda info --envs
Shows that the environment is created successfully.
When I run the command:
!conda activate ox
The error is displayed:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
The command
!conda init bash
has no effect.
Thanks for the help
!apt-get -qq install -y libspatialindex-dev && pip install -q -U osmnx
import osmnx as ox
ox.config(use_cache=True, log_console=True)
You can use these commands:
!pip install geopandas==0.10.0
!pip install matplotlib==3.4
!pip install networkx==2.6
!pip install numpy==1.21
!pip install pandas==1.3
!pip install pyproj==3.2
!pip install requests==2.26
!pip install Rtree==0.9
!pip install Shapely==1.7
!pip install osmnx
I installed the respective packages based on the requirements provided in this link: https://github.com/gboeing/osmnx/blob/main/requirements.txt. It has worked in my application so far; hope it works for you too.
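To confirm that pinned packages actually ended up installed, one option is the standard-library importlib.metadata; a small sketch (the package names are just the ones pinned above):

```python
from importlib import metadata

def installed_version(pkg):
    """Return the installed version string, or None if the package is absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

for pkg in ("numpy", "pandas", "osmnx"):
    print(pkg, installed_version(pkg))
```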
Alternatively, similar to another answer, you can use the code below, found in https://stackoverflow.com/a/65378540/18403512:
!apt install libspatialindex-dev
!pip install osmnx
The answer would be similar to running osmnx on any docker or external server.
I tried it and almost got there, maybe someone can help make it complete.
So let's start with the basic osmnx installation:
conda config --prepend channels conda-forge
conda create -n ox --strict-channel-priority osmnx
Then, let's look at how this can be done on a remote Docker host, e.g. Travis CI (a working sample .travis.yml from one of my repos):
- bash miniconda.sh -b -p $HOME/miniconda
- source "$HOME/miniconda/etc/profile.d/conda.sh"
- hash -r
- conda config --set always_yes yes --set changeps1 no
- conda update -q conda
# Useful for debugging any issues with conda
- conda info -a
- conda config --prepend channels conda-forge
- conda create -n ox --strict-channel-priority osmnx
- conda activate ox
Then we may take a look at how to have conda in colab and use this snippet:
%%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
which then finally boils down to this almost working notebook, based on this post.
What is not working is switching between environments: !conda env list returns ox as one of the environments, yet activating it fails:
!conda activate ox
raises:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.

Why does it take ages to install Pandas on Alpine Linux

I've noticed that installing Pandas and Numpy (its dependency) in a Docker container takes much longer when the base OS is Alpine vs. CentOS or Debian. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies needed to install Pandas and Numpy, why does the setup.py build take around 70x more time than on the Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
Dockerfile.debian
FROM python:3.6.4-slim-jessie
RUN pip install pandas
Build Debian image with Pandas & Numpy:
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
Dockerfile.alpine
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
Build Alpine image with Pandas & Numpy:
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
Debian-based images use pip alone to install packages in the prebuilt .whl format:
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
The WHL format was developed as a quicker and more reliable method of installing Python software than rebuilding from source code every time. WHL files only have to be moved to the correct location on the target system to be installed, whereas a source distribution requires a build step before installation.
Prebuilt pandas and numpy wheels are not supported on Alpine-based images. That's why, when we install them with pip during the image build, they are always compiled from source on Alpine:
Downloading pandas-0.22.0.tar.gz (11.3MB)
Downloading numpy-1.14.1.zip (4.9MB)
and we can see the following inside container during the image building:
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 /bin/sh -c pip install pandas
7 root 0:04 {pip} /usr/local/bin/python /usr/local/bin/pip install pandas
21 root 0:07 /usr/local/bin/python -c import setuptools, tokenize;__file__='/tmp/pip-build-en29h0ak/pandas/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n
496 root 0:00 sh
660 root 0:00 /bin/sh -c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/pri
661 root 0:00 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/private -Inump
662 root 0:00 /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/cc1 -quiet -I build/src.linux-x86_64-3.6/numpy/core/src/private -I numpy/core/include -I build/src.linux-x86_64-3.6/numpy/core/includ
663 root 0:00 ps aux
If we modify Dockerfile a little:
FROM python:3.6.4-alpine3.7
RUN apk add --no-cache g++ wget
RUN wget https://pypi.python.org/packages/da/c6/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
we get the following error:
Step 4/4 : RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
---> Running in 0faea63e2bda
pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform.
The command '/bin/sh -c pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl' returned a non-zero code: 1
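The platform tag pip rejects here is encoded in the wheel filename itself ({dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl per the wheel spec); a small sketch pulling the tags apart:

```python
def wheel_tags(filename):
    # wheel filenames follow {dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    parts = filename[:-len(".whl")].split("-")
    return {"python": parts[-3], "abi": parts[-2], "platform": parts[-1]}

# manylinux1 targets glibc, which is why musl-based Alpine rejects this wheel
print(wheel_tags("pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl"))
# {'python': 'cp36', 'abi': 'cp36m', 'platform': 'manylinux1_x86_64'}
```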
Unfortunately, the only way to install pandas on an Alpine image is to wait until build finishes.
Of course if you want to use the Alpine image with pandas in CI for example, the best way to do so is to compile it once, push it to any registry and use it as a base image for your needs.
EDIT:
If you want to use the Alpine image with pandas, you can pull my nickgryg/alpine-pandas Docker image. It is a Python image with pre-compiled pandas on the Alpine platform. It should save you time.
ANSWER: AS OF 3/9/2020, FOR PYTHON 3, IT STILL DOESN'T!
Here is a complete working Dockerfile:
FROM python:3.7-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
The build is very sensitive to the exact Python and Alpine version numbers; getting these wrong seems to provoke Max Levy's error "so:libpython3.7m.so.1.0 (missing)". But the above does now work for me.
My updated Dockerfile is available at https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b
[Earlier Update:]
ANSWER: IT DOESN'T!
In any Alpine Dockerfile you can simply do*
RUN apk add py2-numpy@community py2-scipy@community py-pandas@edge
This is because numpy, scipy and now pandas are all available prebuilt on alpine:
https://pkgs.alpinelinux.org/packages?name=*numpy
https://pkgs.alpinelinux.org/packages?name=*scipy&branch=edge
https://pkgs.alpinelinux.org/packages?name=*pandas&branch=edge
One way to avoid rebuilding every time, or using a Docker layer, is to use a prebuilt, native Alpine Linux/.apk package, e.g.
https://github.com/sgerrand/alpine-pkg-py-pandas
https://github.com/nbgallery/apks
You can build these .apks once and use them wherever in your Dockerfile you like :)
This also saves you having to bake everything else into the Docker image before the fact - i.e. the flexibility to pre-build any Docker image you like.
PS I have put a Dockerfile stub at https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b that shows roughly how to build the image. These include the important steps (*):
RUN echo "@community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk update
RUN apk add --update --no-cache libgfortran
Real, honest advice here: switch to a Debian-based image and all your problems will be gone.
Alpine doesn't work well for Python applications.
Here is an example of my dockerfile:
FROM python:3.7.6-buster
RUN pip install pandas==1.0.0
RUN pip install sklearn
RUN pip install Django==3.0.2
RUN pip install cx_Oracle==7.3.0
RUN pip install excel
RUN pip install djangorestframework==3.11.0
The python:3.7.6-buster image is more appropriate in this case; in addition, you don't need any extra OS dependencies. Here is a useful and recent article: https://pythonspeed.com/articles/alpine-docker-python/:
Don’t use Alpine Linux for Python images
Unless you want massively slower build times, larger images, more work, and the potential for obscure bugs, you’ll want to avoid Alpine Linux as a base image. For some recommendations on what you should use, see my article on choosing a good base image.
Just going to bring some of these answers together in one answer and add a detail I think was missed. The reason certain Python libraries, particularly optimized math and data libraries, take so long to build on Alpine is that the pip wheels for these libraries include binaries precompiled from C/C++ and linked against gnu-libc (glibc), a common set of C standard libraries. Debian, Fedora and CentOS all (typically) use glibc, but Alpine, in order to stay lightweight, uses musl-libc instead. C/C++ binaries built on a glibc system will not work on a system without glibc, and the same goes for musl.
Pip looks first for a wheel with the correct binaries, if it can't find one, it tries to compile the binaries from the c/c++ source and links them against musl. In many cases, this won't even work unless you have the python headers from python3-dev or build tools like make.
Now the silver lining, as others have mentioned, there are apk packages with the proper binaries provided by the community, using these will save you the (sometimes lengthy) process of building the binaries.
You can, in fact, install from a pure python .whl on alpine, but, at the time of this writing, manylinux did not support binary distributions for alpine due to the musl/gnu issue.
Update Oct 2022
Newer versions of python/pip support musl via the package musllinux which, I assume, is a musl impl for manylinux. Still no official 'musl' support for CUDA though.
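A quick way to check which libc a given interpreter runs against, sketched with the standard library (platform.libc_ver only detects glibc, so an empty result on Linux usually means musl):

```python
import platform

libc, version = platform.libc_ver()
# Debian-based images report ('glibc', '<version>'); on Alpine/musl this is typically ('', '')
print(libc if libc else "not glibc (on Linux, likely musl)")
```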
ATTENTION
Look at @jtlz2's answer with the latest update.
OUTDATED
So, the py3-pandas and py3-numpy packages moved to the testing Alpine repository, so you can get them by adding these lines to your Dockerfile:
RUN echo "http://dl-8.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
&& apk update \
&& apk add py3-numpy py3-pandas
Hope it helps someone!
Alpine packages links:
- py3-pandas
- py3-numpy
Alpine repositories docs info.
In this case Alpine may not be the best solution; change alpine to slim:
FROM python:3.8.3-alpine
Change to that:
FROM python:3.8.3-slim
In my case it was resolved with this small change.
This worked for me:
FROM python:3.8-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
ENV PYTHONPATH=/usr/lib/python3.8/site-packages
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5003
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
Most of the code here is from jtlz2's answer in this same thread and Faylixe's answer in another thread.
It turns out the lighter build of pandas is in the Alpine repository (py3-pandas, with py3-numpy as its dependency), but it doesn't get installed on the path from which Python reads imports by default. Therefore you need to add the ENV. Also be mindful of the Alpine version.
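To check whether the apk-installed location is visible to the interpreter, you can inspect and extend sys.path at runtime; the directory below is illustrative (find the real one with apk info -L py3-pandas):

```python
import sys

apk_site = "/usr/lib/python3.8/site-packages"  # illustrative apk install location
if apk_site not in sys.path:
    sys.path.append(apk_site)  # equivalent to setting PYTHONPATH before startup
print(apk_site in sys.path)    # True
```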
I have solved the installation with some additional changes:
Requirements
Migrate from python3.8-alpine to python3.10-alpine:
docker pull python:3.10-alpine
Important!
I had to migrate because when I was installing py3-pandas, the package was installed for python3.10, not for the version I was using (python3.8).
To figure out where the libraries of a package were installed, you can check that with the following command:
apk info -L py3-pandas
Do not install the backports.zoneinfo package on python3.9 or later (I had to add a condition in the requirements.txt to install the package only for versions lower than 3.9):
backports.zoneinfo==0.2.1;python_version<"3.9"
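The environment marker above keeps pip from pulling the backport onto newer interpreters; the matching import pattern in code is a sketch like this (assuming the backport is installed on pre-3.9 interpreters):

```python
import sys

if sys.version_info >= (3, 9):
    from zoneinfo import ZoneInfo            # in the standard library since Python 3.9
else:
    from backports.zoneinfo import ZoneInfo  # provided by backports.zoneinfo==0.2.1

print(ZoneInfo("UTC"))  # UTC
```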
Installation
After the previous changes, I proceed to install panda performing the following:
Add 3 additional repositories to /etc/apk/repositories (the repositories can vary based on the version of your distribution), reference here:
for x in $(echo "main community testing"); \
do echo "https://dl-cdn.alpinelinux.org/alpine/edge/${x}" >> /etc/apk/repositories; \
done
Validate the content of the file /etc/apk/repositories:
$ cat /etc/apk/repositories
https://dl-cdn.alpinelinux.org/alpine/v3.16/main
https://dl-cdn.alpinelinux.org/alpine/v3.16/community
https://dl-cdn.alpinelinux.org/alpine/edge/main
https://dl-cdn.alpinelinux.org/alpine/edge/community
https://dl-cdn.alpinelinux.org/alpine/edge/testing
Install pandas (numpy is installed automatically as a dependency of pandas):
sudo apk update && sudo apk add py3-pandas
Set the environment variable PYTHONPATH:
export PYTHONPATH=/usr/lib/python3.10/site-packages/
Validate that the packages can be imported (in my case I tested it with django):
python manage.py shell
import pandas as pd
import numpy as np
technologies = ['Spark','Pandas','Java','Python', 'PHP']
fee = [25000,20000,15000,15000,18000]
duration = ['50 Days','35 Days',np.nan,'30 Days', '30 Days']
discount = [2000,1000,800,500,800]
columns=['Courses','Fee','Duration','Discount']
df = pd.DataFrame(list(zip(technologies,fee,duration,discount)), columns=columns)
print(df)
pandas is considered a community supported package, so the answers pointing to edge/testing are not going to work as Alpine does not officially support pandas as a core package (it still works, it's just not supported by the core Alpine developers).
Try this Dockerfile:
FROM python:3.8-alpine
RUN echo "@community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories \
&& apk add py3-pandas@community
ENV PYTHONPATH="/usr/lib/python3.8/site-packages"
This works for the vanilla Alpine image too, using FROM alpine:3.12.
Update: thanks to @cegprakash for raising the question about how to work with this setup when you also have a requirements.txt file that must be satisfied inside the container.
I added one line to the Dockerfile snippet to export the PYTHONPATH variable into the container runtime. If you do this, it won't matter whether pandas or numpy are included in the requirements file or not (provided they are pegged to the same version that was installed via apk).
The reason this is needed is that apk installs the py3-pandas@community package under /usr/lib, but that location is not on the default PYTHONPATH that pip checks before installing new packages. If we don't include this step to add it, pip and python will not find the package, and pip will try to download and install it under /usr/local, which is what we're trying to avoid.
And given that we really want to make sure that pip doesn't try to install pandas, I would suggest to not include pandas or numpy in the requirements.txt file if you've already installed them with apk using the above method. It's just a little extra insurance that things will go as intended.
The following Dockerfile worked for me to install pandas, among other dependencies as listed below.
python:3.10-alpine Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10-alpine as base
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc g++ libc-dev linux-headers postgresql-dev build-base \
&& apk add libffi-dev
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt
pyproject.toml dependencies
python = "^3.10"
Django = "^3.2.9"
djangorestframework = "^3.12.4"
PyYAML = ">=5.3.0,<6.0.0"
Markdown = "^3.3.6"
uritemplate = "^4.1.1"
install = "^1.3.5"
drf-spectacular = "^0.21.0"
django-extensions = "^3.1.5"
django-filter = "^21.1"
django-cors-headers = "^3.10.1"
httpx = "^0.22.0"
channels = "^3.0.4"
daphne = "^3.0.2"
whitenoise = "^6.2.0"
djoser = "^2.1.0"
channels-redis = "^3.4.0"
pika = "^1.2.1"
backoff = "^2.1.2"
psycopg2-binary = "^2.9.3"
pandas = "^1.5.0"
Alpine takes a lot of time to install pandas, and the image size is also huge. I tried the python:3.8-slim-buster variant of the Python base image. The image build was very fast and the image size was less than half that of the Alpine Python Docker image.
https://github.com/dguyhasnoname/k8s-cluster-checker/blob/master/Dockerfile

Installing seaborn on Docker Alpine

I am trying to install seaborn with this Dockerfile:
FROM alpine:latest
RUN apk add --update python py-pip python-dev
RUN pip install seaborn
CMD python
The error I get is related to numpy and scipy (required by seaborn). It starts with:
/tmp/easy_install-nvj61E/numpy-1.11.1/setup.py:327: UserWarning:
Unrecognized setuptools command, proceeding with generating Cython
sources and expanding templates
and ends with
File "numpy/core/setup.py", line 654, in get_mathlib_info
RuntimeError: Broken toolchain: cannot link a simple C program
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-DZ4cXr/scipy/
The command '/bin/sh -c pip install seaborn' returned a non-zero code: 1
Any idea how I can fix this?
To fix this error, you need to install gcc: apk add gcc.
But you will see that you will hit a new error, as numpy, matplotlib and scipy have several dependencies. You also need to install gfortran, musl-dev, freetype-dev, etc.
Here is a Dockerfile, based on your initial one, that installs those dependencies as well as seaborn:
FROM alpine:latest
# install dependencies
# the lapack package is only in the community repository
RUN echo "http://dl-4.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk --update add --no-cache \
lapack-dev \
gcc \
freetype-dev
RUN apk add python py-pip python-dev
# Install dependencies
RUN apk add --no-cache --virtual .build-deps \
gfortran \
musl-dev \
g++
RUN ln -s /usr/include/locale.h /usr/include/xlocale.h
RUN pip install seaborn
# removing dependencies
RUN apk del .build-deps
CMD python
You'll notice that I'm removing the dependencies using apk del .build-deps to limit the size of your image (http://www.sandtable.com/reduce-docker-image-sizes-using-alpine/).
Personally I also had to install ca-certificates but it seems you didn't have this issue.
Note: You could also build your image FROM the python:2.7-alpine image to avoid installing python and pip yourself.