Installing pandas with a Dockerfile for ARM/V7 architecture gets stuck - pandas

I want to build a Docker image based on a Python 3.8 image and then install some requirements, including pandas, on an ARM/V7 platform. But when it comes to installing the pip requirements, the process gets stuck.
Is there any way to use a different base image, or to change something else in the Dockerfile, so that pandas runs in a Docker image on a device with ARM/V7 architecture?
Here is the Dockerfile:
FROM python:3.8@sha256:45fbccbc4681e8d9ef517d43f9d0eb8f17b2beac00b0f9697bbf85354ae8a266
WORKDIR /app
EXPOSE 8002/tcp
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python3", "-m", "monitoring"]
The requirements.txt:
pandas==1.3.3
requests==2.26.0
When I build the image with docker build -t rfu . I get the following output:
Sending build context to Docker daemon 25.6kB
Step 1/7 : FROM python:3.8@sha256:45fbccbc4681e8d9ef517d43f9d0eb8f17b2beac00b0f9697bbf85354ae8a266
---> 0c665e140292
Step 2/7 : WORKDIR /app
---> Running in a5e6772c20f9
Removing intermediate container a5e6772c20f9
---> 5e7807e6f975
Step 3/7 : EXPOSE 8002/tcp
---> Running in d9f6cdc8aca1
Removing intermediate container d9f6cdc8aca1
---> bcd057ff5d87
Step 4/7 : COPY requirements.txt requirements.txt
---> d11ccce85d46
Step 5/7 : RUN pip install -r requirements.txt
---> Running in cd920fa4c18d
Collecting pandas==1.3.3
Downloading pandas-1.3.3.tar.gz (4.7 MB)
Installing build dependencies: started
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
This is where the process gets stuck.

It is most likely just taking a very long time to compile pandas from source on the ARM device. Use the precompiled packages from https://www.piwheels.org/ instead:
RUN pip install --index-url=https://www.piwheels.org/simple --no-cache-dir -r requirements.txt
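As a rough sketch (not part of the original answer), the asker's Dockerfile could be adapted like this; using --extra-index-url instead of --index-url is an assumption that keeps PyPI available for anything piwheels does not carry:
FROM python:3.8
WORKDIR /app
EXPOSE 8002/tcp
COPY requirements.txt requirements.txt
# piwheels serves wheels prebuilt for the Raspberry Pi's ARM CPUs, so pip can
# skip the long source build; PyPI stays available as the primary index.
RUN pip install --extra-index-url=https://www.piwheels.org/simple --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "-m", "monitoring"]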

Related

Jenkins build not running after a successful build the first time

I am setting up a Jenkins job to run my Pytest project from GitHub, and I successfully completed the configuration for the job.
This is what my custom Python builder command looks like:
python3 -m venv venv
source venv/bin/activate
python3 -m pip install --upgrade pip -r requirements.txt
export CHROMEDRIVER_PATH="/Users/Nprashanth/drivers/chromedriver"
pytest -m "regression" tests/regression/regression_test.py --metadata "Scanner" HG20320005 --metadata "S/W Release" 8.5.1 --metadata "S/W Build" 6.0.1.2212091223 --headless
Please find below my console output, where it gets stuck and does nothing.
The first time I ran this build, it ran through all my tests successfully. I am having this issue the second time I run the build, and even if I make a new project with a new config I have the same issue.
I also tried deleting the virtual environment directory that Jenkins creates and running the build again, but I still have the same issue.

Pandas error when installing on Gitlab runner YAML

I am attempting to deploy a Python AWS CDK app through a GitLab runner automated deploy using a .gitlab-ci.yml script. The failure is happening at the synth stage:
cdk-synth:
  stage: cdk-synth
  cache: []
  variables:
    CDK_PREFIX: $CDK_PREFIX
  tags:
    - aws-cdk
  image: node:14-alpine
  before_script:
    - apk add --no-cache python3 py3-pip
    - npm install -g aws-cdk@2.15.0
    - npm install -g cdk-assume-role-credential-plugin
    - python3 -m venv .venv
    - source .venv/bin/activate
    - pip install --upgrade pip
    - pip install -r requirements.txt
  script:
    - echo $CDK_PREFIX and "$CDK_PREFIX"
    - cdk synth
This script was working successfully without error until adding pandas to the requirements.txt file below:
aws-cdk-lib==2.15.0
constructs>=10.0.0,<11.0.0
urllib3==1.26.8
requests==2.27.1
boto3==1.20.51
pandas==1.4.0
The automated deploy pipeline is now failing with an error of:
Collecting pandas==1.4.0
Downloading pandas-1.4.0.tar.gz (4.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 93.8 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'error'
error: subprocess-exited-with-error
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
I have attempted re-aligning the versions and updating pip, without success. A Google search suggests there is a dependency issue with the 'cryptography' package, but does not suggest a clear fix besides uninstalling and re-installing pip, which did not correct the issue.
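No fix is confirmed here, but given the Alpine/musl discussion later on this page, a hedged guess is that the numpy build dependency has no wheel compatible with node:14-alpine's musl libc, so pip attempts a source build that needs a C toolchain. A sketch of the two usual workarounds, with the exact package names being an assumption:
# Option 1: add a build toolchain in before_script so the source build can finish
apk add --no-cache python3 python3-dev py3-pip gcc g++ musl-dev linux-headers
# Option 2: switch the job image from node:14-alpine to a Debian-based tag
# (for example node:14-buster) so pip can use prebuilt manylinux wheels instead.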

Install Numpy Requirement in a Dockerfile. Results in error

I am attempting to install a numpy dependency inside a Docker container (my code heavily uses it). On building the container, the numpy library simply does not install and the build fails. This is on Raspbian Buster/Stretch. It does, however, work when building the container on macOS.
I suspect some kind of Python-related issue, but cannot for the life of me figure out how to make it work.
I should point out that removing the pip install of numpy from the requirements file and putting it in its own RUN statement in the Dockerfile does not solve the issue.
The Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
The requirements.txt contains all the project requirements, among which is numpy.
Step 6/15 : RUN pip install numpy==1.14.3
---> Running in 266a2132b078
Collecting numpy==1.14.3
Downloading https://files.pythonhosted.org/packages/b0/2b/497c2bb7c660b2606d4a96e2035e92554429e139c6c71cdff67af66b58d2/numpy-1.14.3.zip (4.9MB)
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py): started
Building wheel for numpy (setup.py): still running...
Building wheel for numpy (setup.py): still running...
EDIT:
After the comment by skybunk, the suggestion to head to the official docs, and some more debugging on my part, the solution wound up being pretty simple. Thanks skybunk, to you goes all the glory. Yay.
Solution:
Use Alpine, install the Python package build dependencies, and upgrade pip before doing a pip install of the requirements.
This is my edited Dockerfile - working obviously...
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev \
libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
To use Numpy on python3 here, we first head over to the official documentation to find what dependencies are required to build Numpy.
Mainly these 5 packages + their dependencies must be installed:
Python3 - 70 mb
Python3-dev - 25 mb
gfortran - 20 mb
gcc - 70 mb
musl-dev -10 mb (used for tracking unexpected behaviour/debugging)
A POC setup would look something like this:
Dockerfile:
FROM gliderlabs/alpine
ADD repositories.txt /etc/apk/repositories
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev
ADD requirements-pip.txt .
RUN pip3 install --upgrade pip setuptools && \
pip3 install -r requirements-pip.txt
ADD . /app
WORKDIR /app
ENV PYTHONPATH=/app/
ENTRYPOINT python3 testscript.py
repositories.txt
http://dl-5.alpinelinux.org/alpine/v3.4/main
requirements-pip.txt
numpy
testscript.py
import numpy as np
def random_array(a, b):
    return np.random.random((a, b))

a = random_array(2, 2)
b = random_array(2, 2)
print(np.dot(a, b))
To run this - clone alpine, build it using "docker build -t gliderlabs/alpine ."
Build and Run your Dockerfile
docker build -t minidocker .
docker run minidocker
Output should be something like this-
[[ 0.03573961 0.45351115]
[ 0.28302967 0.62914049]]
Here's the git link, if you want to test it out
From the error logs, it does not seem that the error comes from numpy, but you can install numpy before the requirements.txt and verify whether it works.
FROM python:3.6
RUN pip install numpy==1.14.3
Build
docker build -t numpy .
Run and Test
docker run numpy bash -c "echo import numpy as np > test.py ; python test.py"
So you will see no error on import.
Or you can try numpy as an Alpine package:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
Or better, post the requirements.txt.
I had a lot of trouble with this issue using FROM python:3.9-buster and pandas.
My requirements.txt had the python-dev-tools, numpy and pandas, along with other packages.
I always got a chain of build errors when attempting to build (the error screenshots are not reproduced here).
Following hints by Adiii in this thread, I did some debugging and found out that this actually works and builds a perfectly running container:
RUN pip3 install NumPy==1.18.0
RUN pip3 install python-dev-tools
RUN pip3 install pandas
RUN pip3 install -r requirements.txt
So, giving a specific RUN layer to the pip3 installing pandas solved the problem!
Another method is to install from the 'slim' distribution of Python (based on Debian):
FROM python:slim
CMD pip install numpy
123Mb
This results in a smaller image than that of alpine:
FROM python:3-alpine3.9
RUN apk add --no-cache py3-numpy
187MB
Plus, it gives better support for other whl libraries, since slim is based on glibc (against which all the wheels are built) while Alpine uses musl (incompatible with those wheels), so on Alpine all packages have to be either installed with apk or compiled from source.

How could I build tensorflow-serving-api myself?

I have added some custom code in serving and I want to build my own tensorflow-serving-api for the customized serving, especially the ReloadConfig API. But I have no idea how to build it myself. It seems that I can only install tensorflow-serving-api from pip.
Any suggestions? Thanks.
Here's an example of how to do this:
# Pull the latest source code (or pick a release branch)
$ git clone https://github.com/tensorflow/serving .
# Make changes you'd want to make
# Build the pip package using the latest nightly docker build
$ tools/bazel_in_docker.sh bazel build --color=yes tensorflow_serving/tools/pip_package:build_pip_package
# Run the pip package builder using the latest nightly docker build
$ tools/bazel_in_docker.sh bazel-bin/tensorflow_serving/tools/pip_package/build_pip_package $(pwd)/pip
# Install the package that has your custom code
$ pip --no-cache-dir install --upgrade $(pwd)/pip/tensorflow_serving*.whl
# Clean up your extracted folder (optional)
$ rm -rf $(pwd)/pip

Why does it take ages to install Pandas on Alpine Linux

I've noticed that installing Pandas and Numpy (its dependency) in a Docker container takes much longer when using Alpine as the base OS rather than CentOS or Debian. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies for installing Pandas and Numpy, why does the setup.py step take around 70x more time than on a Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
Dockerfile.debian
FROM python:3.6.4-slim-jessie
RUN pip install pandas
Build Debian image with Pandas & Numpy:
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
Dockerfile.alpine
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
Build Alpine image with Pandas & Numpy:
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
Debian-based images install packages with pip using only the prebuilt .whl format:
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
WHL format was developed as a quicker and more reliable method of installing Python software than re-building from source code every time. WHL files only have to be moved to the correct location on the target system to be installed, whereas a source distribution requires a build step before installation.
Wheel packages for pandas and numpy are not supported on images based on the Alpine platform. That's why, when we install them using pip during the build process, they are always compiled from source on Alpine:
Downloading pandas-0.22.0.tar.gz (11.3MB)
Downloading numpy-1.14.1.zip (4.9MB)
and we can see the following inside the container during the image build:
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 /bin/sh -c pip install pandas
7 root 0:04 {pip} /usr/local/bin/python /usr/local/bin/pip install pandas
21 root 0:07 /usr/local/bin/python -c import setuptools, tokenize;__file__='/tmp/pip-build-en29h0ak/pandas/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n
496 root 0:00 sh
660 root 0:00 /bin/sh -c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/pri
661 root 0:00 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/private -Inump
662 root 0:00 /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/cc1 -quiet -I build/src.linux-x86_64-3.6/numpy/core/src/private -I numpy/core/include -I build/src.linux-x86_64-3.6/numpy/core/includ
663 root 0:00 ps aux
If we modify the Dockerfile a little:
FROM python:3.6.4-alpine3.7
RUN apk add --no-cache g++ wget
RUN wget https://pypi.python.org/packages/da/c6/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
we get the following error:
Step 4/4 : RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
---> Running in 0faea63e2bda
pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform.
The command '/bin/sh -c pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl' returned a non-zero code: 1
Unfortunately, the only way to install pandas on an Alpine image is to wait until the build finishes.
Of course if you want to use the Alpine image with pandas in CI for example, the best way to do so is to compile it once, push it to any registry and use it as a base image for your needs.
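A rough sketch of that compile-once-and-reuse workflow (the registry name and tag below are placeholders, not anything from this answer):
# build the slow Alpine + pandas image a single time and push it
docker build -t registry.example.com/alpine-pandas:0.22.0 -f Dockerfile.alpine .
docker push registry.example.com/alpine-pandas:0.22.0
# downstream Dockerfiles then start from the pre-compiled image:
# FROM registry.example.com/alpine-pandas:0.22.0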
EDIT:
If you want to use the Alpine image with pandas you can pull my nickgryg/alpine-pandas docker image. It is a python image with pre-compiled pandas on the Alpine platform. It should save your time.
ANSWER: AS OF 3/9/2020, FOR PYTHON 3, IT STILL DOESN'T!
Here is a complete working Dockerfile:
FROM python:3.7-alpine
RUN echo "#testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas#testing
The build is very sensitive to the exact python and alpine version numbers - getting these wrong seems to provoke Max Levy's error so:libpython3.7m.so.1.0 (missing) - but the above does now work for me.
My updated Dockerfile is available at https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b
[Earlier Update:]
ANSWER: IT DOESN'T!
In any Alpine Dockerfile you can simply do*
RUN apk add py2-numpy@community py2-scipy@community py-pandas@edge
This is because numpy, scipy and now pandas are all available prebuilt on alpine:
https://pkgs.alpinelinux.org/packages?name=*numpy
https://pkgs.alpinelinux.org/packages?name=*scipy&branch=edge
https://pkgs.alpinelinux.org/packages?name=*pandas&branch=edge
One way to avoid rebuilding every time, or using a Docker layer, is to use a prebuilt, native Alpine Linux/.apk package, e.g.
https://github.com/sgerrand/alpine-pkg-py-pandas
https://github.com/nbgallery/apks
You can build these .apks once and use them wherever in your Dockerfile you like :)
This also saves you having to bake everything else into the Docker image before the fact - i.e. the flexibility to pre-build any Docker image you like.
PS I have put a Dockerfile stub at https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b that shows roughly how to build the image. These include the important steps (*):
RUN echo "#community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk update
RUN apk add --update --no-cache libgfortran
Real honest advice here: switch to a Debian-based image and all your problems will be gone.
Alpine doesn't work well for Python applications.
Here is an example of my Dockerfile:
FROM python:3.7.6-buster
RUN pip install pandas==1.0.0
RUN pip install sklearn
RUN pip install Django==3.0.2
RUN pip install cx_Oracle==7.3.0
RUN pip install excel
RUN pip install djangorestframework==3.11.0
The python:3.7.6-buster image is more appropriate in this case; in addition, you don't need any extra OS dependencies.
A useful and recent article on this: https://pythonspeed.com/articles/alpine-docker-python/:
Don’t use Alpine Linux for Python images
Unless you want massively slower build times, larger images, more work, and the potential for obscure bugs, you’ll want to avoid Alpine Linux as a base image. For some recommendations on what you should use, see my article on choosing a good base image.
Just going to bring some of these answers together in one answer and add a detail I think was missed. The reason certain Python libraries, particularly optimized math and data libraries, take so long to build on Alpine is because the pip wheels for these libraries include binaries precompiled from C/C++ and linked against gnu-libc (glibc), a common set of C standard libraries. Debian, Fedora and CentOS all (typically) use glibc, but Alpine, in order to stay lightweight, uses musl-libc instead. C/C++ binaries built on a glibc system will not work on a system without glibc, and the same goes for musl.
Pip looks first for a wheel with the correct binaries; if it can't find one, it tries to compile the binaries from the C/C++ source and link them against musl. In many cases this won't even work unless you have the Python headers from python3-dev or build tools like make.
Now the silver lining: as others have mentioned, there are apk packages with the proper binaries provided by the community, and using these will save you the (sometimes lengthy) process of building the binaries.
You can, in fact, install from a pure Python .whl on Alpine, but, at the time of this writing, manylinux did not support binary distributions for Alpine due to the musl/gnu issue.
Update Oct 2022
Newer versions of Python/pip support musl via the musllinux wheel tag, which, I assume, is the musl counterpart of manylinux. Still no official 'musl' support for CUDA though.
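As a hedged aside, you can watch this tag matching happen: pip debug (marked experimental by pip) prints the wheel tags an interpreter will accept, and the image names below are just examples:
# A Debian-based image advertises manylinux* tags, while an Alpine image with a
# recent pip advertises musllinux* tags, which is why the same wheel file is
# accepted on one and rejected on the other.
docker run --rm python:3.10-slim pip debug --verbose | grep manylinux | head -n 3
docker run --rm python:3.10-alpine pip debug --verbose | grep musllinux | head -n 3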
ATTENTION
Look at the @jtlz2 answer with the latest update
OUTDATED
So, py3-pandas & py3-numpy packages moved to the testing alpine repository, so, you can download it by adding these lines in to the your Dockerfile:
RUN echo "http://dl-8.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
&& apk update \
&& apk add py3-numpy py3-pandas
Hope it helps someone!
Alpine packages links:
- py3-pandas
- py3-numpy
Alpine repositories docs info.
In this case Alpine may not be the best solution; change alpine to slim:
FROM python:3.8.3-alpine
Change to that:
FROM python:3.8.3-slim
In my case it was resolved with this small change.
This worked for me:
FROM python:3.8-alpine
RUN echo "#testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas#testing
ENV PYTHONPATH=/usr/lib/python3.8/site-packages
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5003
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
Most of the code here is from the answer of jtlz2 from this same thread and Faylixe from another thread.
It turns out the lighter, prebuilt version of pandas is found in the Alpine repository (py3-pandas), but it doesn't get installed in the path from which Python reads imports by default. Therefore you need to add the ENV line. Also be mindful of the Alpine version.
I have solved the installation with some additional changes:
Requirements
Migrate from python3.8-alpine to python3.10-alpine:
docker pull python:3.10-alpine
Important!
I had to migrate because when I was installing py3-pandas, it installed the package for Python 3.10, not for the required version that I was using (Python 3.8).
To figure out where the libraries of a package were installed, you can check that with the following command:
apk info -L py3-pandas
Do not install the backports.zoneinfo package from Python 3.9 onward (I had to add a condition in requirements.txt so the package is only installed on versions lower than 3.9):
backports.zoneinfo==0.2.1;python_version<"3.9"
Installation
After the previous changes, I proceeded to install pandas by performing the following steps:
Add 3 additional repositories to /etc/apk/repositories (the repositories can vary based on the version of your distribution), reference here:
for x in $(echo "main community testing"); \
do echo "https://dl-cdn.alpinelinux.org/alpine/edge/${x}" >> /etc/apk/repositories; \
done
Validate the content of the file /etc/apk/repositories:
$ cat /etc/apk/repositories
https://dl-cdn.alpinelinux.org/alpine/v3.16/main
https://dl-cdn.alpinelinux.org/alpine/v3.16/community
https://dl-cdn.alpinelinux.org/alpine/edge/main
https://dl-cdn.alpinelinux.org/alpine/edge/community
https://dl-cdn.alpinelinux.org/alpine/edge/testing
Install pandas (numpy is installed automatically as a dependency of pandas):
sudo apk update && sudo apk add py3-pandas
Set the environment variable PYTHONPATH:
export PYTHONPATH=/usr/lib/python3.10/site-packages/
Validate that the packages can be imported (in my case I tested it with Django):
python manage.py shell
import pandas as pd
import numpy as np
technologies = ['Spark','Pandas','Java','Python', 'PHP']
fee = [25000,20000,15000,15000,18000]
duration = ['5o Days','35 Days',np.nan,'30 Days', '30 Days']
discount = [2000,1000,800,500,800]
columns=['Courses','Fee','Duration','Discount']
df = pd.DataFrame(list(zip(technologies,fee,duration,discount)), columns=columns)
print(df)
pandas is considered a community supported package, so the answers pointing to edge/testing are not going to work as Alpine does not officially support pandas as a core package (it still works, it's just not supported by the core Alpine developers).
Try this Dockerfile:
FROM python:3.8-alpine
RUN echo "#community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories \
&& apk add py3-pandas#community
ENV PYTHONPATH="/usr/lib/python3.8/site-packages"
This works for the vanilla Alpine image too, using FROM alpine:3.12.
Update: thanks to @cegprakash for raising the question about how to work with this setup when you also have a requirements.txt file that must be satisfied inside the container.
I added one line to the Dockerfile snippet to export the PYTHONPATH variable into the container runtime. If you do this, it won't matter whether pandas or numpy are included in the requirements file or not (provided they are pegged to the same version that was installed via apk).
The reason this is needed is that apk installs the py3-pandas@community package under /usr/lib, but that location is not on the default PYTHONPATH that pip checks before installing new packages. If we don't include this step to add it, pip and python will not find the package, and pip will try to download and install it under /usr/local, which is what we're trying to avoid.
And given that we really want to make sure that pip doesn't try to install pandas, I would suggest to not include pandas or numpy in the requirements.txt file if you've already installed them with apk using the above method. It's just a little extra insurance that things will go as intended.
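As a quick, optional sanity check (a sketch, not part of the original answer), a line like this can be added to such a Dockerfile to confirm Python is resolving the apk-installed copy under /usr/lib rather than anything pip placed under /usr/local; the exact path printed depends on the Python version:
# should print the version and a path under /usr/lib/python3.8/site-packages
RUN python3 -c "import pandas, numpy; print(pandas.__version__, pandas.__file__)"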
The following Dockerfile worked for me to install pandas, among other dependencies as listed below.
python:3.10-alpine Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10-alpine as base
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc g++ libc-dev linux-headers postgresql-dev build-base \
&& apk add libffi-dev
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt
pyproject.toml dependencies
python = "^3.10"
Django = "^3.2.9"
djangorestframework = "^3.12.4"
PyYAML = ">=5.3.0,<6.0.0"
Markdown = "^3.3.6"
uritemplate = "^4.1.1"
install = "^1.3.5"
drf-spectacular = "^0.21.0"
django-extensions = "^3.1.5"
django-filter = "^21.1"
django-cors-headers = "^3.10.1"
httpx = "^0.22.0"
channels = "^3.0.4"
daphne = "^3.0.2"
whitenoise = "^6.2.0"
djoser = "^2.1.0"
channels-redis = "^3.4.0"
pika = "^1.2.1"
backoff = "^2.1.2"
psycopg2-binary = "^2.9.3"
pandas = "^1.5.0"
Alpine takes a lot of time to install pandas, and the image size is also huge. I tried the python:3.8-slim-buster version of the Python base image. The image build was very fast, and the image size was less than half that of the Alpine-based Python Docker image.
https://github.com/dguyhasnoname/k8s-cluster-checker/blob/master/Dockerfile
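For reference, a minimal sketch of the slim-based variant described above (it assumes the pandas version being installed publishes a prebuilt manylinux wheel for this Python version, so no compilers are needed):
FROM python:3.8-slim-buster
# On Debian-based images pip resolves the prebuilt manylinux wheel,
# so this step downloads and unpacks instead of compiling from source.
RUN pip install --no-cache-dir pandas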