buildah can't handle `npm install`: writing file "`/sys/fs/cgroup/cgroup.subtree_control`: Operation not supported"

Situation
I get the following error when I try to build a container image with buildah:
[1/2] STEP 7/8: RUN npm install
error running container: error from crun creating container for [/bin/sh -c npm install]: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Operation not supported
Environment/steps
I have installed buildah in an Ubuntu container image called tools-image
I run this tools-image container on macOS
I use Docker to run the tools-image container
I start the tools-image container with
docker run -it --privileged --name demo -v "$(pwd)":/localmachine "myname/myname:v1" /bin/bash
Inside the tools-image I use buildah to build an example application container image
buildah bud -t test:v1 -f Dockerfile .
This is the Dockerfile for the example application container image that I build with the buildah bud command:
##############################
# BUILD
##############################
FROM docker.io/node:17-alpine as BUILD
COPY src /usr/src/app/src
COPY public /usr/src/app/public
COPY package.json /usr/src/app/
COPY babel.config.js /usr/src/app/
WORKDIR /usr/src/app/
RUN npm install
RUN npm run build
##############################
# EXAMPLE
##############################
# https://blog.openshift.com/deploy-vuejs-applications-on-openshift/
FROM docker.io/nginx:1.21.4-alpine
RUN apk update && \
    apk upgrade && \
    apk add --update coreutils
# Add a user who will have the rights to change the files in /code
RUN addgroup -g 1500 nginxusers
RUN adduser --disabled-password -u 1501 nginxuser nginxusers
# Configure nginx server
COPY nginx-os4-webapp.conf /etc/nginx/nginx.conf
WORKDIR /code
COPY --from=BUILD /usr/src/app/dist .
# https://zingzai.medium.com/externalise-and-configure-frontend-environment-variables-on-kubernetes-e8e798285b3e
# Configure web-app for environment variable usage
WORKDIR /
COPY docker_entrypoint.sh .
COPY generate_env-config.sh .
RUN chown nginxuser:nginxusers docker_entrypoint.sh
RUN chown nginxuser:nginxusers generate_env-config.sh
RUN chmod 777 docker_entrypoint.sh generate_env-config.sh
RUN chown -R nginxuser:nginxusers /code
RUN chown -R nginxuser:nginxusers /etc/nginx
RUN chown -R nginxuser:nginxusers /tmp
RUN chmod 777 /code
RUN chmod 777 /tmp
RUN chmod 777 /etc/nginx
USER nginxuser
EXPOSE 8080
CMD ["/bin/sh","docker_entrypoint.sh"]
Error when I execute the build:
[1/2] STEP 1/8: FROM docker.io/node:12-alpine AS BUILD
[1/2] STEP 2/8: COPY src /usr/src/app/src
--> d6601e0d631
[1/2] STEP 3/8: COPY public /usr/src/app/public
--> febd88b92b3
[1/2] STEP 4/8: COPY package.json /usr/src/app/
--> 26675130145
[1/2] STEP 5/8: COPY babel.config.js /usr/src/app/
--> 1006f1e8cf3
[1/2] STEP 6/8: WORKDIR /usr/src/app/
--> af1b28ef62c
[1/2] STEP 7/8: RUN npm install
error running container: error from crun creating container for [/bin/sh -c npm install]: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Operation not supported
: exit status 1
[2/2] STEP 1/22: FROM docker.io/nginx:1.21.4-alpine
Trying to pull docker.io/library/nginx:1.21.4-alpine...
error building at STEP "RUN npm install": error while running runtime: exit status 1
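For what it's worth, the error points at cgroup v2 delegation inside the nested container. A quick way to check which cgroup hierarchy the tools-image container actually sees (assuming GNU coreutils is available, as in an Ubuntu-based image):

```shell
# Print the filesystem type mounted at /sys/fs/cgroup:
# "cgroup2fs" means the unified cgroup v2 hierarchy, "tmpfs" means legacy v1.
stat -fc %T /sys/fs/cgroup/
```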

It worked with podman!
Steps which solved the problem for me:
I installed podman on macOS
I built the tools-image with podman
I started the tools-image with the following command
podman run -it --rm --privileged --name demo "tools-image:v1"
I cloned the code for the example application into the running tools-image container
I ran buildah with the following command
buildah bud -t test:v1 -f Dockerfile .
Result
It worked with podman!
[2/2] COMMIT test:v1
Getting image source signatures
Copying blob 1a058d5342cc [--------------] 0.0b / 0.0b
Copying blob ad93babfd60c [--------------] 0.0b / 0.0b
Copying blob 5af959103b90 [--------------] 0.0b / 0.0b
Copying blob 385374b911f2 [--------------] 0.0b / 0.0b
Copying blob eabae5075c43 [--------------] 0.0b / 0.0b
Copying blob 3d71b657b020 [--------------] 0.0b / 0.0b
Copying blob 57627a47445a done
Copying config 204e250881 [========] 10.6KiB / 10.6KiB
Writing manifest to image destination
Storing signatures
--> 204e250881d
Successfully tagged localhost/test:v1
204e250881d44984be77c4abfef100880bda165b3d195606880fcad026b57003

Related

singularity returns a permission denied

I would like to build a singularity container for an application shipped via AppImage. To do so, I wrote the following def file:
Bootstrap: docker
From: debian:bullseye-slim
%post
apt-get update -y
apt-get install -y wget unzip fuse libglu1 libglib2.0-dev libharfbuzz-dev libsm6 dbus
cd /opt
wget https://www.ill.eu/fileadmin/user_upload/ILL/3_Users/Instruments/Instruments_list/00_-_DIFFRACTION/D3/Mag2Pol/Mag2Pol_v5.0.2.AppImage
chmod u+x Mag2Pol_v5.0.2.AppImage
%runscript
exec /opt/Mag2Pol_v5.0.2.AppImage
I build the container using the singularity build -f test.sif test.def command. The build runs OK, but when running the sif file using ./test.sif I get a /.singularity.d/runscript: 3: exec: /opt/Mag2Pol_v5.0.2.AppImage: Permission denied error. Looking inside the container using singularity shell shows that the /opt/Mag2Pol_v5.0.2.AppImage executable belongs to root. I guess that is the source of the problem, but I do not know how to solve it. Do you have any ideas?
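One observation on the question above: the chmod u+x in %post grants execute permission only to the file's owner (root), which matches the "Permission denied" seen when an ordinary user runs the container. A stand-in demonstration of the difference (using a temp file in place of the Mag2Pol AppImage):

```shell
# Stand-in demo: u+x gives execute to the owner only; a+x to everyone.
touch /tmp/demo.AppImage
chmod 644 /tmp/demo.AppImage      # normalize mode, independent of umask
chmod u+x /tmp/demo.AppImage
stat -c %A /tmp/demo.AppImage     # -rwxr--r--  (owner only)
chmod a+x /tmp/demo.AppImage
stat -c %A /tmp/demo.AppImage     # -rwxr-xr-x  (all users)
```

So changing the %post line to chmod a+x may be one way forward, since the runscript executes the AppImage as the invoking (non-root) user.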

Cannot locate docker build output of multistage build inside CodeBuild

We're using an aws/codebuild/standard:5.0 CodeBuild image to build our own Docker images. I have a buildspec that calls docker build against our Dockerfile and pushes to ECR. The Dockerfile uses Microsoft dotnet base images and calls dotnet publish to build our binaries. This all works fine.
We then added a build stage to our Dockerfile to run unit tests (using dotnet test), following the "FROM scratch" advice combined with docker build --output to try to pull unit-test result files out of the multi-stage target:
docker build --target export-test-results -f ./Dockerfile --output type=local,dest=out .
This works fine locally (an out dir is created containing the files), but when I run this in CodeBuild I cannot find where the output goes (the command succeeds, but I've no idea where it lands). I've added ls commands everywhere and cannot locate the out dir, so of course my artifacts step has nothing to archive.
Question is: where is the output being created inside the CodeBuild instance?
My (abbreviated) Dockerfile
ARG VERSION=3.1-alpine3.13
FROM mcr.microsoft.com/dotnet/aspnet:$VERSION AS base
WORKDIR /usr/local/bin
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS source
#Using pattern here to bypass need for recursive copy from local src folder: https://github.com/moby/moby/issues/15858#issuecomment-614157331
WORKDIR /usr/local
COPY . ./src
RUN mkdir ./proj && \
cd ./src && \
find . -type f -a \( -iname "*.sln" -o -iname "*.csproj" -o -iname "*.dcproj" \) -exec cp --parents "{}" ../proj/ \;
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS projectfiles
# Copy only the project files with correct directory structure
# then restore packages - this will mean that "restore" will be saved in a layer of its own
COPY --from=source /usr/local/proj /usr/local/src
FROM projectfiles AS restore
WORKDIR /usr/local/src/Postie
RUN dotnet restore --verbosity minimal -s https://api.nuget.org/v3/index.json Postie.sln
FROM restore AS unittests
#Copy all the source files
COPY --from=source /usr/local/src /usr/local/src
RUN cd Postie.Domain.UnitTests && \
dotnet test --no-restore --logger:nunit --verbosity normal || true
FROM scratch as export-test-results
COPY --from=unittests /usr/local/src/Postie/Postie.Domain.UnitTests/TestResults/TestResults.xml ./Postie.Domain.UnitTests.TestResults.xml
My (abbreviated) Buildspec:
version: 0.2
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY_SERVER
build:
commands:
- export IMAGE_TAG=:$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7).$CODEBUILD_BUILD_NUMBER
- export JENKINS_TAG=:$(echo $JENKINS_VERSION_NUMBER | tr '+' '-')
- echo Build started on `date` with version $IMAGE_TAG
- cd ./Src/
- echo Testing the Docker image...
#see the following for why we use the --output option
#https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs
- docker build --target export-test-results -t ${DOCKER_REGISTRY_SERVER}/postie.api${IMAGE_TAG} -f ./Postie/Postie.Api/Dockerfile --output type=local,dest=out .
artifacts:
files:
- '**/*'
name: builds/$JENKINS_VERSION_NUMBER/artifacts
(I should note that the "artifacts" step above is actually archiving my entire source tree to S3 so that I can prove that the upload is working and also so that I can try to find the "out" dir - but it's not to be found)
I know this is old, but just in case anyone else stumbles across this one: you need to add the Docker BuildKit variable to the CodeBuild environment, otherwise the files will not get exported.
version: 0.2
... etc
phases:
build:
commands:
... etc
- echo Testing the Docker image...
- export DOCKER_BUILDKIT=1
- docker build --target export-test-results ... etc
... etc
If you want to display more output along with this you can also add
- export BUILDKIT_PROGRESS=plain
- export PROGRESS_NO_TRUNC=1
under the buildkit variable.

Modify a line before starting the container

I used the following command to build a docker image
docker build -t shantanuo/mydash .
And the dockerfile is:
FROM continuumio/miniconda3
EXPOSE 8050
RUN cd /tmp/
RUN apt-get update
RUN apt-get install --yes git zip vim
RUN git clone https://github.com/kanishkan91/SuperTrendfor50Stocks.git
RUN pip install -r SuperTrendfor50Stocks/requirements.txt
WORKDIR SuperTrendfor50Stocks/
I can start the container, modify the application file and then start the app.
Step 1:
docker run -p 8050:8050 -it shantanuo/mydash bash
Step 2:
vi application.py
Change the last line from
application.run_server(debug=True)
to
application.run(host='0.0.0.0')
Step 3:
python application.py
Can I avoid these 3 steps and merge everything in my dockerfile?
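Taking the question literally, the manual edit in Step 2 could be scripted, e.g. with sed in a RUN instruction (a sketch; the file name and the two lines come from the steps above, and a stand-in file is created here so the snippet is self-contained):

```shell
# Sketch: rewrite the run line of application.py at image-build time
# instead of editing it by hand in the running container.
echo "application.run_server(debug=True)" > application.py
sed -i "s/application.run_server(debug=True)/application.run(host='0.0.0.0')/" application.py
cat application.py   # application.run(host='0.0.0.0')
```

That said, driving the behaviour from an environment variable avoids baking a code edit into the image.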
Changing a line of code and then starting the application manually is not a good approach. It is better to make the code generic so that it modifies the application's behaviour based on environment variables.
You can try
# read host and debug settings from the environment, with defaults
import os
application.run(host=os.getenv('HOST', '127.0.0.1'), debug=os.getenv('DEBUG', False))
Now you can change that behaviour base on ENV.
web:
build: ./web
environment:
- HOST=0.0.0.0
- DEBUG=True
or
docker run -p 8050:8050 -e HOST="0.0.0.0" -e DEBUG=True -it shantanuo/mydash
You also need to set CMD in the Dockerfile
CMD python application.py
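One caveat with the env-var approach worth flagging: os.getenv always returns a string, so a value like DEBUG=False is still truthy in Python. A quick demonstration from the shell (assuming python3 is on the PATH):

```shell
# bool("False") is True in Python, so DEBUG=False via the environment
# still enables debug mode unless the value is parsed explicitly.
DEBUG=False python3 -c 'import os; print(bool(os.getenv("DEBUG", False)))'   # prints: True
```

Comparing the string explicitly, e.g. os.getenv('DEBUG', '') == 'True', sidesteps this.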

Create default files for conan without install

I'm creating a Docker image as a build environment where I can mount a project and build it. For the build I use cmake and conan. The Dockerfile of this image:
FROM alpine:3.9
RUN ["apk", "add", "--no-cache", "gcc", "g++", "make", "cmake", "python3", "python3-dev", "linux-headers", "musl-dev"]
RUN ["pip3", "install", "--upgrade", "pip"]
RUN ["pip3", "install", "conan"]
WORKDIR /project
Files like
~/.conan/profiles/default
are only created after I call
conan install ..
so these files end up in the container and not in the image. The default behavior of conan is to set
compiler.libcxx=libstdc++
I'd like to run something like
RUN ["sed", "-i", "s/compiler.libcxx=libstdc++/compiler.libcxx=libstdc++11/", "~/.conan/profiles/default"]
to change the libcxx value, but this file does not exist at that point. The only way I found to make conan create the default profile is to install something.
Currently I'm running this container with
docker run --rm -v $(dirname $(realpath $0))/project:/project build-environment /bin/sh -c "\
rm -rf build && \
mkdir build && \
cd build && \
conan install -s compiler.libcxx=libstdc++11 .. --build missing && \
cmake .. && \
cmake --build . ; \
chown -R $(id -u):$(id -u) /project/build \
"
but I need to remove -s compiler.libcxx=libstdc++11, as this setting should depend on the image and not be fixed by the build script.
Is there a way to initialize conan inside the image and edit the configuration without installing something? Currently I'm planning to write the whole configuration myself, but that seems like too much, since I want the default configuration with only one line changed.
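For reference, conan 1.x (the version installed via pip here) can generate the default profile without installing anything via conan profile new default --detect; after that, the sed from the question works as a RUN step. A sketch of the sed step (the profile file is created by hand here so the snippet is self-contained):

```shell
# Stand-in for the detected profile, then the one-line libcxx fix.
mkdir -p ~/.conan/profiles
echo 'compiler.libcxx=libstdc++' > ~/.conan/profiles/default
sed -i 's/compiler.libcxx=libstdc++$/compiler.libcxx=libstdc++11/' ~/.conan/profiles/default
cat ~/.conan/profiles/default    # compiler.libcxx=libstdc++11
```

Note the $ anchor in the pattern, which keeps the sed idempotent if it runs more than once.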
You can also create an image from a running container. Try installing conan in a running container and then creating an image from it; that image will then carry exactly the dependencies conan needs. To create the image you can follow this link:
https://docs.docker.com/engine/reference/commandline/commit/

Running apache on docker with Dockerfile in windows boot2docker

I created the Dockerfile below and tried to build it, but the build stops at Step 7 with the issue below.
FROM fedora:20
RUN yum -y update; yum clean all
RUN yum -y install httpd; yum clean all
RUN mkdir -p /var/www/html
RUN mkdir -p /var/log/httpd
RUN mkdir -p /bin/httpd-run
# Create Apache test page
RUN echo "Apache set up successfully." > /var/www/html/index.html
# Copy apache run script
ADD httpd-run /bin/httpd-run
# Done
EXPOSE 80
CMD ["/bin/httpd-run"]
docker#boot2docker:~/c$ docker build --rm -t neroinc/frdora-apache .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM fedora:20
---> 6cece30db4f9
Step 1 : RUN yum -y update; yum clean all
---> Using cache
---> 31c60bf0f22d
Step 2 : RUN yum -y install httpd; yum clean all
---> Using cache
---> 6efbe9b41918
Step 3 : RUN mkdir -p /var/www/html
---> Using cache
---> acd918c77d60
Step 4 : RUN mkdir -p /var/log/httpd
---> Using cache
---> a9e069fbfb24
Step 5 : RUN mkdir -p /bin/httpd-run
---> Running in 71735c1e8b7e
---> c6dab827ab33
Removing intermediate container 71735c1e8b7e
Step 6 : RUN echo "Apache set up successfully." > /var/www/html/index.html
---> Running in 8f6f38bdd492
---> c4f21e9f64b7
Removing intermediate container 8f6f38bdd492
Step 7 : ADD httpd-run /bin/httpd-run
INFO[0001] httpd-run: no such file or directory
docker#boot2docker:~/c$
I'm new to writing code, so any help is appreciated. I want to add the httpd-run folder inside the boot2docker VM; I want to write a Dockerfile for Apache.
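Since the log stops at ADD httpd-run with "no such file or directory": ADD (and COPY) sources must exist inside the build context, i.e. next to the Dockerfile in ~/c in this session. A sketch of creating the missing script before rebuilding (the httpd path assumes Fedora's httpd package; the script name comes from the Dockerfile above):

```shell
# Create the httpd-run script in the build context so ADD can find it.
mkdir -p ~/c
cat > ~/c/httpd-run <<'EOF'
#!/bin/sh
# Run Apache in the foreground so the container stays up.
exec /usr/sbin/httpd -DFOREGROUND
EOF
chmod +x ~/c/httpd-run
```

With this, the `RUN mkdir -p /bin/httpd-run` line should be dropped, so that /bin/httpd-run ends up being the script itself rather than a directory, matching the CMD ["/bin/httpd-run"] at the end.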