Cannot locate docker build output of multistage build inside CodeBuild - aws-codebuild

We're using an aws/codebuild/standard:5.0 CodeBuild image to build our own Docker images. I have a buildspec that calls docker build against our Dockerfile and pushes to ECR. The Dockerfile uses Microsoft dotnet base images and calls dotnet publish to build our binaries. This all works fine.
We then added a build stage to our Dockerfile to run unit tests (using dotnet test), and we followed the "FROM scratch" advice combined with docker build --output to try and pull unit test result files out of the multi-stage target:
docker build --target export-test-results -f ./Dockerfile --output type=local,dest=out .
This works fine locally (an out dir is created containing the files), but when I run this in CodeBuild I cannot find the output anywhere (the command succeeds, but I've no idea where it's going). I've added ls commands everywhere and cannot locate the out dir, so of course my artifacts step has nothing to archive.
Question is: where is the output being created inside the CodeBuild instance?
My (abbreviated) Dockerfile:
ARG VERSION=3.1-alpine3.13
FROM mcr.microsoft.com/dotnet/aspnet:$VERSION AS base
WORKDIR /usr/local/bin
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS source
#Using pattern here to bypass need for recursive copy from local src folder: https://github.com/moby/moby/issues/15858#issuecomment-614157331
WORKDIR /usr/local
COPY . ./src
RUN mkdir ./proj && \
    cd ./src && \
    find . -type f -a \( -iname "*.sln" -o -iname "*.csproj" -o -iname "*.dcproj" \) -exec cp --parents "{}" ../proj/ \;
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS projectfiles
# Copy only the project files with correct directory structure
# then restore packages - this will mean that "restore" will be saved in a layer of its own
COPY --from=source /usr/local/proj /usr/local/src
FROM projectfiles AS restore
WORKDIR /usr/local/src/Postie
RUN dotnet restore --verbosity minimal -s https://api.nuget.org/v3/index.json Postie.sln
FROM restore AS unittests
#Copy all the source files
COPY --from=source /usr/local/src /usr/local/src
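# Note: "|| true" below lets the build continue past failing tests so the
# test results file can still be exported from the final stage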
RUN cd Postie.Domain.UnitTests && \
    dotnet test --no-restore --logger:nunit --verbosity normal || true
FROM scratch AS export-test-results
COPY --from=unittests /usr/local/src/Postie/Postie.Domain.UnitTests/TestResults/TestResults.xml ./Postie.Domain.UnitTests.TestResults.xml
My (abbreviated) buildspec:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY_SERVER
  build:
    commands:
      - export IMAGE_TAG=:$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7).$CODEBUILD_BUILD_NUMBER
      - export JENKINS_TAG=:$(echo $JENKINS_VERSION_NUMBER | tr '+' '-')
      - echo Build started on `date` with version $IMAGE_TAG
      - cd ./Src/
      - echo Testing the Docker image...
      # see the following for why we use the --output option
      # https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs
      - docker build --target export-test-results -t ${DOCKER_REGISTRY_SERVER}/postie.api${IMAGE_TAG} -f ./Postie/Postie.Api/Dockerfile --output type=local,dest=out .
artifacts:
  files:
    - '**/*'
  name: builds/$JENKINS_VERSION_NUMBER/artifacts
(I should note that the "artifacts" step above is actually archiving my entire source tree to S3, both to prove that the upload is working and to try to find the "out" dir - but it's nowhere to be found.)

I know this is old, but just in case anyone else stumbles across this one: you need to add the Docker BuildKit variable to the CodeBuild environment, otherwise the files will not be exported.
version: 0.2
... etc
phases:
  build:
    commands:
      ... etc
      - echo Testing the Docker image...
      - export DOCKER_BUILDKIT=1
      - docker build --target export-test-results ... etc
... etc
If you want to see more detailed build output along with this, you can also add
- export BUILDKIT_PROGRESS=plain
- export PROGRESS_NO_TRUNC=1
below the BuildKit variable.
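Alternatively, a minimal sketch (assuming the standard CodeBuild buildspec syntax) that sets BuildKit once in the buildspec env block instead of exporting it in a build command:
version: 0.2
env:
  variables:
    # enables BuildKit for every docker command in the build
    DOCKER_BUILDKIT: "1"
phases:
  build:
    commands:
      - docker build --target export-test-results ... etc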


buildah can't seem to handle `npm install`: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Operation not supported

Situation
I get the following error when I try to build a container image with buildah.
[1/2] STEP 7/8: RUN npm install
error running container: error from crun creating container for [/bin/sh -c npm install]: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Operation not supported
Environment/steps
- I have installed buildah in an Ubuntu container image called tools-image
- I run this tools-image container on macOS
- I use Docker to run the tools-image container
- I start the tools-image container with:
docker run -it --privileged --name demo -v "$(pwd)":/localmachine "myname/myname:v1" /bin/bash
- Inside the tools-image I use buildah to build an example application container image:
buildah bud -t test:v1 -f Dockerfile .
This is the Dockerfile for the example application container image, used with the buildah bud command:
##############################
# BUILD
##############################
FROM docker.io/node:17-alpine AS BUILD
COPY src /usr/src/app/src
COPY public /usr/src/app/public
COPY package.json /usr/src/app/
COPY babel.config.js /usr/src/app/
WORKDIR /usr/src/app/
RUN npm install
RUN npm run build
##############################
# EXAMPLE
##############################
# https://blog.openshift.com/deploy-vuejs-applications-on-openshift/
FROM docker.io/nginx:1.21.4-alpine
RUN apk update && \
    apk upgrade && \
    apk add --update coreutils
# Add a user who will have the rights to change the files in /code
RUN addgroup -g 1500 nginxusers
RUN adduser --disabled-password -u 1501 nginxuser nginxusers
# Configure nginx server
COPY nginx-os4-webapp.conf /etc/nginx/nginx.conf
WORKDIR /code
COPY --from=BUILD /usr/src/app/dist .
# https://zingzai.medium.com/externalise-and-configure-frontend-environment-variables-on-kubernetes-e8e798285b3e
# Configure web-app for environment variable usage
WORKDIR /
COPY docker_entrypoint.sh .
COPY generate_env-config.sh .
RUN chown nginxuser:nginxusers docker_entrypoint.sh
RUN chown nginxuser:nginxusers generate_env-config.sh
RUN chmod 777 docker_entrypoint.sh generate_env-config.sh
RUN chown -R nginxuser:nginxusers /code
RUN chown -R nginxuser:nginxusers /etc/nginx
RUN chown -R nginxuser:nginxusers /tmp
RUN chmod 777 /code
RUN chmod 777 /tmp
RUN chmod 777 /etc/nginx
USER nginxuser
EXPOSE 8080
CMD ["/bin/sh","docker_entrypoint.sh"]
Error when I execute the build:
[1/2] STEP 1/8: FROM docker.io/node:12-alpine AS BUILD
[1/2] STEP 2/8: COPY src /usr/src/app/src
--> d6601e0d631
[1/2] STEP 3/8: COPY public /usr/src/app/public
--> febd88b92b3
[1/2] STEP 4/8: COPY package.json /usr/src/app/
--> 26675130145
[1/2] STEP 5/8: COPY babel.config.js /usr/src/app/
--> 1006f1e8cf3
[1/2] STEP 6/8: WORKDIR /usr/src/app/
--> af1b28ef62c
[1/2] STEP 7/8: RUN npm install
error running container: error from crun creating container for [/bin/sh -c npm install]: writing file `/sys/fs/cgroup/cgroup.subtree_control`: Operation not supported
: exit status 1
[2/2] STEP 1/22: FROM docker.io/nginx:1.21.4-alpine
Trying to pull docker.io/library/nginx:1.21.4-alpine...
error building at STEP "RUN npm install": error while running runtime: exit status 1
It worked with podman!
Steps that solved the problem for me:
- I installed podman on macOS
- I built the tools-image with podman
- I started the tools-image with the following command:
podman run -it --rm --privileged --name demo "tools-image:v1"
- I cloned the code for the example application into the running tools-image container
- I ran buildah with the following command:
buildah bud -t test:v1 -f Dockerfile .
Result:
[2/2] COMMIT test:v1
Getting image source signatures
Copying blob 1a058d5342cc [--------------] 0.0b / 0.0b
Copying blob ad93babfd60c [--------------] 0.0b / 0.0b
Copying blob 5af959103b90 [--------------] 0.0b / 0.0b
Copying blob 385374b911f2 [--------------] 0.0b / 0.0b
Copying blob eabae5075c43 [--------------] 0.0b / 0.0b
Copying blob 3d71b657b020 [--------------] 0.0b / 0.0b
Copying blob 57627a47445a done
Copying config 204e250881 [========] 10.6KiB / 10.6KiB
Writing manifest to image destination
Storing signatures
--> 204e250881d
Successfully tagged localhost/test:v1
204e250881d44984be77c4abfef100880bda165b3d195606880fcad026b57003
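As a footnote for anyone who has to stay on Docker: a workaround often suggested for running buildah inside a Docker container (an untested sketch here, assuming a reasonably recent buildah) is chroot isolation, which skips the cgroup configuration that crun fails on above:
# run inside the tools-image container
export BUILDAH_ISOLATION=chroot
buildah bud -t test:v1 -f Dockerfile .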

GraphDB Docker Container Fails to Run: adoptopenjdk/openjdk12:alpine

When using the standard Dockerfile available here, GraphDB fails to start with the following output:
Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME
Looking into it, the Dockerfile uses adoptopenjdk/openjdk11:alpine, which was recently updated to Alpine 3.14.
If I switch to an older Docker image (or use adoptopenjdk/openjdk12:alpine) then GraphDB starts without a problem.
How can I fix this while still using the latest version of adoptopenjdk/openjdk11:alpine?
Below is the Dockerfile:
FROM adoptopenjdk/openjdk11:alpine
# Build time arguments
ARG version=9.1.1
ARG edition=ee
ENV GRAPHDB_PARENT_DIR=/opt/graphdb
ENV GRAPHDB_HOME=${GRAPHDB_PARENT_DIR}/home
ENV GRAPHDB_INSTALL_DIR=${GRAPHDB_PARENT_DIR}/dist
WORKDIR /tmp
RUN apk add --no-cache bash curl util-linux procps net-tools busybox-extras wget less && \
    curl -fsSL "http://maven.ontotext.com/content/groups/all-onto/com/ontotext/graphdb/graphdb-${edition}/${version}/graphdb-${edition}-${version}-dist.zip" > \
        graphdb-${edition}-${version}.zip && \
    bash -c 'md5sum -c - <<<"$(curl -fsSL http://maven.ontotext.com/content/groups/all-onto/com/ontotext/graphdb/graphdb-${edition}/${version}/graphdb-${edition}-${version}-dist.zip.md5) graphdb-${edition}-${version}.zip"' && \
    mkdir -p ${GRAPHDB_PARENT_DIR} && \
    cd ${GRAPHDB_PARENT_DIR} && \
    unzip /tmp/graphdb-${edition}-${version}.zip && \
    rm /tmp/graphdb-${edition}-${version}.zip && \
    mv graphdb-${edition}-${version} dist && \
    mkdir -p ${GRAPHDB_HOME}
ENV PATH=${GRAPHDB_INSTALL_DIR}/bin:$PATH
CMD ["-Dgraphdb.home=/opt/graphdb/home"]
ENTRYPOINT ["/opt/graphdb/dist/bin/graphdb"]
EXPOSE 7200
The issue comes from an update in the base image. A few weeks ago AdoptOpenJDK switched to Alpine 3.14, which has some issues with older container runtimes (runc). The issue can be seen in the release notes: https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.14.0
Updating your Docker will fix the issue. However, if you don't wish to update your Docker, there's a workaround.
Some additional info:
The cause of the issue is that, for some reason, containers running on older Docker versions with Alpine 3.14 seem to have issues with the test flag "-x", so an if [ -x /opt/java/openjdk/bin/java ] returns false although java is there and is executable.
You can work around this for now as follows:
1. Pull the GraphDB distribution
2. Unzip it
3. Open "setvars.in.sh" in the bin folder
4. Find and remove the if block around line 32:
if [ ! -x "$JAVA" ]; then
    echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
    exit 1
fi
5. Zip it again and provide it in the Dockerfile without pulling it from maven.ontotext.com; passing it to the Dockerfile is done with 'ADD'
You can check the GraphDB free edition's Dockerfile for a reference on how to pass the zip file to the Dockerfile: https://github.com/Ontotext-AD/graphdb-docker/blob/master/free-edition/Dockerfile
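For illustration, a minimal sketch of what the patched fetch step might look like (the local file name graphdb-ee-9.1.1-dist.zip is hypothetical, and since ADD does not auto-extract zip archives the unzip step stays):
FROM adoptopenjdk/openjdk11:alpine
ENV GRAPHDB_PARENT_DIR=/opt/graphdb
# hypothetical local file: the distribution you unzipped, patched and re-zipped
ADD graphdb-ee-9.1.1-dist.zip /tmp/graphdb.zip
RUN apk add --no-cache bash unzip && \
    mkdir -p ${GRAPHDB_PARENT_DIR} && \
    cd ${GRAPHDB_PARENT_DIR} && \
    unzip /tmp/graphdb.zip && \
    rm /tmp/graphdb.zip && \
    mv graphdb-ee-9.1.1 dist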

How to run a script from file in another project using include in GitLab CI?

I'm trying to run a shell script from a template file located in another project, pulled in via include.
How should this be configured to work? The scripts below are simplified versions of my code.
Project A
template.yml
deploy:
  before_script:
    - chmod +x ./.run.sh
    - source ./.run.sh
Project B
gitlab-ci.yml
include:
  - project: 'project-a'
    ref: master
    file: '/template.yml'
stages:
  - deploy
Clearly, the commands are actually being run from Project B and not Project A, where the template resides. This can be further confirmed by adding ls -a in the template file.
So how should we be calling run.sh? Both projects are on the same GitLab instance under different groups.
If you have access to project A and B, you can use multi-project pipelines. You trigger a pipeline in project A from project B.
In project A, you clone project B and run your script.
Project B
job 1:
  variables:
    PROJECT_PATH: "$CI_PROJECT_PATH"
    RELEASE_BRANCH: "$CI_COMMIT_BRANCH"
  trigger:
    project: project-a
    strategy: depend
Project A
job 2:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline" && $PROJECT_PATH && $RELEASE_BRANCH'
  script:
    - git clone -b "${RELEASE_BRANCH}" --depth 50 https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/${PROJECT_PATH}.git $(basename ${PROJECT_PATH})
    - cd $(basename ${PROJECT_PATH})
    - chmod +x ../.run.sh
    - source ../.run.sh
We've also run into this problem, and kinda wish GitLab allowed includes to "import" non-YAML files. Nevertheless, the simplest workaround we've found is to build a small Docker image in repo A which contains the script you want to run, and then have repo B's job use that Docker image as its image, so the file run.sh is available :)
Minimal Dockerfile:
FROM bash:latest
COPY run.sh /usr/local/bin/
CMD run.sh
(Note: make sure you chmod +x run.sh before building your image, or add a RUN chmod +x /usr/local/bin/run.sh step)
Then, you'd just add this to your Project B's .gitlab-ci.yml:
stages:
  - deploy
deploy:
  image: registry.gitlab.com/... # Wherever you pushed your docker image to
  script: run.sh
It's also possible to fetch a script with curl instead of copying a whole repository:
- curl -H "PRIVATE-TOKEN:$PRIVATE_TOKEN" --create-dirs "$CI_API_V4_URL/projects/$CI_DEPLOY_PROJECT_ID/repository/archive?path=pathToFolderWithScripts" -o $TEMP_DIR/archive.tar.gz
- tar zxvf $TEMP_DIR/archive.tar.gz -C $TEMP_DIR --strip-components 3
- bash $TEMP_DIR/run.sh
These commands:
- make a curl request to download an archive of the folder containing the scripts
- unzip the scripts into a temporary folder
- execute the script
Reference: https://docs.gitlab.com/ee/api/repository_files.html#get-file-from-repository
GET /projects/:id/repository/files/:file_path/raw
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master"
This will display the file. To download it instead, just redirect the output with >> as below:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master" >> file.extension
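Tying this back to the original question, a sketch of how Project B's job could fetch run.sh from Project A using the raw-file endpoint (the <project-a-id> placeholder and the file path are assumptions; $PRIVATE_TOKEN must be a token with read access to Project A):
deploy:
  script:
    # fetch run.sh from Project A via the raw-file API, then execute it
    - curl --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "$CI_API_V4_URL/projects/<project-a-id>/repository/files/run%2Esh/raw?ref=master" -o run.sh
    - chmod +x run.sh
    - ./run.sh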
As hinted by the answer above, multi-project pipelines are the right approach for this.
Here's how it worked for me:
GroupX/ProjectA - contains reusable code
# .gitlab-ci.yml
stages:
  - deploy
reusable_deploy_job:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline"' # run only if triggered by a pipeline
  script:
    - bash ./src/run.sh $UPSTREAM_CUSTOM_VARIABLE
GroupY/ProjectB - job that will reuse the code
# .gitlab-ci.yml
stages:
  - deploy
deploy_job:
  stage: deploy
  variables:
    UPSTREAM_CUSTOM_VARIABLE: CUSTOM_VARIABLE # pass this variable to downstream job
  trigger: groupx/projecta

Create default files for conan without install

I'm creating a Docker image as a build environment where I can mount a project and build it. The build uses cmake and conan. The Dockerfile of this image:
FROM alpine:3.9
RUN ["apk", "add", "--no-cache", "gcc", "g++", "make", "cmake", "python3", "python3-dev", "linux-headers", "musl-dev"]
RUN ["pip3", "install", "--upgrade", "pip"]
RUN ["pip3", "install", "conan"]
WORKDIR /project
Files like
~/.conan/profiles/default
are created only after I call
conan install ..
so these files are created in the container and not in the image. The default behavior of conan is to set
compiler.libcxx=libstdc++
I'd like to run something like
RUN ["sed", "-i", "s/compiler.libcxx=libstdc++/compiler.libcxx=libstdc++11/", "~/.conan/profiles/default"]
to change the libcxx value, but this file does not exist at that point. The only way I found to get conan to create the default profile would be to install something.
Currently I'm running this container with
docker run --rm -v $(dirname $(realpath $0))/project:/project build-environment /bin/sh -c "\
    rm -rf build && \
    mkdir build && \
    cd build && \
    conan install -s compiler.libcxx=libstdc++11 .. --build missing && \
    cmake .. && \
    cmake --build . ; \
    chown -R $(id -u):$(id -u) /project/build \
"
but I need to remove -s compiler.libcxx=libstdc++11, as this setting should depend on the image and not be fixed by the build script.
Is there a way to initialize conan inside the image and edit the configuration without installing something? Currently I'm planning to write the whole configuration myself, but that seems like too much when I only want to use the default configuration and change a single line.
You can also create an image from a running container. Try installing conan in a running container and then create an image from it. As conan is installed in the running container, the image will have all its dependencies. To create that image you can follow this link:
https://docs.docker.com/engine/reference/commandline/commit/
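That said, if the image carries a reasonably recent Conan 1.x, it can also generate and edit the default profile explicitly without installing anything. A sketch, assuming the conan profile subcommands are available in your version:
# create ~/.conan/profiles/default from auto-detected settings,
# then switch libcxx to the C++11 ABI
RUN conan profile new default --detect && \
    conan profile update settings.compiler.libcxx=libstdc++11 default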

Does drone.io support reusing a docker container for the build?

I have set up drone.io locally and created a .drone.yml for the CI build. But I found that drone removes the docker container after finishing the build. Does it support reusing the docker container? I am working on a gradle project, and the initial build takes a long time to download the java dependencies.
UPDATE1
I used the command below to set the admin user when running the drone-server container.
docker run -d \
  -e DRONE_GITHUB=true \
  -e DRONE_GITHUB_CLIENT="xxxx" \
  -e DRONE_GITHUB_SECRET="xxxx" \
  -e DRONE_SECRET="xxxx" \
  -e DRONE_OPEN=true \
  -e DRONE_DATABASE_DRIVER=mysql \
  -e DRONE_DATABASE_DATASOURCE="root:root@tcp(mysql:3306)/drone?parseTime=true" \
  -e DRONE_ADMIN="joeyzhao0113" \
  --restart=always \
  --name=drone-server \
  --link=mysql \
  drone/drone:0.5
After doing this, I log in to the drone server as user joeyzhao0113 but fail to enable the Trusted flag on the settings page. The popup dialog reports that the setting was saved successfully (see the screenshot below), but the flag always remains disabled.
No, it is not possible to re-use a Docker container for your Drone build. Build containers are ephemeral and are destroyed at the end of every build.
That being said, it doesn't mean your problem cannot be solved.
I think a better way to phrase this question would be "how do I prevent my builds from having to re-download dependencies"? There are two solutions to this problem.
Option 1, Cache Plugin
The first (recommended) solution is to use a plugin to cache and restore your dependencies. Cache plugins such as the volume cache and s3 cache are community-contributed plugins.
pipeline:
  # restores the cache from a local volume
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount: [ /drone/.gradle, /drone/.m2 ]
    volumes:
      - /tmp/cache:/cache

  build:
    image: maven
    environment:
      - M2_HOME=/drone/.m2
      - MAVEN_HOME=/drone/.m2
      - GRADLE_USER_HOME=/drone/.gradle
    commands:
      - mvn install
      - mvn package

  # rebuild the cache in case new dependencies were
  # downloaded during your build
  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    mount: [ /drone/.gradle, /drone/.m2 ]
    volumes:
      - /tmp/cache:/cache
Option 2, Custom Image
The second solution is to create a Docker image that already contains your dependencies, publish it to DockerHub, and use it as the build image in your .drone.yml file.
pipeline:
  build:
    image: some-image-with-all-my-dependencies
    commands:
      - mvn package
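For instance, a minimal sketch of such an image for the Maven example above (the base image tag and project layout are assumptions):
FROM maven:3-jdk-8
WORKDIR /build
# copy only the build descriptor so this layer is cached until it changes
COPY pom.xml .
# resolve and cache all dependencies inside the image layer
RUN mvn -B dependency:go-offline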