AWS CodeBuild build stage shows nginx was installed but the container doesn't have nginx

I'm new to AWS CodeBuild and I'm trying to deploy my Docker application to EKS. In my Dockerfile I run RUN apk add nginx, and the build stage shows that it is being installed:
Step 6/24 : RUN apk add nginx
---> Running in cbd7a4e37bb7
(1/2) Installing pcre (8.44-r0)
(2/2) Installing nginx (1.18.0-r15)
Executing nginx-1.18.0-r15.pre-install
Executing nginx-1.18.0-r15.post-install
Executing busybox-1.32.1-r6.trigger
OK: 15 MiB in 34 packages
Removing intermediate container cbd7a4e37bb7
---> a890efd77e97
After the BUILD stage completes, I exec into my EKS pod, but I don't see an nginx directory under /etc:
/etc # ls
ImageMagick-7 fstab issue my.cnf.d periodic securetty sysctl.d
alpine-release group logrotate.d mysql php7 services terminfo
apk group- modprobe.d network pkcs11 shadow udhcpd.conf
ca-certificates hostname modules openldap profile shadow- wgetrc
ca-certificates.conf hosts modules-load.d opt profile.d shells
conf.d init.d motd os-release protocols ssh
crontabs inittab mtab passwd resolv.conf ssl
fonts inputrc my.cnf passwd- rsyncd.conf sysctl.conf
I manually executed apk add nginx inside my pod, and only then did nginx appear under /etc.
My AWS CodeBuild project uses the following configuration:
Managed image
OS: Amazon Linux 2
Runtime: Standard
Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
Image version: Always use the latest image for this runtime version
Privileged: Enabled
Also, I noticed that CodeBuild doesn't seem to apply all of the COPY commands in my Dockerfile:
COPY docker/php/php.ini /usr/local/etc/php/ (this works)
COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf (this doesn't, probably because nginx isn't installed in the running container)
COPY docker/nginx/docker-entrypoint.sh /usr/local/bin/docker-entrypoint-d9 (this doesn't work either, which is still a mystery)
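To help narrow this down, here is roughly how the discrepancy could be checked (just a sketch; <pod-name> is a placeholder, and REPOSITORY_URI/IMAGE_TAG are the values from the buildspec below):
# Does the image that was pushed actually contain nginx?
# (override the entrypoint so a shell runs instead of docker-entrypoint.sh)
docker run --rm --entrypoint sh $REPOSITORY_URI:$IMAGE_TAG -c 'ls /etc/nginx && nginx -v'
# Which image tag is the pod really running? Compare it with the tag that was just pushed.
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].image}'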
I hope I was able to provide the full details. Please let me know if you need more information. Thank you.
buildspec.yml
version: 0.2

env:
  variables:
    PHP_VERSION: 7.4
    APP_HOME: /var/www/html

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...test4
      - aws --version
      - docker --version
      - echo $AWS_DEFAULT_REGION
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - export KUBECONFIG=$HOME/.kube/config
      - export PHP_VERSION=$PHP_VERSION
      - REPOSITORY_URI=xxxxxxx.xxx.xxx.xx-xxxxxxx-x.amazonaws.com/development-ecr
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=$CODEBUILD_BUILD_NUMBER
      # - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo "PHP Version is $PHP_VERSION"
      - echo "$APP_HOME"
      - echo Running Docker compose..
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build --build-arg PHP_VERSION=${PHP_VERSION} -t $REPOSITORY_URI:latest test/
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo "Setting Environment Variables related to AWS CLI for Kube Config Setup"
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      # Setup kubectl with our EKS Cluster
      - echo "Update Kube Config"
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      # Apply changes to our Application using kubectl
      - echo "Delete current deployment"
      - kubectl delete deployment dev-spectrum-app
      - sleep 10
      - echo "Apply changes to kube manifests"
      - kubectl apply -f kube-yaml/
      - echo "Completed applying changes to Kubernetes Objects"
      # pipeline for other stages
      - echo Writing image definitions file...
      - printf '[{"name":"dev-spectrum","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - cat imagedefinitions.json

artifacts:
  files: imagedefinitions.json
Dockerfile
ARG PHP_VERSION=7.4
FROM php:${PHP_VERSION:+${PHP_VERSION}-}fpm-alpine
ARG APP_HOME
RUN echo "CHECKING ARGS: $APP_HOME ${PHP_VERSION}"
RUN apk update; \
apk upgrade;
RUN apk add nginx
# Install library extension
RUN apk add libpng libpng-dev libjpeg-turbo libjpeg-turbo-dev freetype-dev libwebp-dev zlib-dev libxpm-dev
# Install MySql
RUN docker-php-ext-install mysqli pdo pdo_mysql
RUN docker-php-ext-install opcache
RUN docker-php-ext-configure gd --with-jpeg --with-freetype \
&& docker-php-ext-install gd
# PHP packages
RUN apk add libressl \
ca-certificates \
openssh-client \
rsync \
git \
curl \
wget \
gzip \
tar \
patch \
perl \
pcre \
imap \
imagemagick \
mariadb-client \
build-base \
autoconf \
libtool \
php7-dev \
pcre-dev \
imagemagick-dev \
php7 \
php7-fpm \
php7-opcache \
php7-session \
php7-dom \
php7-xml \
php7-xmlreader \
php7-ctype \
php7-ftp \
php7-json \
php7-posix \
php7-curl \
php7-pdo \
php7-pdo_mysql \
php7-sockets \
php7-zlib \
php7-mcrypt \
php7-mysqli \
php7-sqlite3 \
php7-bz2 \
php7-phar \
php7-openssl \
php7-posix \
php7-zip \
php7-calendar \
php7-iconv \
php7-imap \
php7-soap \
php7-dev \
php7-pear \
php7-redis \
php7-mbstring \
php7-xdebug \
php7-exif \
php7-xsl \
php7-ldap \
php7-bcmath \
php7-memcached \
php7-oauth \
php7-apcu
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php && \
mv composer.phar /usr/local/bin/composer && \
ln -s /root/.composer/vendor/bin/drush /usr/local/bin/drush
# Install Drush
RUN composer global require drush/drush && \
composer global update
# Copy php.ini memory limit increase
COPY docker/php/php.ini /usr/local/etc/php/
COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
COPY docker/nginx/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
#COPY docker/nginx/docker-entrypoint.sh /usr/local/bin/docker-entrypoint-d9
RUN echo ${APP_HOME} ${PHP_VERSION}
WORKDIR /var/www/html
RUN mkdir /run/nginx;
#Copy composer files
COPY composer.json composer.lock ./
RUN composer install --prefer-dist --no-interaction
RUN composer dump-autoload
RUN set -eux; \
chmod +x /usr/local/bin/docker-entrypoint.sh
# chmod +x /usr/local/bin/docker-entrypoint-d9
EXPOSE 80 433
CMD ["nginx"]
#RUN nginx
#RUN php-fpm
ENTRYPOINT ["docker-entrypoint.sh"]

Related

How to run Selenium side runner in GitLab CI?

I'm currently evaluating Selenium in combination with GitLab CI as a testing tool for our website. This is my current .gitlab-ci.yml:
variables:
  GIT_STRATEGY: clone
  GIT_DEPTH: 0
stages:
  - tests
test:
  stage: tests
  image: node:latest
  tags:
    - linux
  before_script:
    - apt-get update
    - apt-get install -y chromium
    - npm install -g selenium-side-runner
    - npm install -g chromedriver
  script:
    - selenium-side-runner My-UI-Test.side
I'm getting the following error:
FAIL ./DefaultSuite.test.js
● Test suite failed to run
WebDriverError: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/chromium is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
at Object.throwDecodedError (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/error.js:550:15)
at parseHttpResponse (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:560:13)
at Executor.execute (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:486:26)
I've searched for the error message DevToolsActivePort file doesn't exist and it seems that Chrome doesn't like to be run with root privileges. A lot of answers suggest using the --no-sandbox or --disable-dev-shm-usage flags. But those are Chrome flags, and since I'm not calling Chrome directly, I can't use them. The website in question is also deployed from a different project, so I have no code to work with. The only files I can change are My-UI-Test.side and .side.yaml.
I have a separate project for my e2e tests, into which I've added a Dockerfile, my Selenium .side file, and a config file (.side.yml). This project uses the GitLab container registry to publish the project as an image that can then be used directly in GitLab CI.
Here are my files for the e2e test project:
package.json
...
"scripts": {
  "test": "selenium-side-runner test.side"
},
...
"dependencies": {
  "selenium-side-runner": "^3.17.0",
  "chromedriver": "^101.0.0"
}
These are the options I am using; you might want to adjust a few things here and there, but the capabilities are pretty much what you want.
I've also added the baseUrl key to this file instead of putting it directly into the package.json, because I use the same image for several environments with changing URLs and replace the URL in my before_script whenever needed. (I left this out of the CI file below, as your use case probably differs; a rough sketch of that replacement follows the file.)
.side.yml
capabilities:
  browserName: "chrome"
  goog:chromeOptions:
    binary: /usr/bin/google-chrome-stable
    args:
      - no-sandbox
      - disable-dev-shm-usage
      - headless
      - nogpu
output-directory: results
output-format: junit
baseUrl: <baseURL>
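As an illustration, the replacement I described above could be done in the before_script roughly like this (just a sketch; TEST_BASE_URL stands for whatever CI/CD variable holds the environment's URL):
before_script:
  # Hypothetical step: point baseUrl at the environment under test.
  - sed -i "s|^baseUrl:.*|baseUrl: ${TEST_BASE_URL}|" .side.yml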
The Dockerfile might include a few unnecessary dependencies; you can likely remove a lot of them. Many of them are just copied over from my Puppeteer Dockerfile, which uses the google-chrome-stable binary very similarly. Downloading the fonts alongside google-chrome-stable might not be needed in your case either, so just adjust it to your needs.
Dockerfile
FROM node:14
RUN apt update
RUN apt install -y \
rsync \
grsync \
gnupg \
ca-certificates \
fonts-liberation \
libappindicator3-1 \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libc6 \
libcairo2 \
libcups2 \
libdbus-1-3 \
libexpat1 \
libfontconfig1 \
libgbm1 \
libgcc1 \
libglib2.0-0 \
libgtk-3-0 \
libnspr4 \
libnss3 \
libpango-1.0-0 \
libpangocairo-1.0-0 \
libstdc++6 \
libx11-6 \
libx11-xcb1 \
libxcb1 \
libxcomposite1 \
libxcursor1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxi6 \
libxrandr2 \
libxrender1 \
libxss1 \
libxtst6 \
lsb-release \
wget \
xdg-utils
RUN apt-get update \
&& apt-get install -y wget gnupg \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf libxss1 \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package.json /app
COPY .side.yml /app
COPY test.side /app
COPY results /app
RUN npm i -g chromedriver --unsafe-perm
RUN npm i -g selenium-side-runner --unsafe-perm
RUN npm install
And here is how I include it in my GitLab CI.
gitlab-ci.yml:
e2e:
  stage: test
  image: <image-of-above-project>:1.0
  variables:
    GIT_STRATEGY: none
  script:
    - cat .side.yml
    - npm run test
If you need more information on the container registry, see https://docs.gitlab.com/ee/user/packages/container_registry/

Extracting ZAP report running in container on Jenkins agent (docker based)

My setup is as follows:
A Jenkins pipeline script triggers a Jenkins job which runs inside a Docker container.
ZAP runs in containerized mode.
Commands used:
echo DEBUG - mkdir -p $PWD/out
mkdir -p $PWD/out
echo DEBUG - chmod 777 $PWD/out
chmod 777 $PWD/out
test -d ${PWD}/out \
&& docker run -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
Also tried: docker run --user $(id -u):$(id -g) -v $(pwd)/out:/zap/wrk/:rw -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html
The scan works fine, but the report is not in the "out" directory.
This works fine in a VM environment.
Any suggestions? I guess the volume mount is not working when the agent itself runs in a Docker container.
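One workaround I am considering (a rough sketch; the container name zap-scan is arbitrary, and this is not what the pipeline does today): skip the bind mount, run the scan in a named container, and copy the report out of /zap/wrk afterwards.
# Run the scan in a named container; zap-api-scan.py exits non-zero on findings, so don't abort on that.
docker run --name zap-scan -t owasp/zap2docker-live zap-api-scan.py -t $TARGET_URL -f openapi -d -r zap_scan_report.html || true
# Copy the report from the container's work directory into the agent's workspace, then clean up.
docker cp zap-scan:/zap/wrk/zap_scan_report.html "$PWD/out/"
docker rm zap-scan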

Creating Splunk universal forwarder using Alpine base image

I am trying to create a Splunk universal forwarder image using the alpine:3.8 base image.
FROM alpine:3.8
ENV SPLUNK_PRODUCT universalforwarder
ENV SPLUNK_VERSION 6.3.1
ENV SPLUNK_BUILD f3e41e4b37b2
ENV SPLUNK_FILENAME splunkforwarder-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz
ENV SPLUNK_SERVER_HOST testapp:9997
ENV SPLUNK_HOME /opt/splunk
ENV SPLUNK_GROUP splunk
ENV SPLUNK_USER splunk
ENV SPLUNK_BACKUP_DEFAULT_ETC /var/opt/splunk
ENV SPLUNK_INDEX test
ENV FORWARD_HOSTNAME InstanceId
# Here we install GNU libc (aka glibc) and set C.UTF-8 locale as default.
RUN apk --no-cache add ca-certificates wget \
&& wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub \
&& wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk \
&& apk add glibc-2.28-r0.apk \
&& rm -rf /var/lib/apt/lists/*
# add splunk:splunk user
RUN addgroup --system ${SPLUNK_GROUP} \
&& adduser --system --ingroup ${SPLUNK_GROUP} ${SPLUNK_USER}
# Download official Splunk release, verify checksum and unzip in /opt/splunk
# Also backup etc folder, so it will be later copied to the linked volume
RUN apk add sudo curl \
&& mkdir -p ${SPLUNK_HOME} \
&& curl -o /tmp/${SPLUNK_FILENAME} https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME} \
&& curl -o /tmp/${SPLUNK_FILENAME}.md5 https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME}.md5 \
&& tar xzf /tmp/${SPLUNK_FILENAME} --strip 1 -C ${SPLUNK_HOME} \
&& rm /tmp/${SPLUNK_FILENAME} \
&& rm /tmp/${SPLUNK_FILENAME}.md5 \
&& mkdir -p /var/opt/splunk \
&& cp -R ${SPLUNK_HOME}/etc ${SPLUNK_BACKUP_DEFAULT_ETC} \
&& rm -fR ${SPLUNK_HOME}/etc \
&& chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_HOME} \
&& chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_BACKUP_DEFAULT_ETC} \
&& rm -rf /var/lib/apt/lists/*
COPY ./config /tmp/splunk
COPY patch-entrypoint.sh /sbin/entrypoint.sh
RUN chmod +x /sbin/entrypoint.sh
# Ports Splunk Daemon, Network Input, HTTP Event Collector
EXPOSE 8089/tcp 1514 8088/tcp
WORKDIR /opt/splunk
# Configurations folder, var folder for everything (indexes, logs, kvstore)
VOLUME [ "/opt/splunk/etc", "/opt/splunk/var" ]
ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["start-service"]
Now I am facing a couple of issues here:
When I run /opt/splunkforwarder/bin/splunk start --accept-license, I get /opt/splunkforwarder/bin/splunk: not found.
I am using a custom output.conf file; it's in the config folder:
[tcpout]
defaultGroup = abc
disabled = false
[tcpout:abc]
server = _OUTPUT_SERVERS_
autoLB = true
compressed = false
useACK = true
sendCookedData = true
entrypoint.sh is the script I use to replace the environment variable in output.conf and restart Splunk, but the restart is not working either. How can I fix this?
AFAIK, alpine:3.8 doesn't ship with glibc, which Splunk requires. Is it possible that this is causing issues? Have you tried a different base image?
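To check both things quickly, something like this should work (a rough sketch; the tag splunk-uf:test is just an example, and the entrypoint is overridden to get a shell):
docker run --rm --entrypoint sh splunk-uf:test -c '
  # The Dockerfile extracts the tarball into SPLUNK_HOME=/opt/splunk,
  # so the /opt/splunkforwarder path from the error message probably does not exist:
  ls -d /opt/splunk/bin /opt/splunkforwarder
  # If the path is right but the glibc dynamic linker is missing, executing the
  # binary also reports "not found" on Alpine; ldd shows what it links against:
  ldd /opt/splunk/bin/splunk
'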

apache2 service status in Docker container

My Dockerfile as follow:
FROM php:7.2-apache
#install some basic tools
RUN apt-get -dd clean && apt-get -dd update && apt-get install -y \
git \
tree \
vim \
wget \
iputils-ping \
mysql-client \
subversion
#install some base extensions
RUN apt-get install -y \
libzip-dev \
libicu-dev \
zip \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-configure intl \
&& docker-php-ext-install zip intl opcache pdo_mysql mysqli
#setup composer
RUN curl -sS https://getcomposer.org/installer | php \
&& mv composer.phar /usr/local/bin/ \
&& ln -s /usr/local/bin/composer.phar /usr/local/bin/composer
WORKDIR /var/www/app
EXPOSE 80
RUN a2enmod rewrite
After I compose the above image with MySQL, I start the server, e.g.:
docker-compose up -d
And access the container by:
docker exec -it php_web_1 bash
Then I check the apache2 service status:
service apache2 status
[FAIL] apache2 is not running ... failed!
If I just run the command apache2:
httpd (pid 1) already running
service apache2 start/stop does not have any effect on the apache2 status.
What is the difference between the two ways, and why is service apache2 start/stop not working?
If you look at the Dockerfile for the php:7.2-apache base image, you will see CMD ["apache2-foreground"], which runs a script located in the /usr/local/bin/ directory to start the Apache server on container startup. If you open an interactive session with the base image (overriding that CMD) and run SysVInit commands like service apache2 start, that will start the Apache service inside the container, where it was otherwise stopped for that session.
In your case, try running that script from /usr/local/bin/ as the CMD in your Dockerfile and re-run docker-compose up -d to see whether Apache is started.
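To verify this, you can check from the host what the running container is actually executing (php_web_1 is the container name from your docker exec example):
# apache2 started by apache2-foreground should show up here with PID 1.
docker top php_web_1
# Confirm which command the container was started with.
docker inspect --format '{{.Config.Cmd}}' php_web_1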

Docker Container from php:5.6-apache as root

This would be related to Docker php:5.6-Apache Development Environment missing permissions on volume mount
I have tried pretty much everything to make the mounted volume readable by www-data. My current workaround is to use scripts to move the folders the application needs to /var and give them the proper permissions so they are writable by www-data, but that is becoming hard to maintain.
Given that it's a development environment, I don't mind the security hole, so I would like to run Apache as root, but I get:
Error: Apache has not been designed to serve pages while running as
root. There are known race conditions that will allow any local user
to read any file on the system. If you still desire to serve pages as
root then add -DBIG_SECURITY_HOLE to the CFLAGS line in your
src/Configuration file and rebuild the server. It is strongly
suggested that you instead modify the User directive in your
httpd.conf file to list a non-root user.
Is there any easy way I can accomplish this using the docker image php:5.6-apache?
This is my docker-compose.yml
version: '2'
services:
  api:
    container_name: api
    privileged: true
    build:
      context: .
      dockerfile: apigility/Dockerfile
    ports:
      - "2020:80"
    volumes:
      - /ft/code/api:/var/www:rw
And this is my Dockerfile:
FROM php:5.6-apache
USER root
RUN apt-get update \
&& apt-get install -y sudo openjdk-7-jdk \
&& echo "www-data ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN apt-get install -y git zlib1g-dev libmcrypt-dev nano vim --no-install-recommends \
&& apt-get clean \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-install mcrypt zip \
&& curl -sS https://getcomposer.org/installer \
| php -- --install-dir=/usr/local/bin --filename=composer \
&& a2enmod rewrite \
&& sed -i 's!/var/www/html!/var/www/public!g' /etc/apache2/apache2.conf \
&& echo "AllowEncodedSlashes On" >> /etc/apache2/apache2.conf \
&& cp /usr/src/php/php.ini-production /usr/local/etc/php/php.ini \
&& printf '[Date]\ndate.timezone=UTC' > /usr/local/etc/php/conf.d/timezone.ini
WORKDIR /var/www
Why not do exactly what it says in the question you referred to?
RUN usermod -u 1000 www-data
RUN groupmod -g 1000 www-data
This is not a hack. It's a proper solution to the problem you have in the development environment.
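A variant of the same idea (just a sketch, not required): pass the host UID/GID in as build arguments so the remapping matches whoever builds the image. HOST_UID and HOST_GID are names I made up for this example.
# Hypothetical build args; default to 1000 when not provided.
ARG HOST_UID=1000
ARG HOST_GID=1000
RUN usermod -u ${HOST_UID} www-data \
 && groupmod -g ${HOST_GID} www-data
Build it with something like docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) followed by your build context.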
So, I managed to make the mounted data available to www-data by using part of the answer in the related post, but another step is required for it to work.
After you run docker-machine start default, you need to SSH into it and run the following:
sudo mkdir --parents /code [where /code is the shared folder in VirtualBox]
sudo mount -t vboxsf -o uid=999,gid=999 code /code [this makes sure the uid and gid are 999 so the next part works]
Then in your Dockerfile add
RUN usermod -u 999 www-data \
&& groupmod -g 999 www-data
After it's mounted, /code will be owned by www-data, and the problem is solved!
Another, better solution: add this to your Dockerfile:
RUN cd ~ \
&& apt-get -y install dpkg-dev debhelper libaprutil1-dev libapr1-dev libpcre3-dev liblua5.1-0-dev autotools-dev \
&& apt-get source apache2.2-common \
&& cd apache2-2.4.10 \
&& export DEB_CFLAGS_SET="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -DBIG_SECURITY_HOLE" \
&& dpkg-buildpackage -b \
&& cd .. \
&& dpkg -i apache2-bin_2.4.10-10+deb8u7_amd64.deb \
&& dpkg -i apache2.2-common_2.4.10-10+deb8u7_amd64.deb
After that, you should be able to run Apache as root.
PS: apache2-2.4.10, apache2-bin_2.4.10-10+deb8u7_amd64.deb and apache2.2-common_2.4.10-10+deb8u7_amd64.deb may differ depending on your source packages.