Deploying a Symfony 4 Application to AWS Elastic Beanstalk - Apache

I have a working Symfony 4.0.1 application running on PHP 7.1.14 (locally) that I would like to deploy to AWS Elastic Beanstalk using the EB CLI.
I have a dist package of the application on my master git branch configured for production (vendor folder removed, etc.) that I am able to deploy successfully to Heroku. Now I need to deploy to AWS EB.
The AWS EB environment has already been set up (although I don't have access to the console). Some environment details are as follows:
Platform: arn:aws:elasticbeanstalk:us-east-2::platform/Tomcat 8 with Java 8 running on 64bit Amazon Linux/2.7.7
Tier: WebServer-Standard-1.0
At first, I was able to successfully deploy the application, but accessing the URL gave a 404 error for every page.
I did some googling and found a few articles describing the use of .config files. I have added one named 03_main.config with the following contents:
commands:
  300-composer-update:
    command: "export COMPOSER_HOME=/root && composer.phar self-update -n"
container_commands:
  300-run-composer:
    command: "composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"
  600-update-cache:
    command: "source .ebextensions/bin/update-cache.sh"
  700-remove-dev-app:
    command: "rm web/app_dev.php"
Deploying with this .config file gives the following deployment failure error:
ERROR: [Instance: i-0c5f61f41d55a18bc] Command failed on instance. Return code: 127 Output: /bin/sh: composer.phar: command not found. command 300-composer-update in .ebextensions/03-main.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I understand the purpose of .config files but do not understand what additional configuration is needed to get this Symfony app running.

I guess you should use the full path to composer, like below:
100-update-composer:
  command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n
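For reference, here is a minimal sketch of a corrected .ebextensions file along those lines (the /usr/bin/composer.phar path is typical of Elastic Beanstalk PHP platforms, but verify it on the instance with which composer.phar):
commands:
  100-composer-self-update:
    command: "export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n"
container_commands:
  300-run-composer:
    # Same install flags as in the question, with the full path to composer
    command: "/usr/bin/composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"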


Azure DevOps self-hosted agent, newman command not recognized

Trying to run my Postman collection in Azure DevOps on a self-hosted agent. When I run the command "newman run postman_collection.json -e postman_environment.json -r cli,htmlextra" directly on the agent, it runs fine. But when I run the same through a command line script task in the release pipeline, it throws the error "newman is not recognized..". I also tried adding an npm task to install newman, i.e. "npm install -g newman", but it throws the error "##[error]Unable to locate executable file: 'newman'. Please verify either the file path exists or the file can be found within a d...."
According to the error message "##[error]Unable to locate executable file: 'newman'" when using npm install -g newman, you could try adding C:\Users\[BUILDSERVER-USERNAME]\AppData\Roaming\npm to the PATH variable for the [BUILDSERVER-USERNAME] user.
You could refer to the document How to fix the Newman task for Team Foundation Server silently failing for more details.
Besides, when we use the command line to install newman, it takes a few minutes, so we need to wait a few minutes before running:
"newman run postman_collection.json -e postman_environment.json -r cli,htmlextra"
You could add a PowerShell task to sleep for a few minutes:
echo "Sleeping for 10 mins..."
Start-Sleep -s 600
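As a rough sketch, assuming a YAML-based pipeline, the install step and the PATH fix could be combined like this (the npm folder is an assumption; check it with npm config get prefix on the agent):
steps:
- powershell: |
    npm install -g newman
    # Assumption: global npm binaries land in %APPDATA%\npm on this agent;
    # prepend that folder to PATH so later tasks can find newman
    Write-Host "##vso[task.prependpath]$env:APPDATA\npm"
  displayName: Install newman
- script: newman run postman_collection.json -e postman_environment.json -r cli,htmlextra
  displayName: Run Postman collection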

How to build a container serving a Vue SPA using Cloud Native Buildpacks

Currently I'm trying to build a container serving a VueJS application via Cloud Native Buildpacks.
I already have a working Dockerfile that builds the VueJS app in production mode and then copies the result into an nginx image, but I would like to try CNB.
So I have just created an empty VueJS project for testing via vue create vue-tutorial and am trying to do something like what is described at https://cli.vuejs.org/guide/deployment.html#heroku, but using CNB.
Does anyone know a working recipe for doing that with CNB?
P.S. Currently I'm trying to build it with
pack build spa --path . \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --buildpack gcr.io/paketo-buildpacks/nginx
but I'm getting the following error (and I'm not sure that I'm on the right track):
===> DETECTING
ERROR: No buildpack groups passed detection.
ERROR: Please check that you are running against the correct path.
ERROR: failed to detect: no buildpacks participating
ERROR: failed to build: executing lifecycle: failed with status code: 100
UPD
My current Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:1.19-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
We chatted about this in Slack, but I wanted to capture it here too:
pack build --buildpack heroku/nodejs --buildpack https://cnb-shim.herokuapp.com/v1/heroku-community/static yourimage
This command may do what you want. The static buildpack used in that example is not yet converted to a Cloud Native Buildpack, but the shim may allow you to build a workable artifact. Then run your image with something like:
docker run -it -e PORT=5000 -p 5000:5000 yourimagename
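One likely missing piece, going by the Vue CLI guide linked in the question: the heroku-community/static buildpack reads a static.json file from the project root, so you would point it at the compiled output. A minimal sketch (the clean_urls flag is optional):
{
  "root": "dist",
  "clean_urls": true
}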

Docker Toolbox - OpenShift Origin - OCI runtime create failed

I successfully installed Docker Toolbox (Docker version 18.03.0-ce, build 0520e24302) on my Win10 PC.
I've downloaded the official "openshift/origin-release" image from Docker Hub using the command "docker pull openshift/origin-release".
Then I executed this in the Docker shell:
docker run openshift/origin-release start
And I'm unable to start OpenShift, as I receive this error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"start\": executable file not found in $PATH": unknown.
The instructions on Docker Hub (https://hub.docker.com/r/openshift/origin-release/)
say: "If you have downloaded the client tools from the releases page, place the included binaries in your PATH."
How can I do that?
Thanks in advance for any advice.
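For what it's worth, on Windows that usually means appending the folder you unpacked the client tools into to the user PATH, either through System Properties > Environment Variables or from a shell, for example (the folder name below is a placeholder, a new shell is needed to pick up the change, and note that setx truncates values longer than 1024 characters):
setx PATH "%PATH%;C:\openshift-client\bin"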

How to publish Docker images to Docker Hub from gitlab-ci

GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, select .gitlab-ci.yml and docker). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest
services:
  - docker:dind
before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this publishes to GitLab's registry. How can we publish to Docker Hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry URL
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in GitLab settings
I wanted to publish gableroux/unity3d images using gitlab-ci, so here's what I used in GitLab's project > Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>, so the registry URL needs to be updated: use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it verbose, so I kept the full registry URL.
This answer should also cover other registries; just update the environment variables accordingly. 🙌
To expand on GabLeRoux's answer,
I had issues at the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index. prefix) I was able to push successfully.
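Putting the two answers together, a variable set that should work for pushing to Docker Hub looks like this (the username, password, and image name are placeholders):
CI_REGISTRY_USER=<username>
CI_REGISTRY_PASSWORD=<password-or-access-token>
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/<username>/<project>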

NuGet CA certificates error in an ASP.NET Core Docker container

I am trying to build a Docker container for an ASP.NET Core application, and I get errors while trying to retrieve NuGet packages:
docker build -t my:container .
Sending build context to Docker daemon 9.58 MB
Step 1 : FROM microsoft/dotnet:latest
---> 3693707d4f7f
Step 2 : COPY . /app
---> Using cache
---> 22a461236738
Step 3 : WORKDIR /app
---> Using cache
---> 8bea2af489ad
Step 4 : RUN dotnet restore
---> Running in 5fbfe078c820
log : Restoring packages for /app/project.json...
error: Unable to load the service index for source https://api.nuget.org/v3/index.json.
error: An error occurred while sending the request.
error: Peer certificate cannot be authenticated with given CA certificates
The command 'dotnet restore' returned a non-zero code: 1
The Dockerfile I am using is a pretty standard one, based on the microsoft/dotnet:latest image.
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
EXPOSE 9881/tcp
ENV ASPNETCORE_URLS http://*:9881
ENTRYPOINT ["dotnet", "run"]
This used to work a while ago; something seems to have broken, but I have no idea what that would be.
The problem was that I had been using an experimental 1.12.3 build of Docker.
Everything works great now that I have installed the latest official version on Windows (still 1.12.3, but not marked as experimental).
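For anyone hitting the same thing, it's worth checking whether the daemon is an experimental build before reinstalling; docker version reports this:
docker version
# look for "Experimental: true" in the Server section of the output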