Angular 5 - GAE flex environment deployment

We use Bitbucket Pipelines as the CI/CD to deploy our Angular 5 code to GAE, and we are ending up with the following exception.
This is the pipeline code:
image: node:9.11.1
pipelines:
  custom:
    default:
      - step:
          script:
            - npm install -g @angular/cli@latest
            - ng build --prod
            - cp app.yaml dist
            - ls dist
            - cd dist
            - curl -o /tmp/google-cloud-sdk.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-190.0.0-linux-x86_64.tar.gz
            - tar -xvf /tmp/google-cloud-sdk.tar.gz -C /tmp/
            - /tmp/google-cloud-sdk/install.sh -q
            - source /tmp/google-cloud-sdk/path.bash.inc
            - echo $GCLOUD_API_KEYFILE | base64 --decode --ignore-garbage > ./gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            - gcloud components install app-engine-java
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - echo $GCLOUD_API_KEYFILE > /tmp/client-secret.json
            - gcloud config set project $GCLOUD_PROJECT
            - gcloud app update --split-health-checks --project adtecy-ui
            - gcloud app deploy app.yaml
I am looking to use the node Docker image to deploy an Angular 5 (version 5.2.11) app to the GAE flex environment, but it takes an unusually long time and the status is still "In Progress" (I am not sure what the usual deploy time is).
This is my app.yaml file
env: flex
runtime: python
threadsafe: true
readiness_check:
  timeout_sec: 4
  check_interval_sec: 5
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 3600
I have set a very high timeout because the previous push failed with a timeout.
I believe GAE uses Python by default, and hence we did not install Python.
Right now the deployment has been running for about 20 minutes without reporting any result. Could you help me deploy my app to GAE in minimal time?
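As an aside on the runtime above: in the flexible environment the deployed service generally has to start a process listening on the port App Engine provides (8080 by default) so the readiness checks can pass, and a bare Angular dist folder does not start anything. Below is a minimal sketch of a Node.js flex variant, assuming a small server (for example Express) inside dist whose npm start script listens on that port; that server is an assumption, not part of the setup above.

# app.yaml, hypothetical Node.js flex variant (a sketch, not the configuration used above)
env: flex
runtime: nodejs
# the Node.js flex runtime runs "npm start"; the start script must listen on process.env.PORT
readiness_check:
  app_start_timeout_sec: 300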
EDIT:
Now we have got a result, after it ran for 33 minutes:
21df82f90a72: Layer already exists
aeb4b6656589: Pushed
latest: digest: sha256:c57d3178321c5f2721fc70cd00cb7862d469c74a6bf616ecfda760342c13af7e size: 3255
DONE
--------------------------------------------------------------------------------
Updating service [default] (this may take several minutes)...
.failed.
ERROR: (gcloud.app.deploy) Operation [apps/adtecy-ui/operations/9c273f87-91a3-495a-b75d-0d6c767dce97] timed out.
This operation may still be underway.

You can check the status of the deploy operation by running
gcloud app operations describe "apps/adtecy-ui/operations/9c273f87-91a3-495a-b75d-0d6c767dce97"

There seems to be an issue with running the app in the Node.js environment, so I switched to python27 and was able to deploy it successfully. But when I try to load the app served on GAE, it throws an error:
https://adtecy-ui.appspot.com
And here is my app.yaml (I made some modifications too):
runtime: python27
api_version: 1
threadsafe: true
handlers:
# Routing for bundles to serve directly
- url: /((?:inline|main|polyfills|styles|vendor)\.[a-z0-9]+\.bundle\.js)
  secure: always
  redirect_http_response_code: 301
  static_files: dist/\1
  upload: dist/.*
# Routing for a prod styles.bundle.css to serve directly
- url: /(styles\.[a-z0-9]+\.bundle\.css)
  secure: always
  redirect_http_response_code: 301
  static_files: dist/\1
  upload: dist/.*
# Routing for typedoc, assets and favicon.ico to serve directly
- url: /((?:assets|docs)/.*|favicon\.ico)
  secure: always
  redirect_http_response_code: 301
  static_files: dist/\1
  upload: dist/.*
# Any other requests are routed to index.html for Angular to handle, so we don't need hash URLs
- url: /.*
  secure: always
  redirect_http_response_code: 301
  static_files: dist/index.html
  upload: dist/index\.html

Related

How to deploy Blazor WebAssembly as static site in GitLab Pages

I can't find any guide on how to deploy a Blazor WebAssembly app to GitLab Pages as a static site. Has anyone managed to do so for .NET 6?
I have created a sample web assembly Blazor client application:
https://gitlab.com/sunnyatticsoftware/sasw-community/sasw-editor
The steps to create this simple WebAssembly app are:
Install .NET 6 SDK
Create repo and clone it (e.g: sasw-editor)
Create the solution with web assembly Blazor project
dotnet new gitignore
dotnet new blazorwasm --name Sasw.Editor.Web --output src/Sasw.Editor.Web --no-https
dotnet new sln
dotnet sln add src/Sasw.Editor.Web
Compile and run it
dotnet build
dotnet run --project src/Sasw.Editor.Web
That's a way to run the Blazor app on the port defined in launchSettings.json:
Building...
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5291
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: C:\src\sasw-editor\src\Sasw.Editor.Web
I stop it. It works fine when served with Kestrel.
Now, the process to publish a distribution folder would be like this
dotnet publish -c Release -o publish
All the artifacts and files are now under the publish folder. So, in theory, I can serve those things with a simple web server. I install a basic web server tool called local-web-server (it requires Node.js/npm, but you can use any other web server):
npm install -g local-web-server
Now I navigate to the publish/wwwroot folder where my index.html is, and I start the web server there:
ws
Listening on http://5CD013DP5L:8000, http://192.168.1.13:8000, http://127.0.0.1:8000, http://172.21.208.1:8000
If I open the browser on http://127.0.0.1:8000, or any other of the above URLs, I can see my Blazor WASM app working perfectly.
I want to host that very same publish folder in GitLab Pages, which, in theory, is capable of serving static files. So I create a .gitlab-ci.yml to compile, publish and copy the contents to the public folder of GitLab Pages:
image: mcr.microsoft.com/dotnet/sdk:6.0
variables:
  GIT_DEPTH: 1000
  PUBLISH_OUTPUT_DIR: publish
stages:
  - build
  - test
  - publish
  - delivery
build:
  stage: build
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release --no-restore
  artifacts:
    paths:
      - test
    expire_in: 8 hour
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - when: always
test:
  stage: test
  script: dotnet test --blame --configuration Release
  allow_failure: false
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - exists:
        - test/**/*Tests.csproj
publish:
  stage: publish
  script:
    - dotnet publish -c Release -o $PUBLISH_OUTPUT_DIR
  artifacts:
    paths:
      - $PUBLISH_OUTPUT_DIR/
    expire_in: 8 hour
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - when: on_success
pages:
  stage: delivery
  script:
    - cp -a $PUBLISH_OUTPUT_DIR/ public
  artifacts:
    paths:
      - public
  only:
    - main
The pipeline completes successfully, and I can see the exact same structure I had locally in the publish folder, this time under the public folder in GitLab. But it fails to render the app:
https://sunnyatticsoftware.gitlab.io/-/sasw-community/sasw-editor/-/jobs/1846501612/artifacts/public/wwwroot/index.html
shows
Loading...
An unhandled error has occurred. Reload 🗙
I can see it's attempting to access https://sunnyatticsoftware.gitlab.io/ or https://sunnyatticsoftware.gitlab.io/favicon.ico and getting 404. The favicon.ico does exist at https://sunnyatticsoftware.gitlab.io/-/sasw-community/sasw-editor/-/jobs/1846501612/artifacts/public/wwwroot/favicon.ico, so it must be some kind of URL rewrite problem, right?
Any help would be much appreciated.
Always keep it at base href="/" and then change it in your CI to whatever you need. E.g. on GitLab you can use the CI_PROJECT_NAME variable:
pages:
  stage: deploy
  variables:
    SED_COMMAND: 's#<base\shref="\/"\s?\/>#<base href="\/$CI_PROJECT_NAME\/" \/>#g'
  script:
    - cp -a $PUBLISH_OUTPUT_DIR/wwwroot public
    - sed -r -i "$SED_COMMAND" public/index.html
  artifacts:
    paths:
      - public
  only:
    - main
The solution is simply to use the following
pages:
  stage: delivery
  script:
    - cp -a $PUBLISH_OUTPUT_DIR/wwwroot public
  artifacts:
    paths:
      - public
  only:
    - main
and to use the <base href="/sasw-community/sasw-editor/" /> in index.html with the relative path.
I've recorded a quick tutorial on my Odysee channel https://odysee.com/#sunnyAtticSoftware:a/blazor-wasm-gitlab-pages:e
See https://gitlab.com/sunnyatticsoftware/training/blazorwasm-pages with a full sample
I still don't know of a good way to mix local development's base / relative path with prod base /sasw-community/sasw-editor/ and change it dynamically (is it even possible?)
But the problem is solved.

Bitbucket pipelines - SCP Deployment - Failing "Identity file /opt/atlassian/pipelines/agent/ssh/id_rsa_tmp not accessible: No such file or directory."

I am trying to add Bitbucket Pipelines to deploy my Angular application to one of our servers.
I configured SSH on my server, and I could fetch the host's fingerprint under Known hosts in Bitbucket.
Below is my YAML file.
image: node:14
pipelines:
  branches:
    master:
      - step:
          name: Building Test angular application
          caches:
            - node
          script:
            - echo "npm install in progress.."
            - npm install
            - echo "Installing angular/cli..."
            - npm install -g @angular/cli
            - echo "Starting the Build process.."
            - ng build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: "Deployment"
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: $USER
                SERVER: $SERVER
                REMOTE_PATH: '/c/testscp/'
                LOCAL_PATH: 'dist/*'
                SSH_KEY: $MY_SSH_KEY
The first step runs fine without any issues and I can see the dist folder being added to the artifact; however, the second step is failing with the error below.
scp -rp -i /opt/atlassian/pipelines/agent/ssh/id_rsa_tmp dist/TestPipelineApplication <<USER>>@<<SERVERIP>>:/c/testscp/
Warning: Identity file /opt/atlassian/pipelines/agent/ssh/id_rsa_tmp not accessible: No such file or directory.
Load key "/root/.ssh/pipelines_id": invalid format
Permission denied, please try again.
Permission denied, please try again.
<<USER>>@<<SERVERIP>>: Permission denied (publickey,password,keyboard-interactive).
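For context, that warning typically means scp was pointed at a key file that was never created from the SSH_KEY variable. Below is a sketch of the deployment step with the usual assumptions spelled out in comments; the base64 encoding and the authorized_keys setup are assumptions, not details taken from this question.

# sketch: assumes MY_SSH_KEY is a secured repository variable holding the
# base64-encoded private key (for example the output of "base64 -w 0 ~/.ssh/id_rsa"),
# and that the matching public key is listed in ~/.ssh/authorized_keys for $USER on $SERVER;
# if the variable is empty or not encoded, the pipe may fail to write
# /opt/atlassian/pipelines/agent/ssh/id_rsa_tmp, giving the warning above
- step:
    name: "Deployment"
    script:
      - pipe: atlassian/scp-deploy:0.3.3
        variables:
          USER: $USER
          SERVER: $SERVER
          REMOTE_PATH: '/c/testscp/'
          LOCAL_PATH: 'dist/*'
          SSH_KEY: $MY_SSH_KEY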
I never configured pipelines before, so I am not completely sure what I am missing here.
Also, I looked into the documentation below, but with no luck:
https://bitbucket.org/atlassian/scp-deploy/src/1.0.1/README.md
Any help or suggestions are greatly appreciated.

Blank page when deploying app with Cloud Storage and custom domain

I am running into an issue when trying to deploy a React website using Google Cloud Storage and a custom domain name.
I build my react app locally using npm run build.
Upload the build folder to a bucket with the name www.example.com. Upload a yaml that contains:
runtime: nodejs10
handlers:
- url: /
  static_files: build/index.html
  upload: build/index.html
- url: /
  static_dir: build
Open cloud shell
run gsutil rsync -r gs://www.example.com ./example
run cd ./example
run gcloud app deploy
The app works great at the https://example-dot-projectname.uc.r.appspot.com URL.
The app is a blank page at www.mycustomdomain.com.
The console reads Loading failed for the <script> with source "http://www.mycustomdomain.com/static.js/###.chunk.js" twice.
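For reference, a more common handlers layout for a Create React App build on App Engine looks like the sketch below; it assumes the hashed bundles live under build/static, and it is not a confirmed fix for the custom-domain issue.

runtime: nodejs10
handlers:
# serve the hashed CRA bundles directly (assumes a build/static folder)
- url: /static
  static_dir: build/static
# everything else falls back to the app shell
- url: /.*
  static_files: build/index.html
  upload: build/index.html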
I also ran gsutil web set -m build/index.html gs://www.example.com and it did not work.
Any ideas?

Why is my static site broken using github action and azure cli to deploy?

I'm trying to deploy my static site to Azure Storage but have been having issues getting the site to open correctly, even though the GitHub Action executes without errors and the files seem to be in place. In the browser, index.html seems to load along with the CSS and JS... but the site does not run properly. The console shows a failure in the JS:
The odd thing is that I don't have any issues using the azure storage extension in vscode or using the azure cli:
az storage blob upload-batch --account-name <ACCOUNT_NAME> -d '$web' -s ./dist --connection-string '<CONNECTION_STRING>'
when I deploy from my laptop.
My github action looks like this:
name: Blob storage website CI
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: npm install
        run: |
          npm install
      - name: npm build
        run: |
          npm run build
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Azure CLI script
        uses: azure/CLI@v1
        with:
          azcliversion: latest
          inlineScript: |
            az storage blob upload-batch --account-name <ACCOUNT_NAME> -d '$web' -s ./dist --connection-string '${{ secrets.BLOB_STORAGE_CONNECTION_STRING }}'
      # Azure logout
      - name: logout
        run: |
          az logout
based on this article here.
I thought that it might be due to the Azure CLI version, but none of the versions I've tried have made a difference.
Any ideas why my site is broken when deploying with a GitHub Action and the Azure CLI?
For anyone interested: I was missing environment variables during the build process in the GitHub Action. I was able to pass these without checking in the .env files by using GitHub secrets.
There's now a step in the action to create a .env:
- name: Set Environment Variables
  run: |
    touch .env
    echo ENVIRONMENT_VARIABLE=${{secrets.ENVIRONMENT_VARIABLE}} >> .env
and another to remove it:
- name: Remove Environment Variables
  run: |
    rm .env
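The ordering matters here: the .env has to exist before the build reads it. A sketch of where the new steps could sit in the job above (the exact positions are an assumption, and the variable name is the same hypothetical one as in the snippet):

steps:
  - uses: actions/checkout@v2
  # create the .env before anything reads it
  - name: Set Environment Variables
    run: |
      touch .env
      echo ENVIRONMENT_VARIABLE=${{secrets.ENVIRONMENT_VARIABLE}} >> .env
  - name: npm install
    run: |
      npm install
  - name: npm build
    run: |
      npm run build
  # ... upload to blob storage as above ...
  - name: Remove Environment Variables
    run: |
      rm .env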

How to publish docker images to docker hub from gitlab-ci

GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, select .gitlab-ci.yml and Docker). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest
services:
  - docker:dind
before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to gitlab's registry. How can we publish to docker hub instead?
There is no need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in GitLab settings
I wanted to publish gableroux/unity3d images using GitLab CI; here's what I used in GitLab's project > Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set. It defaults to registry.gitlab.com/<username>/<project>, so the registry URL needs to be updated: use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using Docker, you can also use <username>/<project> instead. I personally prefer when it's verbose, so I kept the full registry URL.
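As a variant of the same idea, the overrides can also live in the .gitlab-ci.yml itself rather than in the project settings; a sketch, assuming the password is kept in a masked CI/CD variable (DOCKER_HUB_PASSWORD here is a hypothetical name):

# sketch: overriding the registry-related variables in the file instead of
# the project settings; DOCKER_HUB_PASSWORD is a hypothetical masked variable
variables:
  CI_REGISTRY: docker.io
  CI_REGISTRY_IMAGE: index.docker.io/gableroux/unity3d
  CI_REGISTRY_USER: gableroux
  CI_REGISTRY_PASSWORD: $DOCKER_HUB_PASSWORD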
This answer should also cover other registries, just update environment variables accordingly. 🙌
To expand on GabLeRoux's answer: I had issues on the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index.), I was able to push successfully.