What's the best way to achieve the following using CodePipeline/CodeBuild/S3?

We have a localization microservice and a CI/CD pipeline per branch.
We also have develop, staging, and master branches that get deployed to dev, staging, and prod accounts through the respective pipelines.
The dev CI/CD pipeline submits translation jobs to the localization microservice based on the en.json file in the source code (en.json also gets synced to S3, alongside the translated files such as fi.json and fr.json that are created from the microservice's results). The microservice may take days to come back with results, so the CI/CD pipeline just submits the job and doesn't wait for them.
We will be pushing to develop branch a lot more frequently than staging and prod.
When the translations do come back from the localization microservice and get stored in S3 in the dev account, we want to make sure that only the specific version of the files whose corresponding commit/source code was approved for release gets synced to the S3 buckets in staging and production. Any changes made to en.json (and thus to fr.json and fi.json) in the dev account since that release should not be pushed. How can this be controlled?
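One way to control this (a minimal sketch, assuming S3 versioning is enabled on the dev bucket and the release-approval step records the version IDs of en.json, fi.json, and fr.json at that point) is to have the staging/prod pipelines copy only those pinned object versions instead of running a plain sync. Bucket names and version IDs below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket names and a manifest of version IDs captured at
# release-approval time (e.g. stored alongside the approved commit).
DEV_BUCKET = "localization-dev"
PROD_BUCKET = "localization-prod"
pinned_versions = {
    "en.json": "<version id recorded at approval>",
    "fi.json": "<version id recorded at approval>",
    "fr.json": "<version id recorded at approval>",
}

for key, version_id in pinned_versions.items():
    # Copy the exact object version that was approved, not whatever is latest.
    s3.copy_object(
        Bucket=PROD_BUCKET,
        Key=key,
        CopySource={"Bucket": DEV_BUCKET, "Key": key, "VersionId": version_id},
    )
```

A manifest like pinned_versions could be committed with the approved release or passed along as a pipeline artifact, so later changes to en.json in dev never leak into staging or prod.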


How do I manage multiple static files between environments (admin uploaded)?

I'm building a new course-like web application. There will be plenty of images, video, and sound files.
I am wondering about possible strategies for static file management between app environments.
My current approach is to use an SQL database to store image URLs, which are uploaded via an admin panel on the website. The images are stored in blob-like storage (an AWS S3 bucket).
However, when making changes, this requires uploading the image to each environment or creating a data migration (dev -> staging -> prod) in the deployment pipeline.
Am I missing something here? Even if I store files in a single place (single storage account) for all environments, I still need to migrate the database records when making changes to the course.
Should I just apply the changes in prod and create some basic migration data for dev/UAT course testing?
To emphasize, files will only be uploaded by an admin, not by a user. For example, the admin uploads an image via the admin panel and the image is automatically included in the course.
I am not sure what the appropriate way is to manage and test changes properly. If I allow this to be done directly on prod without a migration, I'm running the risk of putting something invalid into the course with untested changes. On the other hand, I am not sure whether it's common to migrate SQL data between databases, and that approach will have its own pitfalls.
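If you do go the migration route, the promotion step can stay quite small. A minimal sketch, assuming one bucket per environment and hypothetical names (course-assets-dev, course-assets-prod, promote_asset):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical per-environment buckets.
SOURCE_BUCKET = "course-assets-dev"
TARGET_BUCKET = "course-assets-prod"

def promote_asset(key: str) -> str:
    """Copy an admin-uploaded asset from dev to prod and return its new URL."""
    s3.copy_object(
        Bucket=TARGET_BUCKET,
        Key=key,
        CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
    )
    # The matching SQL row (course id, asset URL, ...) would be inserted by the
    # same migration step, so the database and the bucket stay in sync.
    return f"https://{TARGET_BUCKET}.s3.amazonaws.com/{key}"
```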

What are the best practices for a Tekton implementation with multiple repositories and multiple deployments?

We have multiple repositories that have multiple deployments in K8S.
Today, we have Tekton with the following setup:
We have 3 different projects that should be built and deployed the same way (they just have different repos and different names).
We defined 3 Tasks: Build Image, Deploy to S3, and Deploy to K8S cluster.
We defined 1 Pipeline that accepts parameters from the PipelineRun.
Our problem is that we want to receive webhooks from GitHub and run the appropriate Pipeline automatically, without having to start it manually with params.
In addition, we want the PipelineRun to have default parameters, so users can invoke deployments automatically.
So, does our configuration and setup seem OK? Should we do something differently?
Our problem is that we want to receive webhooks from GitHub and run the appropriate Pipeline automatically, without having to start it manually with params. In addition, we want the PipelineRun to have default parameters, so users can invoke deployments automatically.
This sounds ok. The GitHub webhook initiates PipelineRuns of your Pipeline through a Trigger. But your Pipeline can also be initiated by the users directly in the cluster, or by using the Tekton Dashboard.
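For the "users can invoke deployments" part, a PipelineRun can also be created programmatically, not just via tkn or the Dashboard. A minimal sketch using the Kubernetes Python client, where the Pipeline name build-and-deploy and the params repo-url and image-name are hypothetical (the GitHub webhook path would instead go through an EventListener/TriggerTemplate, not shown here):

```python
from kubernetes import client, config

# Load kubeconfig and talk to Tekton's CRDs through the custom objects API.
config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical Pipeline name and params; equivalent to starting a run
# from the Tekton Dashboard or with `tkn pipeline start`.
pipeline_run = {
    "apiVersion": "tekton.dev/v1beta1",
    "kind": "PipelineRun",
    "metadata": {"generateName": "build-and-deploy-run-"},
    "spec": {
        "pipelineRef": {"name": "build-and-deploy"},
        "params": [
            {"name": "repo-url", "value": "https://github.com/example/project-a"},
            {"name": "image-name", "value": "registry.example.com/project-a"},
        ],
    },
}

api.create_namespaced_custom_object(
    group="tekton.dev",
    version="v1beta1",
    namespace="default",
    plural="pipelineruns",
    body=pipeline_run,
)
```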

Sharing a baked AMI in Spinnaker with the Prod account after Staging deployment

I am trying to evaluate Jenkins + Spinnaker as our CI/CD platform, and I would like to say it has worked perfectly for us up to the Staging environment. We are using AWS and AMIs for our flow.
Now, our requirement is that we want to share the AMI with the Prod account, which is a completely different account from UAT and Staging, only when it passes QA in the Staging env, as we want to keep only those images in our prod account that have passed the quality gates. I tried searching for suggestions on this but didn't find any. There were some blogs on sharing AMIs across regions during the baking step using aws-multi-ebs.json, but that is not our requirement.
Is there any built-in process in Spinnaker itself for this, or do I need to use an outside job, like integrating with Jenkins, to copy the AMI to the Prod env?
Spinnaker will do this by default via allow-launch (the AMI remains owned by the baking account, but when you deploy to the prod account, launch permission is granted to that account).
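If you ever need to reproduce what allow-launch does outside Spinnaker, it boils down to granting a launch permission on the AMI. A minimal boto3 sketch with placeholder AMI and account IDs (encrypted AMIs would additionally need their snapshots and KMS key shared):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder AMI ID and prod account ID.
AMI_ID = "ami-0123456789abcdef0"
PROD_ACCOUNT_ID = "111122223333"

# Grant the prod account permission to launch instances from this AMI.
# The AMI stays owned by the baking account, just as allow-launch works.
ec2.modify_image_attribute(
    ImageId=AMI_ID,
    LaunchPermission={"Add": [{"UserId": PROD_ACCOUNT_ID}]},
)
```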

Sonar Analysis of website before deployment

I have a website (consisting only of static files) and have automated its deployment when changes are pushed to the master branch by using a Jenkins multibranch pipeline.
I'm planning to add an extra set of validations before deployment, and I've come across Sonar. Sonar can't be run on static files on its own; it requires these files to be served by a web server such as Apache2, because it also verifies HTTP headers.
Consequently, as long as my changes are not deployed in production, I will not be able to run Sonar on a particular development branch, and would have to wait until the branch is merged into master to obtain the results.
In this case, can you please give hints on how I can get validations results before deployment?
I would set up a Test environment on another machine. It should mirror your production environment as closely as possible. Publish there first. Run Sonar. If everything checks out, then deploy to prod. This is a basic Continuous Deployment scenario.

How to add rollback functionality to a basic S3 CodeBuild deploy

I have followed these instructions to get a very basic CI workflow in AWS. It works flawlessly, but I want one extra piece of functionality: rollback. At first I thought it would work out of the box, but not in my case: if I select the previous job in CodeBuild that I want to roll back to and hit "Retry", I get this error message: "Error: ArtifactsOverride must be set when using artifacts type CodePipelines". I have also tried to rerun the whole pipeline from the pipeline history page, but it's just a list of builds without any functionality.
My question is: how do I add a rollback function to my workflow? It doesn't have to be in the same pipeline, etc., but it should not touch git.
AWS CloudFormation now supports rolling back based on a CloudWatch alarm.
I'd put a CloudFront distribution in front of your S3 bucket with the origin path set to a folder within that bucket. Every time you deploy to S3 from CodeBuild, you deploy to a new, randomly named S3 folder.
You then pass the folder name in a JSON file as an output artifact from your CodeBuild step. You can use this artifact as a parameter to a CloudFormation template updated by a CloudFormation action in your pipeline.
The CloudFormation template would update the OriginPath field of your CloudFront distribution to the folder containing your new deployment.
If the alarm fires then the CloudFormation template would roll back and flip back to the old folder.
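For illustration, the OriginPath flip that the CloudFormation action performs amounts to a read-modify-write of the distribution config. A boto3 sketch of the same operation, with a placeholder distribution ID and folder name (in the pipeline itself this would be driven by the template parameter rather than a script):

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder distribution ID and the new deployment folder from CodeBuild.
DISTRIBUTION_ID = "E1A2B3C4D5E6F7"
NEW_ORIGIN_PATH = "/deploy-abc123"

# CloudFront updates are read-modify-write: fetch the current config and its
# ETag, change only OriginPath, and send the whole config back.
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
dist_config = resp["DistributionConfig"]
etag = resp["ETag"]

dist_config["Origins"]["Items"][0]["OriginPath"] = NEW_ORIGIN_PATH

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID,
    DistributionConfig=dist_config,
    IfMatch=etag,
)
```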
There are several advantages to this approach:
Customers only ever see either the old or the new version, rather than potentially mixed files while the deployment is running.
The deployment logic is simpler because you're uploading a fresh set of files every time, rather than figuring out which files are new and which need to be deleted.
The rollback is pretty simple because you're flipping back to files which are still there rather than re-deploying the old files.
Your pipeline would need to contain both the CodeBuild action and a subsequent CloudFormation action.