Deployment takes too long to reflect changes - amazon-s3

I have a project built in ReactJS, and I am using S3 and CloudFront for deployment.
I am facing an issue where, whenever I deploy code, it takes too much time for the changes to appear. Sometimes I have to manually clear the browser cache to see the latest changes. Do I need to configure S3 or CloudFront settings?

Follow these steps:
Go to CloudFront.
Create an invalidation of the objects.
Create an entry for /*.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serving-outdated-content-s3/

Read bucket object in CDK

In Terraform, to read an object from an S3 bucket at deployment time, I can use a data source:
data "aws_s3_bucket_object" "example" { }
Is there a similar concept in CDK? I've seen various methods for uploading assets to S3, as well as for importing an existing bucket, but nothing for getting an object from the bucket. I need to read a configuration file from the bucket that will affect further deployment.
It's important to remember that CDK itself is not a deployment option. It can deploy, but the code you are writing in a CDK stack is the definition of your resources, not a method for deployment.
So, you can do one of a few things.
Use the SDK for your language to make a call to the S3 bucket and load the data directly. This is perfectly acceptable and a well-understood way to gather information you need before deployment: each time the stack synths (which it does before every cdk deploy), that code will run and pull your data. (See the sketch at the end of this answer.)
Use CodePipeline to set up a proper pipeline, and give it two sources: one for your version control repo and a second for your S3 bucket:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-multi-in-out.html
The preferred way: drop the JSON file and use Parameter Store instead. CDK contains modules that will create a token version of the parameter at synth time, and when it deploys it will resolve that reference properly against Systems Manager Parameter Store:
https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
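As a minimal sketch (CDK v2, TypeScript; the parameter name /my-app/config and the stack name are placeholders):

    import * as cdk from 'aws-cdk-lib';
    import * as ssm from 'aws-cdk-lib/aws-ssm';

    class MyStack extends cdk.Stack {
      constructor(scope: cdk.App, id: string) {
        super(scope, id);

        // Synth produces a token here; CloudFormation resolves it against
        // Parameter Store at deploy time.
        const configValue = ssm.StringParameter.valueForStringParameter(
          this, '/my-app/config');

        // If the value must be known during synth (e.g. it decides which
        // resources get created), use a context lookup instead:
        const synthTimeValue = ssm.StringParameter.valueFromLookup(
          this, '/my-app/config');
      }
    }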
If your parameters change after deployment, you can handle that as part of your CDK stack pretty easily (using CfnOutputs). If they change in the middle of a deployment, you really need to be using CodePipeline to manage these steps instead of just CDK.
Because remember: cdk deploy is just a convenience. It executes everything and has no way to pause in the middle and execute specific steps (other than very basic this-depends-on-that resource ordering).
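And here is a sketch of the first option above, reading the object with the AWS SDK for JavaScript v3 before the app is built, so the value is in hand during synth (the bucket, key, and ESM entrypoint are assumptions):

    import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
    import * as cdk from 'aws-cdk-lib';

    // cdk synth executes this file top to bottom, so the object is
    // fetched before any resources are defined. (Top-level await
    // assumes an ESM entrypoint.)
    const s3 = new S3Client({});
    const response = await s3.send(new GetObjectCommand({
      Bucket: 'my-config-bucket',  // placeholder
      Key: 'deploy-config.json',   // placeholder
    }));
    const config = JSON.parse(await response.Body!.transformToString());

    const app = new cdk.App();
    // ...pass values from `config` into your stack props here.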

CloudFront caching React website pages despite using file versioning

So, to explain the problem: I have a static website hosted on S3 with CloudFront as the CDN. I used create-react-app (CRA) to create the React package for my website. CRA by default versions the webpack build files, and the versions are visible in the S3 bucket as well.
Still, when I do a deployment, the latest changes don't come up (I have even waited a day hoping they would). I am not sure what is causing this issue. Can anyone please help?
I have added screenshots of my CloudFront Behaviors tab and the S3 bucket files with the build versions.
PS: if it is a case of browser cache, how can I disable it so that my clients always see the latest version of my website?
Hi, you have to invalidate the cache from the distribution's settings. Ideally I invalidate the whole cache by passing /*, but you can also specify a folder or file to clear the caching for.
Example: /index.html
Even in your CI/CD pipeline, you can have the deploy agent invalidate the cache by passing the distribution ID and path.
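For example, a minimal sketch of that call with the AWS SDK for JavaScript v3, which a deploy step could run (the distribution ID is a placeholder, and top-level await assumes an ESM script):

    import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

    const cloudfront = new CloudFrontClient({});
    await cloudfront.send(new CreateInvalidationCommand({
      DistributionId: 'EDXXXXXXXXXXXX',          // placeholder distribution ID
      InvalidationBatch: {
        CallerReference: Date.now().toString(),  // must be unique per request
        Paths: { Quantity: 1, Items: ['/*'] },   // or a specific path like '/index.html'
      },
    }));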

How does versioning work on Amazon CloudFront?

I've just set up a static website on Amazon S3. I'm also using the CloudFront CDN service.
According to Amazon, there are 2 methods available for clearing the CloudFront cache: invalidation and versioning. My question is regarding the latter.
Consider the following example:
I link to an image file (image.jpg) from my index.html file. I then decide to replace the image. I upload a second image with the filename: image_2.jpg and change the link in my index.html file.
Will the changes automatically take effect or is some further action required?
What triggers the necessary changes if the edited and newly uploaded files are located in the bucket and not the cache?
Versioning in CloudFront is nothing more than adding (or prefixing) a version to the name of the object or the 'folder' the objects are stored in. You can either:
put all objects in a folder v1 and use a URL like
https://xxx.cloudfront.net/v1/image.png
or give all objects a version in their name, like image_v1.png, and use a URL like https://xxx.cloudfront.net/image_v1.png
The second option is often a bit more work, but you don't have to re-upload files that don't need updating (= cheaper in terms of storage). The first solution is often clearer and requires less work.
Using CloudFront versioning requires more S3 storage but is often cheaper than creating many invalidations.
The other way to invalidate the cache is to create invalidations (which can get expensive). If you don't really need invalidations but just want quicker cache refreshes (the default TTL is 24h), you can update the TTL settings on the distribution's cache behavior, or set the cache duration for an individual object (object level).
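The object-level cache duration comes from the Cache-Control header stored on the S3 object; a minimal sketch of setting it at upload time with the AWS SDK for JavaScript v3 (bucket, key, and file path are placeholders):

    import { readFileSync } from 'node:fs';
    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({});
    const body = readFileSync('build/static/js/main.abc123.js'); // placeholder path
    await s3.send(new PutObjectCommand({
      Bucket: 'my-site-bucket',         // placeholder
      Key: 'static/js/main.abc123.js',  // e.g. a hashed CRA build file
      Body: body,
      ContentType: 'application/javascript',
      // CloudFront uses this header (within the behavior's min/max TTL)
      // when deciding how long to keep the object cached:
      CacheControl: 'public, max-age=31536000, immutable',
    }));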
Your CloudFront configuration has a cache TTL, which determines when the file will be updated, regardless of when the source changes.
If you need it updated right away, use the invalidation function on your index.html file.
I'll chime in on this in case anyone else comes here looking for what I did. You can set up CloudFront with S3 versioning enabled and reference specific S3 versions if you know which version you need. I put it behind a presigned CloudFront URL and ended up with this in the Java SDK:
    // AWS SDK for Java v1: CloudFrontUrlSigner and SignerUtils
    import com.amazonaws.services.cloudfront.CloudFrontUrlSigner;
    import com.amazonaws.services.cloudfront.util.SignerUtils;
    import java.net.URL;

    S3Properties s3Properties... // Custom properties pulled from a config file

    // Pin the URL to a specific S3 object version via ?versionId=
    String cloudfrontUrl = "https://" + s3Properties.getCloudfrontDomain() + "/" +
            documentS3Key + "?versionId=" + documentS3VersionId;
    // Sign it with a canned policy so the link expires
    URL cloudfrontSignedUrl = new URL(CloudFrontUrlSigner.getSignedURLWithCannedPolicy(
            cloudfrontUrl,
            s3Properties.getCloudfrontKeypairId(),
            SignerUtils.loadPrivateKey(s3Properties.getCloudfrontKeyfilePath()),
            getPresignedUrlExpiration()));

How to add rollback functionality to a basic S3 CodeBuild deploy

I have followed this instruction to get a very basic CI workflow in AWS. It works flawlessly, but I want one extra piece of functionality: rollback. First I thought it would work out of the box, but not in my case. If I select the previous job in CodeBuild that I want to roll back to and hit "Retry", I get this error message: "Error: ArtifactsOverride must be set when using artifacts type CodePipelines". I have also tried to rerun the whole pipeline again from the pipeline history page, but it's just a list of builds without any functionality.
My question is: how do I add a rollback function to my workflow? It doesn't have to be in the same pipeline, etc., but it should not touch Git.
AWS CloudFormation now supports rolling back based on a CloudWatch alarm.
I'd put a CloudFront distribution in front of your S3 bucket with the origin path set to a folder within that bucket. Every time you deploy to S3 from CodeBuild you deploy to a random new S3 folder.
You then pass the folder name in a JSON file as an output artifact from your CodeBuild step. You can use this artifact as a parameter to a CloudFormation template updated by a CloudFormation action in your pipeline.
The CloudFormation template would update the OriginPath field of your CloudFront distribution to the folder containing your new deployment.
If the alarm fires then the CloudFormation template would roll back and flip back to the old folder.
There are several advantages to this approach:
Customers should only see either the new or old version while the deployment is happening rather than seeing potentially mixed files while the deployment is running.
The deployment logic is simpler because you're uploading a fresh set of files every time, rather than figuring out which files are new and which need to be deleted.
The rollback is pretty simple because you're flipping back to files which are still there rather than re-deploying the old files.
Your pipeline would need to contain both the CodeBuild action and a subsequent CloudFormation action.
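To show the shape of that template, here is a minimal sketch written as CDK (TypeScript), which synthesizes to the kind of CloudFormation template the answer describes; the bucket name and parameter are placeholders, not the answerer's actual setup:

    import { App, Stack, CfnParameter } from 'aws-cdk-lib';
    import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
    import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    class SiteStack extends Stack {
      constructor(scope: App, id: string) {
        super(scope, id);

        // The pipeline's CloudFormation action overrides this parameter
        // with the folder name from the CodeBuild output artifact.
        const deployFolder = new CfnParameter(this, 'DeployFolder', {
          type: 'String',
          description: 'S3 key prefix of the current deployment, e.g. /deploy-1a2b3c',
        });

        const bucket = s3.Bucket.fromBucketName(this, 'SiteBucket', 'my-site-bucket');

        new cloudfront.Distribution(this, 'SiteDistribution', {
          defaultBehavior: {
            // Rolling back means re-deploying the stack with the previous
            // folder name, which flips OriginPath back to the old files.
            origin: new origins.S3Origin(bucket, { originPath: deployFolder.valueAsString }),
          },
        });
      }
    }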

Backing up a Serverless Framework deployment

I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: if I or a colleague checks out the serverless code from Git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?
And the answer is yes! Serverless keeps track of all of that for you within its files. Unless you run serverless remove, no operation will create a new Lambda or API endpoint.
My team and I are using this method: we commit all code to a Git repo, and any one of us can check it out and deploy a single function or the entire thing, and it updates the existing set of functions properly. If you set up an environment file, that's all you really need to worry about, and I recommend leaving it out of Git entirely.
For AWS, the Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers that are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they are already backed up for you.
So all you really need is the original deployment code in a Git repo and access to your keys. Everything else is already backed up for you.