I have successfully configured an S3 bucket with BunnyCDN and I am able to access files through it. Now I am facing an issue when I try to stream HLS encrypted video through BunnyCDN that is stored in the S3 bucket.
In the browser console I am getting an error like this:
Access to XMLHttpRequest at 'https://ovb-video.b-cdn.net/bcdn_token=hT1XzEdqq1xj5TGhEgM8JP1WsTeHzvfxmqfL3g3-_RE&expires=1632877673&token_path=%2F/books/11/2/video.m3u8' from origin 'https://my-domain.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
As we have to specify allowed origins when the request's credentials mode is true, I have specified the CORS policy on S3 like this:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "GET",
            "PUT",
            "HEAD"
        ],
        "AllowedOrigins": [
            "https://my-domain.com",
            "https://ovb-video.b-cdn.net"
        ],
        "ExposeHeaders": []
    }
]
In the BunnyCDN panel I have also specified m3u8, ts, and key in the headers settings, but still no luck.
Can anybody please let me know what I am doing wrong?
You have to add 3 edge rules for that particular pull zone so that it can serve data from S3. Please find screenshots of the 3 edge rules.
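The original screenshots are not reproduced here, so purely as an assumption about what such rules usually do: edge rules of this kind typically set CORS response headers on the streaming files, roughly along these lines (hypothetical, not the exact rules from the screenshots):

Edge Rule 1: Set Response Header "Access-Control-Allow-Origin" = the request's Origin (e.g. https://my-domain.com)
Edge Rule 2: Set Response Header "Access-Control-Allow-Credentials" = "true"
Edge Rule 3: Set Response Header "Access-Control-Allow-Headers" = "*"
(conditioned on the m3u8, ts and key file extensions)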
CodeBuild project fails at the Provisioning phase due to the following error
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image. CannotPullContainerError: Error response from daemon: pull access denied for <image-name>, repository does not exist or may require 'docker login': denied: User: arn:aws:sts::<id>
The issue was with the Image Pull credentials.
CodeBuild was using the default AWS CodeBuild credentials for pulling the image, while the ECRAccessPolicy was attached to the project service role.
I fixed it by updating the image pull credentials to use the project service role.
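If it helps, the same change can be made from the CLI by updating the project's environment (a sketch only; the project name and image are placeholders, and update-project replaces the whole environment block, so carry over your existing settings):

aws codebuild update-project \
    --name my-project \
    --environment '{
        "type": "LINUX_CONTAINER",
        "computeType": "BUILD_GENERAL1_SMALL",
        "image": "ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:latest",
        "imagePullCredentialsType": "SERVICE_ROLE"
    }'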
To add additional clarity (not enough reputation yet to comment on an existing answer), the CodeBuild project service role needs to have the following permissions if trying to pull from a private repository:
{
    "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
    ],
    "Effect": "Allow",
    "Resource": [
        "arn:aws:ecr:us-east-1:ACCOUNT_ID:repository/REPOSITORY_NAME*"
    ]
}
Also, the ECR repository policy should look something like this (scope down from root if desired):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_ID:root"
            },
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer"
            ]
        }
    ]
}
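For reference, one way to attach such a repository policy from the CLI (the repository name and policy file name are placeholders):

aws ecr set-repository-policy \
    --repository-name REPOSITORY_NAME \
    --policy-text file://ecr-repository-policy.json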
FWIW, I stumbled across this issue when using Terraform to create my CodeBuild pipeline.
The setting to change for this was image_pull_credentials_type, which should be set to SERVICE_ROLE rather than CODEBUILD in the environment block of the "aws_codebuild_project" resource.
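A minimal sketch of the relevant Terraform resource (all values are placeholders, including the IAM role reference, and the rest of the project configuration is reduced to a bare minimum):

resource "aws_codebuild_project" "example" {
  name         = "my-project"
  service_role = aws_iam_role.codebuild.arn  # placeholder role reference

  artifacts {
    type = "NO_ARTIFACTS"
  }

  source {
    type     = "GITHUB"
    location = "https://github.com/OWNER/REPO.git"
  }

  environment {
    compute_type                = "BUILD_GENERAL1_SMALL"
    image                       = "ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:latest"
    type                        = "LINUX_CONTAINER"
    image_pull_credentials_type = "SERVICE_ROLE" # instead of the default CODEBUILD
  }
}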
Thank you to Chaitanya for the response which pointed me in this direction with the accepted answer.
Trying to send a cookie back after a login request on my hobby project website. For some reason it works when running locally, i.e. on http://localhost:3000, but as soon as I push my API online and try to access it through my live website, I see no cookie under Application -> Cookies -> website (using Chrome). I have googled a lot and I believe I have checked off every CORS setting.
The Node.js code runs in AWS Lambda and is invoked through API Gateway. API Gateway is reached through a CloudFront distribution (if it matters).
In my Express backend I have logged my headers like this:
res.cookie('jwt', token, cookieOptions);
console.log('Checking cookie', res);
console.log('Checking cookie', res.cookies);
res.status(statusCode).json({
  status: 'success',
  data: {
    user
  }
});
The output of this is, in part:
'access-control-allow-origin': [ 'Access-Control-Allow-Origin', 'https://example.com' ],
vary: [ 'Vary', 'Origin' ],
'access-control-allow-credentials': [ 'Access-Control-Allow-Credentials', 'true' ],
'access-control-allow-methods':
[ 'Access-Control-Allow-Methods',
'GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS' ],
'access-control-allow-headers':
[ 'Access-Control-Allow-Headers',
'Origin, X-Requested-With, Content-Type, Accept, X-PINGOTHER' ],
'x-ratelimit-limit': [ 'X-RateLimit-Limit', 100 ],
'x-ratelimit-remaining': [ 'X-RateLimit-Remaining', 97 ],
date: [ 'Date', 'Fri, 11 Dec 2020 23:20:28 GMT' ],
'x-ratelimit-reset': [ 'X-RateLimit-Reset', 1607732145 ],
quizappuserloggedin: [ 'QuizAppUserLoggedIn', 'false' ],
'set-cookie':
[ 'Set-Cookie', 'my-cookie; Path=/; Expires=Sat, 12 Dec 2020 23:20:34 GMT; HttpOnly; Secure' ]
From what I can tell I have set my CORS settings correctly. From my frontend I have set:
axios.defaults.withCredentials = true;
From what I can tell I have done everything I can find in Set cookies for cross origin requests
Meaning I have double-checked my CORS settings, and from the print statement it looks like the cookie is being sent. But why is the browser not picking it up?
I could post the actual site and GitHub repo if it helps; I have been stuck here for a while now.
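(Side note: the cookieOptions object isn't shown above; for a cross-site cookie it generally needs to look something like the following, given here only as a hypothetical sketch rather than the exact object used:)

// Hypothetical cookieOptions for a cross-site cookie (illustrative values only)
const cookieOptions = {
  httpOnly: true,
  secure: true,      // required when SameSite=None
  sameSite: 'none',  // allow the cookie on cross-site requests
  expires: new Date(Date.now() + 24 * 60 * 60 * 1000)
};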
UPDATE
I looked at the response headers in my browser and compared them against the headers in the backend API. From that comparison I can see that my "set-cookie" header isn't included in the response, even though I can clearly see that it is included in the response from the backend.
UPDATE 2
I believe, after further investigation, that I have narrowed it down to being a CORS issue with AWS API Gateway. I looked into these, but still no luck:
How to add CORS header to AWS API Gateway response with lambda proxy integration activate
Amazon API gateway ignores set-cookie
Logs from Lambda CloudWatch right before the response is sent by the Express framework, as well as CloudWatch logs from API Gateway (response headers).
API GW cloudwatch logs of the response headers:
Lambda cloudwatch logs of the response object sent by express framework:
Turns out it wasn’t a CORS issue. I had simply forgotten to forward cookies from my cloudfront distribution.
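In case it helps anyone else, the relevant piece is the cache behavior's cookie forwarding. A minimal sketch of that part of a CloudFront DistributionConfig, assuming the legacy ForwardedValues settings rather than cache/origin request policies (only this fragment is shown):

"DefaultCacheBehavior": {
    "ForwardedValues": {
        "QueryString": true,
        "Cookies": { "Forward": "all" },
        "Headers": { "Quantity": 1, "Items": ["Origin"] }
    }
}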
Good day,
I am using the following tutorial to create an S3 bucket to store a .csv file that is updated hourly from Google Drive via a Lambda routine:
https://labs.mapbox.com/education/impact-tools/sheetmapper-advanced/#cors-configuration
When I try to access the .csv from its S3 object URL by pasting it into the browser
https://mapbox-sheet-mapper-advanced-bucket.s3.amazonaws.com/SF+Food+Banks.csv
I get the following error
error image
The CORS permission given in the tutorial is in XML format:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
I have tried converting it into JSON format, as it seems the S3 console no longer supports CORS configuration in XML format:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
Any advice/help would be greatly appreciated!
Please make sure that your account permissions allow public access to S3. There are four things that I ran into today while trying to make an S3 resource public (a CLI sketch follows the list):
1. Account-level Block Public Access settings have to be disabled. (MAKE SURE TO ENABLE THEM AGAIN FOR ANY PRIVATE BUCKETS OR OBJECTS)
2. The bucket's individual Block Public Access settings have to be disabled. (As shown in your tutorial)
3. The ACL must allow read access. You can find this under S3 - Buckets - your_bucket - Permissions - Access control list. Edit this to grant read access.
4. Go to the individual object and ensure that it also has permission to be read by the public.
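If you prefer the CLI, a rough sketch of steps 2-4 (the bucket and key names are taken from the URL in the question; double-check them, and only do this for content that is meant to be public):

aws s3api put-public-access-block \
    --bucket mapbox-sheet-mapper-advanced-bucket \
    --public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"

aws s3api put-object-acl \
    --bucket mapbox-sheet-mapper-advanced-bucket \
    --key "SF Food Banks.csv" \
    --acl public-read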
We are currently using S3 as our backend for preserving the tf state file. While executing terraform plan we are receiving the below error:
Error: Forbidden: Forbidden
status code: 403, request id: 18CB0EA827E6FE0F, host id: 8p0TMjzvooEBPNakoRsO3RtbARk01KY1KK3z93Lwyvh1Nx6sw4PpRyfoqNKyG2ryMNAHsdCJ39E=
We have enabled the debug mode and below is the error message we have noticed.
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: Accept-Encoding: gzip
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4:
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4:
2020-05-31T20:02:20.842+0400 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: -----------------------------------------------------
2020/05/31 20:02:20 [ERROR] <root>: eval: *terraform.EvalRefresh, err: Forbidden: Forbidden
status code: 403, request id: 2AB56118732D7165, host id: 5sM6IwjkufaDg1bt5Swh5vcQD2hd3fSf9UqAtlL4hVzVaGPRQgvs1V8S3e/h3ta0gkRcGI7GvBM=
2020/05/31 20:02:20 [ERROR] <root>: eval: *terraform.EvalSequence, err: Forbidden: Forbidden
status code: 403, request id: 2AB56118732D7165, host id: 5sM6IwjkufaDg1bt5Swh5vcQD2hd3fSf9UqAtlL4hVzVaGPRQgvs1V8S3e/h3ta0gkRcGI7GvBM=
2020/05/31 20:02:20 [TRACE] [walkRefresh] Exiting eval tree: aws_s3_bucket_object.xxxxxx
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx": visit complete
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx: dynamic subgraph encountered errors
2020/05/31 20:02:20 [TRACE] vertex "aws_s3_bucket_object.xxxxxx": visit complete
We have tried reverting the code and the tfstate file to a working version, and also deleted the local tfstate file. Still the same error.
The S3 bucket policy is as below:
{
    "Sid": "DelegateS3Access",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::xxxxxx:role/Administrator"
    },
    "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectTagging"
    ],
    "Resource": [
        "arn:aws:s3:::xxxxxx/*",
        "arn:aws:s3:::xxxxxx"
    ]
}
The same role is being assumed by Terraform for execution, and it still fails. I have also tried emptying the bucket policy, but didn't see any success. I understand it is something to do with the bucket policy itself, but I am not sure how to fix it.
Any pointers to fix this issue are highly appreciated.
One thing to check is who you are (from an AWS API perspective), before running Terraform:
aws sts get-caller-identity
If the output is like this, then you are authenticated as an IAM User who will not have access to the bucket since it grants access to an IAM Role and not an IAM User:
{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/DevAdmin"
}
In that case, you'll need to configure AWS CLI to assume arn:aws:iam::xxxxxx:role/Administrator.
[profile administrator]
role_arn = arn:aws:iam::xxxxxx:role/Administrator
source_profile = user1
Read more on that process here:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html
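Then make sure the CLI and Terraform actually use that profile, for example by exporting it before running the plan (the profile name here is just the one from the snippet above):

export AWS_PROFILE=administrator
aws sts get-caller-identity   # should now show the assumed-role ARN
terraform plan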
If get-caller-identity returns something like this, then you are assuming the IAM Role and the issue is likely with the actions in the Bucket policy:
{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:assumed-role/Administrator/role-session-name"
}
According to the Backend type: S3 documentation, you also need s3:PutObject:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
        }
    ]
}
While I can't see why PutObject would be needed for plan, it is conceivably what is causing this Forbidden error.
You can also look for denied S3 actions in CloudTrail if you have enabled that.
The issue is fixed now. We had performed an S3 copy action prior to this, which copied all the S3 objects from account A to account B. The problem is that the copy command kept the source account's ownership and permissions on the copied objects, which left the current role unable to access them, resulting in the Forbidden 403 error.
We cleared all the objects in this bucket and ran the aws sync command instead of cp, which fixed the issue for us. Thank you Alain for the elaborate explanation; it surely helped us in fixing this issue.
This pointed us to the right issue.
Steps followed:
1. Back up all the S3 objects.
2. Empty the bucket.
3. Run terraform plan.
4. Once the changes are made to the bucket, run the aws sync command (see the sketch below).
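For the record, a hedged sketch of the kind of cross-account copy that avoids the ownership problem in the first place (bucket names are placeholders; the --acl flag hands the destination bucket's owner full control of the copied objects):

aws s3 sync s3://source-bucket-in-account-a s3://destination-bucket-in-account-b \
    --acl bucket-owner-full-control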
I am trying to use the CloudWatch monitoring scripts to send metrics for memory, disk and swap utilization from an EC2 instance to CloudWatch. In order to run the script I need to provide it with AWS credentials or an IAM role. When attempting to use an IAM role, I get the below error:
[ec2-user@ip-x-x-x-x aws-scripts-mon]$ /home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --auto-scaling=only --verbose --aws-iam-role=ACCT-CloudWatch-service-role
Using AWS credentials file </home/ec2-user/aws-scripts-mon/awscreds.conf>
WARNING: Failed to call EC2 to obtain Auto Scaling group name. HTTP Status Code: 0. Error Message: Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
WARNING: The Auto Scaling metrics will not be reported this time.
[ec2-user@ip-x-x-x-x aws-scripts-mon]$
This is what my IAM policy looks like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingNotificationTypes",
                "autoscaling:DescribeAutoScalingInstances",
                "ec2:DescribeTags",
                "autoscaling:DescribePolicies",
                "logs:DescribeLogStreams",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLoadBalancers",
                "autoscaling:*",
                "ssm:GetParameter",
                "logs:CreateLogGroup",
                "logs:PutLogEvents",
                "ssm:PutParameter",
                "logs:CreateLogStream",
                "cloudwatch:*",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:*",
                "kms:*",
                "autoscaling:DescribeLoadBalancerTargetGroups"
            ],
            "Resource": "*"
        }
    ]
}
What could I be missing?
The message states that the problem comes from the role it tries to use:
Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
Modify this part of your command to --aws-iam-role=ACCT-service-role (I am assuming that this role is the one configured correctly).
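So the corrected invocation would look something like this (a sketch based on the command in the question):

/home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl \
    --mem-util --mem-used --mem-avail \
    --auto-scaling=only --verbose \
    --aws-iam-role=ACCT-service-role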