S3 GET PreSigned URL 403 Forbidden from Viewing files in browsers - api

I am trying to view files from my S3 bucket using a pre-signed URL and the React package react-file-viewer.
Whenever I call the signed URL through react-file-viewer I get a 403 Forbidden error, but if I copy and paste the pre-signed URL into my browser's address bar I can view the file. I can also download the files and open them.
This is the response I get:
Request Method: HEAD
Status Code: 403 Forbidden
Remote Address: xx.x.xx.xx..x..x
Referrer Policy: strict-origin-when-cross-origin
In my S3 bucket, I have this CORS configuration:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE",
            "HEAD",
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2"
        ],
        "MaxAgeSeconds": 3000
    }
]
and my pre-signed URL is
var url = s3.getSignedUrl('getObject', {
    Bucket: BucketName,
    Key: fileURLData[fileSpot].file_url,
    Expires: signedUrlExpireSeconds,
})
Does this look like a CORS issue or something to do with my S3 bucket permissions?
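The response above shows the failing request is a HEAD rather than a GET, so one thing worth checking before digging into CORS is whether the pre-signed URL only works for the verb it was signed for. A minimal sketch using Python's requests library (the URL value is a placeholder for whatever getSignedUrl returned):

import requests

# Placeholder: paste the URL returned by s3.getSignedUrl('getObject', ...) here.
presigned_url = "https://my-bucket.s3.amazonaws.com/my-key?X-Amz-Signature=..."

# A URL signed for getObject covers a GET; a HEAD to the same URL will usually
# come back 403 because the HTTP method is part of what gets signed.
print("GET :", requests.get(presigned_url).status_code)
print("HEAD:", requests.head(presigned_url).status_code)

If the GET succeeds while the HEAD returns 403, the 403 is coming from the signature check rather than from the CORS configuration.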

Related

Access denied when getting transcription

My setup is the following:
React-native app client -> AWS API Gateway -> AWS Lambda function -> AWS S3 -> AWS Transcribe -> AWS S3
I am successfully able to upload an audio file to an S3 bucket from the Lambda, start the transcription, and even access the result manually in the S3 bucket. However, when I try to access the JSON file with the transcription data using the TranscriptFileUri, I get a 403 response.
On the s3 bucket with the transcriptions I have the following CORS configuration:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [
            "ETag"
        ]
    }
]
My lambda function code looks like this:
import time
import boto3
import requests

client = boto3.client('transcribe')

response = client.start_transcription_job(
    TranscriptionJobName=jobName,
    LanguageCode='en-US',
    MediaFormat='mp4',
    Media={
        'MediaFileUri': s3Path
    },
    OutputBucketName='my-transcription-bucket',
    OutputKey=str(user_id) + '/'
)

# Poll until the transcription job finishes.
while True:
    result = client.get_transcription_job(TranscriptionJobName=jobName)
    if result['TranscriptionJob']['TranscriptionJobStatus'] in ['COMPLETED', 'FAILED']:
        break
    time.sleep(5)

if result['TranscriptionJob']['TranscriptionJobStatus'] == "COMPLETED":
    data = result['TranscriptionJob']['Transcript']['TranscriptFileUri']
    data = requests.get(data)
    print(data)
In CloudWatch, when printing the response, I get the following: <Response [403]>
As far as I can tell, your code is invoking requests.get(data) where data is the TranscriptFileUri. What does that URI look like? Is it signed? If not, as I suspect, then you cannot use requests to get the file from S3 (it would have to be a signed URL or a public object for this to work).
You should use an authenticated mechanism such as get_object.
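As a rough illustration of that suggestion, the requests.get call could be replaced with a boto3 get_object against the output bucket; the bucket name, key layout, and variable names below are assumptions lifted from the start_transcription_job call above:

import json
import boto3

s3 = boto3.client('s3')

# Assumed to match OutputBucketName / OutputKey from start_transcription_job;
# with a key prefix ending in '/', Transcribe typically writes <job name>.json under it.
bucket = 'my-transcription-bucket'
key = str(user_id) + '/' + jobName + '.json'

obj = s3.get_object(Bucket=bucket, Key=key)
transcript = json.loads(obj['Body'].read())
print(transcript['results']['transcripts'][0]['transcript'])

This keeps the read authenticated with the Lambda's IAM role instead of relying on the unsigned TranscriptFileUri.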

CORS error from an S3 static website to an API

I'm using S3 static website hosting for a Vue 3 frontend, and from this static website I'm trying to make an AJAX request to an API.
The request gets a 200 result but with a CORS issue:
Reason: CORS header 'Access-Control-Allow-Origin' missing
I'm using axios for the AJAX request:
axios.post(
    'https://api',
    params,
    {
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
        }
    }
);
I cannot add "Access-Control-Allow-Origin": "*" to the request headers because it will trigger a preflight OPTIONS request, and my target API doesn't allow OPTIONS requests.
Here's my S3 cors config
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "POST"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
I'm also using CloudFront to bind my static website to a nice URL. I have allowed all HTTP methods and I have tried pretty much every origin request and response header policy.
I have tried pretty much everything I could get from the docs, but now I'm out of ideas. What am I missing? Please help.
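One thing to keep in mind is that the S3/CloudFront CORS configuration above only affects responses served from the bucket or distribution; the Access-Control-Allow-Origin header on the axios call has to be returned by the API you are posting to. Purely as a hypothetical sketch of the server side (the real API's stack isn't described here), a Python/Flask handler would look something like:

from flask import Flask, make_response

app = Flask(__name__)

# Hypothetical endpoint standing in for the real API. The point is only that
# Access-Control-Allow-Origin is a *response* header the API sends back,
# not something the browser client adds to the request.
@app.route('/submit', methods=['POST', 'OPTIONS'])
def submit():
    resp = make_response('ok', 200)
    resp.headers['Access-Control-Allow-Origin'] = 'https://my-static-site.example'  # or '*'
    resp.headers['Access-Control-Allow-Headers'] = 'Content-Type'
    return resp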

Sharing a GCP image using the API

I have been trying to share my GCP image with other accounts using the API... in the UI I can do it under permissions by adding members...
I used the following URL with the POST request:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/images/IMAGE_NAME:getIamPolicy
with headers 'Content-Type': 'application/json; charset=utf-8'
and with data:
{
    "version": "0",
    "bindings": [
        {
            "members": ["user:mymailid#gmail.com"],
            "role": "roles/compute.imageUser"
        }
    ]
}
passing the authorization bearer key.
After hitting it in Postman, with curl, or with a Python request, I get a 404 Not Found response.
I also enabled the API permissions using the CLI with gcloud services enable pubsub.googleapis.com.
What extra do I need to pass to make this work?
Hoping I will get help from someone... Thanks in advance.
#Ganesh
To set the IAM policy, you need to use a different URL.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/images/IMAGE_NAME/setIamPolicy
json({
    "version": "0",
    "bindings": [
        {
            "members": ["user:mymailid#gmail.com"],
            "role": "roles/compute.imageUser"
        }
    ]
})
Note:
replace projectId
replace imagename
reference:
google-docs, collection, Dothttp
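A rough sketch of that setIamPolicy call from Python, mirroring the body shown above, with the bearer token in an Authorization header (PROJECT_ID, IMAGE_NAME, USER_EMAIL, and the token value are placeholders):

import requests

# Placeholder token, e.g. the output of `gcloud auth print-access-token`.
access_token = "ya29...."

url = ("https://compute.googleapis.com/compute/v1/projects/"
       "PROJECT_ID/global/images/IMAGE_NAME/setIamPolicy")

body = {
    "bindings": [
        {
            "members": ["user:USER_EMAIL"],
            "role": "roles/compute.imageUser"
        }
    ]
}

resp = requests.post(
    url,
    json=body,
    headers={"Authorization": "Bearer " + access_token,
             "Content-Type": "application/json; charset=utf-8"},
)
print(resp.status_code, resp.text)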

How do you specify an ACL policy when creating an S3 signed URL with Clojure's Amazonica?

I'm trying to make an upload have an ACL of public-read. The docs are super thin for Amazonica, and after hours of tinkering, I'm no closer to figuring out how to accomplish this goal. In short, I can't figure out how to get it to sign the header.
Server side, my code looks like this.
(s3/generate-presigned-url
  creds
  {:bucket-name "mybucket"
   :method "PUT"
   :expires 10000
   :key "my-key"
   :cache-control "max-age=31557600;"
   :request-parameters {:x-amz-acl "public-read"}})
Client side, I grab the URL that this creates and do an XHR PUT request:
var xhr = new XMLHttpRequest();
xhr.open("PUT", signedUrl);
xhr.setRequestHeader('Cache-Control', 'max-age=31557600')
xhr.onload = ...
xhr.onerror = ...
xhr.send(file);
And this works perfectly, with the exception that it has the wrong ACL: "private" rather than "public"
Adding it client side is easy
var xhr = new XMLHttpRequest();
xhr.open("PUT", signedUrl);
xhr.setRequestHeader('Cache-Control', 'max-age=31557600')
xhr.setRequestHeader('x-amz-acl', 'public-read')
xhr.onload = ...
xhr.onerror = ...
xhr.send(file);
But the request of course fails due to HeadersNotSigned. I can't at all figure out how to add it server side so that they get signed. The SignedHeaders section never includes any additional parameters.
I've blindly tried all sorts of combos
(s3/generate-presigned-url
  creds
  {:headers {:x-amz-acl "public-read"}
   :x-amz-acl "public-read"
   :metadata {:x-amz-acl "public-read"}
   :signed-headers {:x-amz-acl "public-read"}
   :amz-acl "public-read"
   :x-amz-signed-headers {:x-amz-acl "public-read"}
   :X-Amz-SignedHeaders ["x-amz-acl"]
   :request-parameters {:x-amz-acl "public-read"}})
How do you add an ACL policy to a signed url?
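Whatever the exact Amazonica syntax is, the underlying requirement is that the ACL be included in the parameters that get signed, so that the x-amz-acl header the client later sends is covered by the signature. As a sketch of that idea in boto3 (not Amazonica; the bucket, key, and expiry just echo the values above):

import boto3

s3 = boto3.client('s3')

# Because 'ACL' is part of the signed parameters, the signature covers the
# x-amz-acl header, and the client must send 'x-amz-acl: public-read' on its PUT.
signed_url = s3.generate_presigned_url(
    'put_object',
    Params={
        'Bucket': 'mybucket',
        'Key': 'my-key',
        'ACL': 'public-read',
        'CacheControl': 'max-age=31557600',
    },
    ExpiresIn=10000,
)
print(signed_url)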
I don't have a direct answer to that, but a workaround for your consideration: making all objects in your s3 bucket default to public-read.
You can do this by adding this bucket policy to your bucket (replace bucketnm of course):
{
    "Id": "Policy1397632521960",
    "Statement": [
        {
            "Sid": "Stmt1397633323327",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::bucketnm/*",
            "Principal": {
                "AWS": [
                    "*"
                ]
            }
        }
    ]
}
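If you go the bucket-policy route, the same policy can also be applied from code; a small boto3 sketch (bucketnm is the same placeholder as above):

import json
import boto3

s3 = boto3.client('s3')

# Same policy as above: every object under the bucket becomes publicly readable.
public_read_policy = {
    "Id": "Policy1397632521960",
    "Statement": [
        {
            "Sid": "Stmt1397633323327",
            "Action": ["s3:GetObject"],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::bucketnm/*",
            "Principal": {"AWS": ["*"]}
        }
    ]
}

s3.put_bucket_policy(Bucket="bucketnm", Policy=json.dumps(public_read_policy))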

Configuring X-Frame-Options Response Header on AWS CloudFront and S3

I'd like to add the X-Frame-Options HTTP response header for static content hosted on Amazon S3 with a CloudFront cache. How can I add these headers?
You can add the x-frame-options header to the response from CloudFront / S3 using a Lambda@Edge function. The Lambda code runs within the local edge locations, but needs to be created and maintained in the us-east-1 region.
The example code here uses Node.js 6.10 to add the response header:
'use strict';
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;
    headers['x-frame-options'] = [{ "key": "X-Frame-Options", "value": "SAMEORIGIN" }];
    console.log(response.headers);
    callback(null, response);
};
Publish a version of the Lambda, then set that version's trigger configuration to the CloudFront origin-response event type for your path-pattern behavior.
The example code logs events to the CloudWatch Logs service for debugging purposes. If you don't already have one, you will need to set up a Lambda execution IAM role with a policy allowing CloudWatch Logs actions and a trust relationship that lets it be assumed by edgelambda.amazonaws.com and lambda.amazonaws.com.
Basic Lambda Execution Policy allowing logs to be written to CloudWatch:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*",
            "Effect": "Allow"
        }
    ]
}
Trust relationship allowing Lambda and Lambda@Edge to assume the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "edgelambda.amazonaws.com",
                    "lambda.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
It would be better if AWS simply allowed the x-frame-options header to be set in the GUI, but until then this solution works and will allow you to keep your security auditors happy.
It is now possible to set security headers via CloudFront:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-http-security-headers/
These include:
x-xss-protection: 1; mode=block
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
strict-transport-security: max-age=31536000
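The knowledge-center link above covers doing this with edge functions; CloudFront also offers response headers policies nowadays, which can be created from code and attached to a behavior. A sketch with boto3 (the policy name is arbitrary, and the field names follow the ResponseHeadersPolicyConfig structure as I understand it, so double-check against the current API reference):

import boto3

cloudfront = boto3.client('cloudfront')

# Creates a response headers policy carrying the four security headers listed above;
# the returned policy Id is then attached to the distribution's cache behavior.
resp = cloudfront.create_response_headers_policy(
    ResponseHeadersPolicyConfig={
        'Name': 'security-headers-example',  # arbitrary name
        'SecurityHeadersConfig': {
            'XSSProtection': {'Override': True, 'Protection': True, 'ModeBlock': True},
            'FrameOptions': {'Override': True, 'FrameOption': 'SAMEORIGIN'},
            'ContentTypeOptions': {'Override': True},
            'StrictTransportSecurity': {'Override': True, 'AccessControlMaxAgeSec': 31536000},
        },
    }
)
print(resp['ResponseHeadersPolicy']['Id'])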
Yes, you can set the headers in the Angular $http service like so:
var headers = {'X-Frame-Options': ...};
$http({method: '<TYPE>', headers: headers, url: <URL>, data: {}}).success(...);