How to resolve 403 Forbidden error when uploading to S3 - vue.js

I'm setting up a Vue.js / DropzoneJS app loosely based on kfei's vue-s3-dropzone app. It's designed to upload files (using a PUT method) to AWS S3 serverlessly, via an AWS Lambda function and an AWS S3 bucket. When I try to upload an image to the S3 bucket, I get a 403 error code together with:
Access to XMLHttpRequest at 'https://xxxxxxxxxxxxxxxxxxx' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status
Is there anything I can do to fix this?
This is what I've done:
Created an S3 bucket
Set up a bucket policy and a CORS configuration in the S3 bucket settings:
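(The original screenshots of the bucket policy and CORS configuration are not available. For reference only, a typical S3 CORS configuration allowing browser PUT uploads might look like the following; the exact rules used here are not known:)
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT", "POST", "GET"],
        "AllowedOrigins": ["http://localhost:8080"],
        "ExposeHeaders": []
    }
]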
Created a Lambda function that is supposed to sign a URL allowing a PUT upload for each file to S3; the role executing the Lambda has PutObject and PutObjectAcl permissions on the S3 bucket.
Set up an API Gateway API with an OPTIONS method (to pass the preflight check) and a PUT method, with these CORS settings:
b. The OPTIONS method has a Mock backend integration with the Integration Response returning the following:
Access-Control-Allow-Headers 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,x-requested-with'
Access-Control-Allow-Methods 'PUT,OPTIONS'
Access-Control-Allow-Origin '*'
c. The PUT method has:
"Access-Control-Allow-Origin": "*"
In AWS API Gateway: set up an API key and a usage plan
The lambda code:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var bucketName = process.env.AWS_BUCKET_NAME;

exports.handler = (event, context) => {
    if (!event.hasOwnProperty('contentType')) {
        context.fail({ err: 'Missing contentType' });
    }

    if (!event.hasOwnProperty('filePath')) {
        context.fail({ err: 'Missing filePath' });
    }

    var params = {
        Bucket: bucketName,
        Key: event.filePath,
        Expires: 3600,
        ContentType: event.contentType
    };

    s3.getSignedUrl('putObject', params, (err, url) => {
        if (err) {
            context.fail({ err });
        } else {
            context.succeed({ url });
        }
    });
};
Expected: Successful upload of files
Actual: Possible CORS issues.

getSignedUrl would work fine if you were uploading the file from an API client like Postman or a Node.js server, but since you state you are seeing a preflight check fail, I'm assuming you are using some kind of HTML form and frontend JS.
From the AWS JavaScript SDK Docs regarding getSignedUrl:
Note: Not all operation parameters are supported when using pre-signed
URLs. Certain parameters, such as SSECustomerKey, ACL, Expires,
ContentLength, or Tagging must be provided as headers when sending a
request. If you are using pre-signed URLs to upload from a browser and
need to use these fields, see createPresignedPost().
As you are setting the 'Expires' param when calling getSignedUrl and are sending from the browser, you need to use createPresignedPost instead of getSignedUrl in your Lambda code.
You will then need to POST instead of PUT from the browser to S3.
NB: Remember to update your CORS rules for S3 with POST
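For reference, a minimal sketch of what the Lambda side could look like with createPresignedPost (the Fields/Conditions values below are assumptions for illustration, not taken from your code):
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var bucketName = process.env.AWS_BUCKET_NAME;

exports.handler = (event, context) => {
    var params = {
        Bucket: bucketName,
        Expires: 3600, // URL validity in seconds
        Fields: {
            key: event.filePath
        },
        Conditions: [
            // optional size limit (0 to 10 MB), assumed here for illustration
            ['content-length-range', 0, 10 * 1024 * 1024]
        ]
    };

    s3.createPresignedPost(params, (err, data) => {
        if (err) {
            context.fail({ err });
        } else {
            // data.url is the S3 endpoint to POST to;
            // data.fields must be sent as form fields, with the file appended last
            context.succeed(data);
        }
    });
};
The browser then POSTs data.fields plus the file (file last) as multipart/form-data to data.url.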

Related

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
  <AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
  <RequestId>DKQ55DK3XJBYGKQ6</RequestId>
  <HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
    const s3 = new AWS.S3({
        region: 'eu-west-1',
        signatureVersion: 'v4'
    });
    const params = {
        'Bucket': 'my-bucket-name',
        'Key': 'my-file-name'
    };
    s3.getSignedUrl('getObject', params, (error, url) => {
        res.send(`<p>${url}</p>`)
    });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure you have the correct order of signature v4 headers/form data in your request.
I had the same exact issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
The error message, and other misleading error messages, can occur if you don't use the correct order of security headers. In my case I was using the endpoint to create a presigned URL for POSTing a file to upload it. In this case, you need to make sure that the security-relevant fields in your form data are in the correct order. For signatureVersion 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it's important to append your file AFTER the security data.
After that, the request works as expected.
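For illustration, a browser-side sketch (assuming the presigned-POST response exposes url and fields, as returned by createPresignedPost; the names here are illustrative) that appends the security fields first and the file last:
const uploadToPresignedPost = async (presigned, file) => {
    const formData = new FormData();
    // append the security-related fields in the order they were returned
    Object.entries(presigned.fields).forEach(([name, value]) => {
        formData.append(name, value);
    });
    // the file must come AFTER all the other fields
    formData.append('file', file);
    return fetch(presigned.url, { method: 'POST', body: formData });
};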
I can't say for certain but I'm guessing this may have something to do with you using the old SDK. Here it is w/ v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

// ...

const client = new S3Client({ region: 'eu-west-1' });
const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);

getSignedUrl(client, command)
    .then(url => res.send(`<p>${url}</p>`))
    .catch(error => console.log(error));

node S3 Object Storage Linode

I'm trying to use the aws-sdk to access my Linode S3-compatible bucket, but everything I try doesn't work. I'm not sure what the correct endpoint should be. For testing purposes my bucket is set to public read/write.
const s3 = new S3({
    endpoint: "https://linodeobjects.com",
    region: eu-central-1,
    accesKeyId: <accesKey>,
    secretAccessKey: <secretKey>,
});

const params = {
    Bucket: bucketName,
    Key: "someKey",
    Expires: 60,
};

const uploadURL = await s3.getSignedUrlPromise("putObject", params);
The error I'm getting:
code: 'CredentialsError',
time: 2021-07-15T08:29:50.000Z,
retryable: true,
originalError: {
    message: 'Could not load credentials from any providers',
    code: 'CredentialsError',
    time: 2021-07-15T08:29:50.000Z,
    retryable: true,
    originalError: {
        message: 'EC2 Metadata roleName request returned error',
        code: 'TimeoutError',
        time: 2021-07-15T08:29:49.999Z,
        retryable: true,
        originalError: [Object]
    }
}
It seems like a problem with the credentials of the environment that this code is executed in and not with the bucket permissions themselves.
The pre-signing of the URL is an operation that is done entirely locally. It uses local credentials (i.e., access key ID and secret access key) to create a sigv4 signature for the URL. This also means that whether or not the credentials used for signing the URL are valid is only checked at the moment the URL is used, and not at the moment of signing the URL itself.
The error simply indicates that, of all the ways the SDK tries to find credentials (more info here), none provided credentials it can use to sign the URL.
This might be unrelated, but according to the documentation, the endpoint should be the following: "The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object." Which, in the code example above, is not the case.
You should set the endpoint to be eu-central-1.linodeobjects.com. When using Linode object storage the region is not determined by the endpoint that you use.
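A minimal sketch of what that could look like, with the region-specific endpoint and credentials passed in explicitly (the environment variable names are assumptions):
const { S3 } = require('aws-sdk');

const s3 = new S3({
    endpoint: 'https://eu-central-1.linodeobjects.com', // region-specific Linode endpoint
    region: 'eu-central-1',
    accessKeyId: process.env.LINODE_ACCESS_KEY,     // assumed env var names
    secretAccessKey: process.env.LINODE_SECRET_KEY,
    signatureVersion: 'v4',
});

const getUploadUrl = async (bucketName) => {
    const params = {
        Bucket: bucketName,
        Key: 'someKey',
        Expires: 60,
    };
    return s3.getSignedUrlPromise('putObject', params);
};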

AWS S3 - No CORS set but Amplify GET requests are working?

I have an S3 bucket with no CORS configuration (it is empty), yet using AWS Amplify Storage.get I am able to successfully retrieve object URLs. I expected a 403 Forbidden response.
Confusingly, Storage.list does generate a 403 Forbidden response, as I would have expected.
Code Sample 1 - CORS Not Set in S3 Bucket Permissions
const URL = await Storage.get('file.jpg', {
    level: 'private',
    contentType: 'image/jpeg',
    expires: 5,
})
Expected Result: 403 Error, Forbidden. Access to XMLHttpRequest at 'destination_url' from origin 'source_url' has been blocked by CORS policy (because the default is to deny cross-origin requests unless a policy exists)
Actual Result: No error - Storage.get returned a valid URL allowing access to file.jpg
Code Sample 2 - CORS Not Set in S3 Bucket Permissions
Storage.list('/', { level: 'private' })
    .then(result => {
        // process response
    })
    .catch(err => console.log(err));
Result is as expected: 403 Error, Forbidden. Access to XMLHttpRequest at 'destination_url' from origin 'source_url' has been blocked by CORS policy
I can make Code Sample 2 work if I set CORS in the S3 Bucket Permissions.
What am I missing? I want CORS to apply to Storage.get requests.

How to Upload a CSV file larger than 10MB on S3 using Lambda / API Gateway

Hello, I am new here on AWS. I was trying to upload a CSV file to my S3 bucket, but when the file is larger than 10MB it returns "{"message":"Request Entity Too Large"}". I am using Postman to do this. Below is the current code I created, but in the future I will add some validation to change the name of the uploaded file into my own format. Is there any way to do this with this kind of code, or do you have any suggestions that can help with the issue I have encountered?
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucket = process.env.UploadBucket;
const prefix = "csv-files/";
const filename = "file.csv";

exports.handler = (event, context, callback) => {
    let data = event.body;
    let buff = new Buffer(data, 'base64');
    let text = buff.toString('ascii');
    console.log(text);

    let textFileSplit = text.split('?');

    //get filename split
    let getfilename = textFileSplit[0].split('"');
    console.log(textFileSplit[0]);
    console.log(textFileSplit[1]);

    // //remove lower number on csv
    let csvFileSplit = textFileSplit[1].split('--')

    const params = {
        Bucket: bucket,
        Key: prefix + getfilename[3],
        Body: csvFileSplit[0]
    };

    s3.upload(params, function (err, data) {
        if (err) {
            console.log('error uploading');
            callback(err);
        }
        console.log("Uploaded")
        callback(null, "Success")
    });
}
For scenarios like this one, we normally use a different approach.
Instead of sending the file to Lambda through API Gateway, you send the file directly to S3. This will make your solution more robust and cost you less, because you don't need to transfer the data through API Gateway and you don't need to process the entire file inside the Lambda.
The question is: how do you do this in a secure way, without opening your S3 bucket to everyone on the internet and letting them upload anything to it? You use S3 signed URLs. Signed URLs are a feature of S3 that allows you to bake into the URL the permissions needed to upload an object to a secured bucket.
In summary, the process will be:
Frontend sends a request to API Gateway;
API Gateway forwards the request to a Lambda function;
The Lambda function generates a signed URL with the permissions to upload the object to a specific S3 bucket;
API Gateway sends the Lambda function's response back to the frontend;
The frontend uploads the file to the signed URL.
To generate the signed URL you will need to use the normal aws-sdk in your Lambda function. There you will call the method getSignedUrl (the signature depends on your language). You can find more information about signed URLs here.
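For reference, a minimal Node.js sketch of such a Lambda function (the bucket environment variable, request body shape and expiry below are assumptions for illustration):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    const body = JSON.parse(event.body);

    // presigned PUT URL for the requested key; the 300 s expiry is an arbitrary choice
    const uploadUrl = await s3.getSignedUrlPromise('putObject', {
        Bucket: process.env.UPLOAD_BUCKET, // assumed env var
        Key: body.filename,
        ContentType: body.contentType,
        Expires: 300
    });

    return {
        statusCode: 200,
        headers: { 'Access-Control-Allow-Origin': '*' },
        body: JSON.stringify({ uploadUrl })
    };
};
The frontend then PUTs the file to uploadUrl using the same Content-Type.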

Redirecting AWS API Gateway to S3 Binary

I'm trying to download large binaries from S3 via an API Gateway URL. Because the maximum download size in API Gateway is limited, I thought I could just provide the base Amazon S3 URL (in the Swagger file) and append the folder/item of the binary I want to download.
But all I can find is redirecting API Gateway via a Lambda function, which I don't want.
I want a swagger file where the redirect is already configured.
So if I call <api_url>/folder/item I want to be redirected to s3-url/folder/item
Is this possible? And if so, how?
Example:
S3: https://s3.eu-central-1.amazonaws.com/folder/item (item = large binary file)
API Gateway: https://<id>.execute-api.eu-central-1.amazonaws.com/stage/folder/item -> redirect to s3 url
I am not sure if you can redirect the request to a presigned S3 URL via API Gateway without a backend to calculate the presigned S3 URL. The presigned S3 URL feature is provided by the SDK rather than by an API. You need to use a Lambda function to calculate the presigned S3 URL and return it.
var AWS = require('aws-sdk');
AWS.config.region = "us-east-1";
var s3 = new AWS.S3({signatureVersion: 'v4'});
var BUCKET_NAME = 'my-bucket-name';

exports.handler = (event, context, callback) => {
    // presign a GET for the requested path so the caller can be redirected to it
    var params = {Bucket: BUCKET_NAME, Key: event.path};
    s3.getSignedUrl('getObject', params, function (err, url) {
        console.log('The URL is', url);
        callback(null, url);
    });
};