In a Fargate container, why can I CRUD S3 but can't create a presigned POST - amazon-s3

I'm using Node in a Docker container. Locally I use my IAM keys for creating, reading and deleting files in an S3 bucket, as well as for creating pre-signed POSTs. When running on a Fargate container, I create a taskRole and attach a policy that gives it full access to S3.
taskRole.attachInlinePolicy(
  new iam.Policy(this, `${clientPrefix}-task-policy`, {
    statements: [
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['S3:*'],
        resources: ['*'],
      }),
    ],
  })
);
With that role, I can create, read and delete files with no issues from the API. When the API tries to create a pre-signed post however, I get the error:
Error: Unable to create a POST object policy without a bucket, region, and credentials
It seems super strange to me that I can run the other operations but it fails with the presigned POST, especially since my S3 actions are all allowed.
const post: aws.S3.PresignedPost = await s3.createPresignedPost({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
Here is the code I use. I am logging the bucket and key, so I'm positive they are valid values. One thought I had: when running locally I run aws.configure to set my keys, but in Fargate I purposely omit that. I assumed it was picking up the right credentials, since the other S3 operations work without fail. Am I approaching this right?

When using IAM role credentials with the AWS SDK, you must either use the asynchronous (callback) version of createPresignedPost, or guarantee that your credentials have been resolved before calling it without a callback (the synchronous form).
Something like this will work with IAM based credentials:
const s3 = new AWS.S3()

const _presign = params => {
  return new Promise((res, rej) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) return rej(err)
      return res(data)
    })
  })
}
// await _presign(...) <- works
// await s3.createPresignedPost(...) <- won't work
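A minimal usage sketch (assuming this runs inside an async route handler, with bucket and key in scope as in the question's code):
const post = await _presign({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
// post.url and post.fields can then be returned to the client to perform the upload.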
Refer: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property

Related

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
  <AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
  <RequestId>DKQ55DK3XJBYGKQ6</RequestId>
  <HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4'
  });
  const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
  };
  s3.getSignedUrl('getObject', params, (error, url) => {
    res.send(`<p>${url}</p>`)
  });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execute role supply those? Am I barking up the wrong tree?
tl;dr: Make sure the signature v4 headers/form-data fields are in the correct order in your request.
I had the exact same issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
This error message, and other misleading error messages, can occur if you don't use the correct order of security fields. In my case I was using the endpoint to create a presigned URL for POSTing a file, to upload it. In that case you need to make sure that the security-relevant data in your form-data is in the correct order. For signatureVersion 's3v4' (Signature Version 4) it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it's important that the file itself comes AFTER the security fields.
With that order, the request works as expected.
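For illustration, a minimal sketch of building such a request in Node 18+ (where fetch, FormData, and Blob are built in); presigned is assumed to be the object returned by s3.createPresignedPost(), and the helper name is made up:
// Append every policy/signature field from the presigned POST first,
// in the order they are provided, and append the file itself LAST.
async function uploadWithPresignedPost(presigned, fileBuffer, fileName) {
  const form = new FormData();
  for (const [name, value] of Object.entries(presigned.fields)) {
    form.append(name, value);   // key, x-amz-*, policy, x-amz-signature, ...
  }
  form.append('file', new Blob([fileBuffer]), fileName); // the file goes last
  const resp = await fetch(presigned.url, { method: 'POST', body: form });
  if (!resp.ok) throw new Error(`Upload failed with status ${resp.status}`);
}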
I can't say for certain but I'm guessing this may have something to do with you using the old SDK. Here it is w/ v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
// ...
const client = new S3Client({ region: 'eu-west-1' });
const params = {
  'Bucket': 'my-bucket-name',
  'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);
// In v3, getSignedUrl returns a promise instead of taking a callback,
// so this needs to run inside an async route handler.
const url = await getSignedUrl(client, command, { expiresIn: 60 });
res.send(`<p>${url}</p>`);

node S3 Object Storage Linode

I'm trying to use the aws-sdk to access my Linode S3-compatible bucket, but everything I try doesn't work. I'm not sure what the correct endpoint should be. For testing purposes my bucket is set to public read/write.
const s3 = new S3({
  endpoint: "https://linodeobjects.com",
  region: eu-central-1,
  accesKeyId: <accesKey>,
  secretAccessKey: <secretKey>,
});

const params = {
  Bucket: bucketName,
  Key: "someKey",
  Expires: 60,
};

const uploadURL = await s3.getSignedUrlPromise("putObject", params);
The error I'm getting:
{
  code: 'CredentialsError',
  time: 2021-07-15T08:29:50.000Z,
  retryable: true,
  originalError: {
    message: 'Could not load credentials from any providers',
    code: 'CredentialsError',
    time: 2021-07-15T08:29:50.000Z,
    retryable: true,
    originalError: {
      message: 'EC2 Metadata roleName request returned error',
      code: 'TimeoutError',
      time: 2021-07-15T08:29:49.999Z,
      retryable: true,
      originalError: [Object]
    }
  }
}
It seems like a problem with the credentials of the environment that this code is executed in and not with the bucket permissions themselves.
The pre-signing of the URL is an operation that is done entirely locally. It uses local credentials (i.e., access key ID and secret access key) to create a sigv4 signature for the URL. This also means that whether or not the credentials used for signing the URL are valid is only checked at the moment the URL is used, and not at the moment of signing the URL itself.
The error simply indicates that, out of all the ways the SDK tries to find credentials (the default credential provider chain), it cannot find any it can use to sign the URL.
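To illustrate that point, here is a small hedged sketch: with explicitly supplied (and obviously fake) keys the signing call still succeeds locally, and the failure only appears when the URL is actually requested. The key values and bucket name are made up.
const AWS = require('aws-sdk');

// Signing is a purely local computation: this produces a URL even though
// the credentials are fake. Only using the URL would return an auth error.
const s3 = new AWS.S3({
  accessKeyId: 'AKIAFAKEFAKEFAKEFAKE',   // not a real key
  secretAccessKey: 'not-a-real-secret',
  region: 'eu-central-1',
});

const url = s3.getSignedUrl('getObject', { Bucket: 'some-bucket', Key: 'someKey' });
console.log(url); // prints a signed URL; requesting it fails with an auth error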
This might be unrelated, but according to the documentation, the endpoint should be the following: "The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object." In the code example above, that is not the case.
You should set the endpoint to be eu-central-1.linodeobjects.com. When using Linode Object Storage, the endpoint is not derived from the region setting, so the region has to be part of the endpoint itself.
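Putting both points together, a hedged sketch of a corrected client configuration might look like the following; the credential environment variable names are placeholders, and note that the SDK option is spelled accessKeyId (a misspelled option is silently ignored, which forces the SDK back onto the provider chain).
const AWS = require('aws-sdk');

// Region-specific Linode endpoint; credentials passed explicitly so the SDK
// does not fall back to the EC2 metadata provider chain.
const s3 = new AWS.S3({
  endpoint: 'https://eu-central-1.linodeobjects.com',
  region: 'eu-central-1',
  accessKeyId: process.env.LINODE_ACCESS_KEY,     // placeholder env vars
  secretAccessKey: process.env.LINODE_SECRET_KEY,
  signatureVersion: 'v4',
});

const getUploadUrl = (bucketName) =>
  s3.getSignedUrlPromise('putObject', {
    Bucket: bucketName,
    Key: 'someKey',
    Expires: 60,
  });
With explicit credentials and a region-specific endpoint, getSignedUrlPromise no longer needs to consult the EC2 metadata service.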

putObject uploads broken files to S3, but only through the API

I have a problem when I try to upload a file to S3 through my API.
I use the putObject method, and the thing that surprises me is that it works when I run my serverless application locally with serverless-offline: I can push the whole file to S3 and open it.
But when I deploy my application to API Gateway and use the API Gateway route, the uploaded file is smaller than the original and I can't open it; I'm told the file is corrupted.
If anyone has an idea, it would really help me.
Thanks
My putObject method looks like this
const bucketName = _.get(getBucket, 'bucketName');
const extension = _.get(data, 'media.filename').split('.').pop();
const keyName = _.get(data, 'keyName') + '.' + extension;
const content = _.get(data, 'media.content');

let params = {
  Bucket: bucketName,
  Key: keyName,
  ContentType: _.get(data, 'media.contentType'),
  Body: content,
  ACL: 'public-read'
};

return new Promise((resolve, reject) => {
  s3.putObject(params, function (err, data) {
    err
      ? reject(console.log(err))
      : resolve(response(200, "Object Added"));
  });
});
Uploading files via API Gateway is not a good idea; I'd strongly advise using a presigned URL instead: https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
If you want to upload through API Gateway, then the information provided is not enough. Try to log the received Lambda event; that should help.

How to upload a csv file larger than 10MB to S3 using Lambda / API Gateway

Hello, I am new to AWS. I was trying to upload a CSV file to my S3 bucket, but when the file is larger than 10 MB it returns {"message":"Request Entity Too Large"}. I am using Postman to do this. Below is the current code I created; in the future I will add some validation to rename the uploaded file to my own format. Is there any way to do this with this kind of code, or do you have any suggestion that can help me with the issue I have encountered?
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucket = process.env.UploadBucket;
const prefix = "csv-files/";
const filename = "file.csv";

exports.handler = (event, context, callback) => {
  let data = event.body;
  let buff = new Buffer(data, 'base64');
  let text = buff.toString('ascii');
  console.log(text);
  let textFileSplit = text.split('?');

  // get filename split
  let getfilename = textFileSplit[0].split('"');
  console.log(textFileSplit[0]);
  console.log(textFileSplit[1]);

  // remove lower number on csv
  let csvFileSplit = textFileSplit[1].split('--');

  const params = {
    Bucket: bucket,
    Key: prefix + getfilename[3],
    Body: csvFileSplit[0]
  };

  s3.upload(params, function (err, data) {
    if (err) {
      console.log('error uploading');
      callback(err);
    }
    console.log("Uploaded");
    callback(null, "Success");
  });
}
For scenarios like this one, we normally use a different approach.
Instead of sending the file to Lambda through API Gateway, you send the file directly to S3. This makes your solution more robust and cheaper, because you don't need to transfer the data through API Gateway and you don't need to process the entire file inside the Lambda.
The question is: how do you do this in a secure way, without opening your S3 bucket so that anyone on the internet can upload anything to it? You use S3 signed URLs. Signed URLs are a feature of S3 that lets you bake the required upload permissions for a secured bucket into the URL itself.
In summary the process is:
The frontend sends a request to API Gateway;
API Gateway forwards the request to a Lambda function;
The Lambda function generates a signed URL with permission to upload the object to a specific S3 bucket;
API Gateway sends the Lambda function's response back to the frontend, and the frontend uploads the file to the signed URL.
To generate the signed URL you use the normal aws-sdk in your Lambda function and call the method getSignedUrl (the exact signature depends on your language). You can find more information in the AWS documentation on presigned URLs.
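As an illustration, here is a minimal sketch of step 3 in a Node.js Lambda handler, using the v2 aws-sdk as in the question; the key naming scheme, content type, and expiry below are assumptions, not part of the original code.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Generate a presigned PUT URL instead of pushing the file body through
  // API Gateway; the client then uploads the CSV directly to S3.
  const uploadUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: process.env.UploadBucket,
    Key: 'csv-files/' + Date.now() + '.csv',  // placeholder key scheme
    ContentType: 'text/csv',
    Expires: 300,                             // URL valid for 5 minutes
  });
  return {
    statusCode: 200,
    body: JSON.stringify({ uploadUrl }),
  };
};
The frontend then performs an HTTP PUT of the raw CSV to the returned uploadUrl, so the file body never passes through API Gateway and its 10 MB payload limit.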

AWS Bucket Undeletable

The AWS S3 web console lists buckets that were deleted. Selecting the bucket and trying to empty or delete it causes the "Empty bucket" or "Delete bucket" modal Confirm button to fail silently.
If you click into the bucket and try to upload a file, you get an error message, "Error Data not found". If you try to create a folder, you get "Error Failed to create folder with name ''." If you try to change any Property, Permission, or Management setting, you also get error messages.
If you try to create a bucket with the same name (presumably to overwrite the old bucket), you get an error message indicating that the bucket name is taken.
Libraries such as s3-upload are similarly unable to delete or overwrite the bucket.
AWS (Node) SDK:
var aws = require('aws-sdk');
var s3 = new aws.S3();

s3.listBuckets({}, (error, data) => {
  console.log(error);
  console.log(data);
});
returns the bucket, even though it should not exist.
{
  Buckets: [
    { Name: 'bucket.that.shouldnt.exist', CreationDate: 2017-02-20T01:51:19.000Z },
  ],
  Owner: {
    DisplayName: '...',
    ID: '...'
  }
}
and
s3.deleteBucket({
  Bucket: 'bucket.that.shouldnt.exist'
}, (error, data) => {
  console.log(error);
  console.log(data);
});
returns
{ NoSuchBucket: The specified bucket does not exist
    at Request.extractError (.../aws-sdk/lib/services/s3.js:585:35)
    at Request.callListeners (.../aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (.../aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (.../aws-sdk/lib/request.js:683:14)
    at Request.transition (.../aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (.../aws-sdk/lib/state_machine.js:14:12)
    at .../aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (.../aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (.../aws-sdk/lib/request.js:685:12)
    at Request.callListeners (.../aws-sdk/lib/sequential_executor.js:116:18)
  message: 'The specified bucket does not exist',
  code: 'NoSuchBucket',
  region: null,
  time: 2019-06-04T16:56:35.537Z,
  requestId: '...',
  extendedRequestId: '...',
  cfId: undefined,
  statusCode: 404,
  retryable: false,
  retryDelay: 33.90621042754991 }
Amazon S3 is a large-scale, distributed system. Deleting an S3 bucket is quite different to deleting a local folder on your hard drive.
After you initiate the deletion of a bucket, the bucket name becomes unavailable for a certain amount of time. You cannot re-create the bucket, re-delete the bucket, get objects from the bucket, or put objects to the bucket.
The amount of time that a recently-deleted bucket name cannot be reused to create a new bucket varies. If you owned the bucket name previously and you are trying to re-create the bucket in the same region then you can typically re-create it almost immediately. If you are not the previous owner or you are trying to re-create the bucket name in a different region then the bucket name will typically be unavailable for hours.
Note that as a general rule, if you intend to re-use the same bucket name then it is typically better to simply empty the bucket rather than delete and re-create the bucket. Another customer could create the same-named bucket between your deletion and re-creation attempts, thus causing you to lose control of the bucket name (unlikely, of course, but possible).
For some reason, the AWS createBucket API worked (although the console did not):
var aws = require('aws-sdk');
var s3 = new aws.S3();

s3.createBucket({
  Bucket: 'bucket.that.shouldnt.exist'
}, (error, data) => {
  console.log(error);
  console.log(data);
});
After that, you should be able to perform operations on the re-created bucket as normal.
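For example, a hedged follow-up using the same v2 SDK (promise form shown for brevity): once createBucket succeeds, the deleteBucket call that previously returned NoSuchBucket should go through, since the re-created bucket is empty.
const bucketName = 'bucket.that.shouldnt.exist';

s3.createBucket({ Bucket: bucketName }).promise()
  .then(() => s3.deleteBucket({ Bucket: bucketName }).promise())
  .then(() => console.log('bucket removed'))
  .catch(console.error);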