S3 temporary URLs for Bucket Contents without Key Information

I am using the code below to return temporary URLs for files in an S3 bucket that are displayed on my website.
import AWS from 'aws-sdk'; // AWS SDK for JavaScript v2

export function PreURL(IAM_USER_KEY, IAM_USER_SECRET, BUCKET_NAME, Key1) {
  const s3bucket = new AWS.S3({
    endpoint: 's3-us-west-1.amazonaws.com',
    signatureVersion: 'v4',
    region: 'us-west-1',
    accessKeyId: IAM_USER_KEY,
    secretAccessKey: IAM_USER_SECRET
  });
  const params = { Bucket: BUCKET_NAME, Key: Key1, Expires: 60 }; // URL expires in 60 seconds
  const url = s3bucket.getSignedUrl('getObject', params);
  console.log('The image URL is', url);
  return url.toString();
} // End of PreURL
It returns a URL along these lines:
https://BUCKET.s3-REGION.amazonaws.com/FILE?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=IAM_KEY%2F20200909%2FREGION%2Fs3%2Faws4_request&X-Amz-Date=TIME&X-Amz-Expires=60&X-Amz-Signature=SIGNATURE&X-Amz-SignedHeaders=HEADERS#t=0.1
This URL troubles me somewhat, as it gives away everything but the secret_access_key. I know the bucket cannot be accessed without it, but I would prefer a temporary URL that exposes less information (the bucket name, the .amazonaws.com domain, the IAM access key ID). Is there another way of creating a temporary URL for files in an S3 bucket that does not give away so much information?

No, there is no facility to create an 'alias' for objects.
I understand that you are worried that it is showing your Bucket and Key values, but this is the normal way that pre-signed URLs operate.
If you really don't like what it is doing, then you could:
Copy the files to a 'random' Key
Provide the pre-signed URL for the temporary object
Use a Lifecycle Policy to delete the objects after 1 day
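If you go that route, here is a minimal sketch with the v2 SDK (the 'tmp/' prefix and the randomKeyUrl helper are illustrative, not an AWS feature). Note that the bucket name and access key ID still appear in the signed URL; only the real key is hidden:
const AWS = require('aws-sdk');
const crypto = require('crypto');

const s3 = new AWS.S3({ signatureVersion: 'v4', region: 'us-west-1' });

// Copy the real object to a random, meaningless key and sign that copy.
// A lifecycle rule on the 'tmp/' prefix can then expire the copies after a day.
async function randomKeyUrl(bucket, realKey) {
  const tempKey = 'tmp/' + crypto.randomBytes(16).toString('hex');
  await s3.copyObject({
    Bucket: bucket,
    CopySource: `${bucket}/${realKey}`, // URL-encode realKey if it contains special characters
    Key: tempKey
  }).promise();
  return s3.getSignedUrl('getObject', { Bucket: bucket, Key: tempKey, Expires: 60 });
}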

Related

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
  <AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
  <RequestId>DKQ55DK3XJBYGKQ6</RequestId>
  <HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4'
  });
  const params = {
    Bucket: 'my-bucket-name',
    Key: 'my-file-name'
  };
  s3.getSignedUrl('getObject', params, (error, url) => {
    res.send(`<p>${url}</p>`);
  });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execute role supply those? Am I barking up the wrong tree?
tl;dr: Make sure the signature v4 headers/form-data fields are in the correct order in your request.
I had the exact same issue.
I am not sure if this is the solution for everyone encountering the problem, but I learned the following:
This error message (and other misleading error messages) can occur if you don't use the correct order of security fields. In my case I was creating a presigned URL for POSTing a file upload. In that case, you need to make sure the security-relevant data in your form-data is in the correct order. For signature version 4 it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL for uploading a file, it's important that the file field comes AFTER the security fields.
With that order, the request works as expected.
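As a hedged sketch of that ordering in Node (using the form-data package; the bucket, key, and file path are placeholders):
const AWS = require('aws-sdk');
const FormData = require('form-data'); // npm install form-data
const fs = require('fs');

const s3 = new AWS.S3({ region: 'eu-west-1', signatureVersion: 'v4' });

const params = { Bucket: 'my-bucket-name', Fields: { key: 'my-file-name' }, Expires: 60 };
s3.createPresignedPost(params, (err, data) => {
  if (err) throw err;
  const form = new FormData();
  // Append every returned security field (key, x-amz-algorithm, x-amz-credential,
  // x-amz-date, policy, x-amz-signature, ...) BEFORE the file itself.
  Object.entries(data.fields).forEach(([name, value]) => form.append(name, value));
  form.append('file', fs.createReadStream('./my-file-name')); // the file goes last
  form.submit(data.url, (err2, res) => {
    if (err2) throw err2;
    console.log('Upload status:', res.statusCode); // S3 returns 204 on success
  });
});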
I can't say for certain but I'm guessing this may have something to do with you using the old SDK. Here it is w/ v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
// ...
const client = new S3Client({ region: 'eu-west-1' });
const params = {
  Bucket: 'my-bucket-name',
  Key: 'my-file-name'
};
const command = new GetObjectCommand(params);
// getSignedUrl returns a promise in v3, so await it (e.g. inside an async route handler)
const url = await getSignedUrl(client, command, { expiresIn: 60 });
res.send(`<p>${url}</p>`);

In Fargate container why can I CRUD S3 but can't create a presigned post

I'm using Node in a Docker container, and locally I use my IAM keys for creating, reading, and deleting files in an S3 bucket, as well as for creating presigned POSTs. When up on a Fargate container, I create a taskRole and attach a policy which gives it full access to S3.
taskRole.attachInlinePolicy(
  new iam.Policy(this, `${clientPrefix}-task-policy`, {
    statements: [
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['S3:*'],
        resources: ['*'],
      }),
    ],
  })
);
With that role, I can create, read and delete files with no issues from the API. When the API tries to create a pre-signed post however, I get the error:
Error: Unable to create a POST object policy without a bucket, region, and credentials
It seems super strange to me that I can run the other operations, but it fails with the presignedPOST, especially since my S3 actions are all allowed.
const post: aws.S3.PresignedPost = await s3.createPresignedPost({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
Here is the code I use. I am logging the bucket and key, so I'm positive they are valid values. One thought I had: when running locally I run aws.configure to set my keys, but in Fargate I purposefully omit that. I assumed it was picking up the right credentials, since the other S3 operations work without fail. Am I approaching this right?
When using IAM role credentials with the AWS SDK, you must either use the asynchronous (callback) version of createPresignedPost or guarantee that your credentials have been resolved before calling the synchronous version of this method.
Something like this will work with IAM-based credentials:
const s3 = new AWS.S3()

const _presign = params => {
  return new Promise((res, rej) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) return rej(err)
      return res(data)
    })
  })
}
// await _presign(...) <- works
// await s3.createPresignedPost(...) <- won't work
Refer: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
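For the second option, a hedged sketch (AWS.config.getCredentials exists in SDK v2; the bucket and key are placeholders):
const AWS = require('aws-sdk');

// Force the credential provider chain (task role, env vars, ...) to resolve
// before using the synchronous form of createPresignedPost.
AWS.config.getCredentials((err) => {
  if (err) throw err;
  const s3 = new AWS.S3();
  const post = s3.createPresignedPost({ Bucket: 'my-bucket-name', Fields: { key: 'my-file-name' } });
  console.log(post.url, post.fields);
});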

How to upload a CSV file larger than 10MB to S3 using Lambda / API Gateway

Hello, I am new here on AWS. I was trying to upload a CSV file to my S3 bucket, but when the file is larger than 10MB it returns {"message":"Request Entity Too Large"}. I am using Postman to do this. Below is the current code I created (in the future I will add some validation to change the name of the uploaded file to my format). Is there any way to do this with this kind of code, or do you have any suggestion that can help with the issue I have encountered?
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucket = process.env.UploadBucket;
const prefix = "csv-files/";
const filename = "file.csv";

exports.handler = (event, context, callback) => {
  let data = event.body;
  let buff = Buffer.from(data, 'base64'); // new Buffer() is deprecated
  let text = buff.toString('ascii');
  console.log(text);
  let textFileSplit = text.split('?');
  // get filename split
  let getfilename = textFileSplit[0].split('"');
  console.log(textFileSplit[0]);
  console.log(textFileSplit[1]);
  // remove lower number on csv
  let csvFileSplit = textFileSplit[1].split('--');
  const params = {
    Bucket: bucket,
    Key: prefix + getfilename[3],
    Body: csvFileSplit[0]
  };
  s3.upload(params, function (err, data) {
    if (err) {
      console.log('error uploading');
      return callback(err);
    }
    console.log("Uploaded");
    callback(null, "Success");
  });
};
For scenarios like this one, we normally use a different approach.
Instead of sending the file to Lambda through API Gateway, you send the file directly to S3. This makes your solution more robust and cheaper, because you don't push the data through API Gateway and you don't process the entire file inside the Lambda.
The question is: how do you do this securely, without opening your S3 bucket for anyone on the internet to upload anything to it? You use S3 signed URLs. Signed URLs are a feature of S3 that lets you bake the correct permissions for uploading an object to a secured bucket into the URL itself.
In summary the process will be:
The frontend sends a request to API Gateway;
API Gateway forwards the request to a Lambda function;
The Lambda function generates a signed URL with permission to upload the object to a specific S3 bucket;
API Gateway sends the Lambda function's response back to the frontend, and the frontend uploads the file to the signed URL.
To generate the signed URL you use the normal aws-sdk in your Lambda function and call the method getSignedUrl (the exact signature depends on your language). You can find more information about signed URLs here.
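A minimal sketch of that Lambda with the v2 SDK (the environment variable, key, and JSON response shape are assumptions for illustration):
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ signatureVersion: 'v4' });

exports.handler = async (event) => {
  // Sign a PUT so the frontend can upload the object directly to S3
  const url = s3.getSignedUrl('putObject', {
    Bucket: process.env.UploadBucket, // assumed to be set on the function
    Key: 'csv-files/file.csv',        // in real code, derive this from the request
    Expires: 300                      // URL valid for 5 minutes
  });
  return { statusCode: 200, body: JSON.stringify({ uploadUrl: url }) };
};
The frontend then PUTs the raw file bytes to uploadUrl, so the file itself never passes through API Gateway.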

Correct code to upload local file to S3 proxy of API Gateway

I created an API function to work with S3. I imported the template swagger. After deployment, I tested with a Node.js project using the npm module aws-api-gateway-client.
It works well with: get bucket lists, get bucket info, get one item, put a bucket, and put a plain-text object; however, I am blocked on putting a binary file.
Firstly, I ensured the ACL allows all permissions on S3. Secondly, binary support was also added for:
image/gif
application/octet-stream
The code snippet is below. The behaviors are:
1) After invokeApi, the callback function is never hit; after some time, the Node.js project stops responding, with no error message. The file size (such as an image) is very small.
2) Only twice did the uploading seem to work, but the resulting file is bigger (around 2MB bigger) than the original, so the file is corrupt.
Could you help me out? Thank you!
var filepathname = './items/';
var filename = 'image1.png';

fs.stat(filepathname + filename, function (err, stats) {
  var fileSize = stats.size;
  fs.readFile(filepathname + filename, 'binary', function (err, data) {
    var len = data.length;
    console.log('file len' + len);
    var pathTemplate = '/my-test-bucket/' + filename;
    var method = 'PUT';
    var params = {
      folder: '',
      item: ''
    };
    var additionalParams = {
      headers: {
        'Content-Type': 'application/octet-stream',
        //'Content-Type': 'image/gif',
        'Content-Length': len
      }
    };
    var result1 = apigClient.invokeApi(params, pathTemplate, method, additionalParams, data)
      .then(function (result) {
        // never hit :(
        console.log(result);
      }).catch(function (result) {
        // never hit :(
        console.log(result);
      });
  });
});
We encountered the same problem. API Gateway is meant for limited payloads (10MB as of now); the limits are listed here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Pre-signed URL to S3:
Create an S3 pre-signed URL for the upload from the Lambda or the endpoint where you are trying to post.
How do I put object to amazon s3 using presigned url?
Then POST the image directly to S3.
Presigned POST:
Apart from posting the image, if you want to post additional properties, you can post them in multipart form format as well:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or any endpoint that needs to process it.
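A minimal sketch of such a trigger handler (the processing step is a placeholder):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Invoked by S3 on s3:ObjectCreated:* events configured on the bucket
exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded in the event payload
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    console.log(`Processing ${key} (${obj.ContentLength} bytes)`); // placeholder processing
  }
};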
Hope it helps.

Redirecting AWS API Gateway to S3 Binary

I'm trying to download large binaries from S3 via an API Gateway URL. Because the maximum download size in API Gateway is limited, I thought I could just provide the base Amazon S3 URL (in the swagger file) and append the folder/item of the binary I want to download.
But all I can find is redirecting API Gateway via a Lambda function, which I don't want.
I want a swagger file where the redirect is already configured.
So if I call <api_url>/folder/item I want to be redirected to s3-url/folder/item
Is this possible? And if so, how?
Example:
S3: https://s3.eu-central-1.amazonaws.com/folder/item (item = large binary file)
API Gateway: https://<id>.execute-api.eu-central-1.amazonaws.com/stage/folder/item -> redirect to s3 url
I am not sure you can redirect the request to a presigned S3 URL via API Gateway without a backend to calculate the presigned S3 URL. Presigned URLs are generated by the SDK rather than exposed as an API you could proxy to, so you need a Lambda function to calculate the presigned S3 URL and return it.
var AWS = require('aws-sdk');
AWS.config.region = "us-east-1";

var s3 = new AWS.S3({signatureVersion: 'v4'});
var BUCKET_NAME = 'my-bucket-name';

exports.handler = (event, context, callback) => {
  // Sign a download (getObject) URL for the requested path
  var params = {Bucket: BUCKET_NAME, Key: event.path};
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) return callback(err);
    console.log('The URL is', url);
    // With a Lambda proxy integration, a 302 with a Location header
    // makes API Gateway redirect the caller to the presigned URL
    callback(null, {
      statusCode: 302,
      headers: {Location: url},
      body: ''
    });
  });
};
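(The response shape above assumes a Lambda proxy integration; with a non-proxy integration you would instead map the returned URL to a Location header in an API Gateway integration response.)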