Cannot upload files with ACL public-read to Digital Ocean Spaces - amazon-s3

I'm trying to upload images to a Digital Ocean space from the browser. These images should be public. I'm able to upload the images successfully.
However, though the ACL is set to public-read, the uploaded files are always private.
I know they're private because a) the dashboard says the permissions are "private", b) the public URLs don't work, and c) manually changing the permissions to "public" in the dashboard fixes everything.
Here's the overall process I'm using.
1. Create a pre-signed URL on the backend
2. Send that URL to the browser
3. Upload the image to that pre-signed URL
Any ideas why the images aren't public?
Code
The following examples are written in TypeScript and use AWS's v3 SDK.
Backend
This generates the pre-signed url to upload a file.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({
  region: 'nyc3',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
  credentials: {
    accessKeyId: process.env.DIGITAL_OCEAN_SPACES_KEY,
    secretAccessKey: process.env.DIGITAL_OCEAN_SPACES_SECRET,
  },
})

const command = new PutObjectCommand({
  ACL: 'public-read',
  Bucket: 'bucket-name',
  Key: fileName,
  ContentType: mime,
})

const url = await getSignedUrl(client, command)
The pre-signed url is then sent to the browser.
Frontend
This is the code on the client to actually upload the file to Digital Ocean. file is a File object.
const uploadResponse = await fetch(url, {
  headers: {
    'Content-Type': file.type,
    'Cache-Control': 'public,max-age=31536000,immutable',
  },
  body: file,
  method: 'PUT',
})
Metadata
AWS SDK: 3.8.0

It turns out that for Digital Ocean, you also need to set the public-read ACL as a header in the PUT request.
// front-end
const uploadResponse = await fetch(url, {
  headers: {
    'Content-Type': file.type,
    'Cache-Control': 'public,max-age=31536000,immutable',
    'x-amz-acl': 'public-read', // add this line
  },
  body: file,
  method: 'PUT',
})

I don't have the reputation to comment, hence adding a response. Thank you @Nick ... this is one of the few working examples of code I have seen for a DigitalOcean pre-signed URL. While the official DigitalOcean description here mentions that Content-Type is needed for uploading with pre-signed URLs, there is no example code.
Another mistake that prevented me from uploading a file using pre-signed URLs in DigitalOcean was using 'Content-Type': 'multipart/form-data' and FormData().
After seeing this post, I followed @Nick's suggestion of using a File() object and 'Content-Type': '<relevant_mime>', and the file upload worked like a charm. This is also not covered in the official docs.
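To make the FormData pitfall concrete, here is a minimal sketch of the request shape that works, assuming file is a File object (e.g. from an <input type="file">) and url is the pre-signed URL; it is essentially the accepted answer's request, contrasted with the multipart mistake:

// Does NOT work with a pre-signed PUT URL: wrapping the file in FormData
// sends a multipart body that no longer matches what was signed.
//   const body = new FormData(); body.append('file', file)

// Works: PUT the File object itself as the raw body, with its real MIME type.
const uploadResponse = await fetch(url, {
  method: 'PUT',
  headers: {
    'Content-Type': file.type,   // e.g. 'image/png', not 'multipart/form-data'
    'x-amz-acl': 'public-read',
  },
  body: file,
})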

Try this to force ACL to Public in Digital Ocean Spaces:
s3cmd --access_key=YOUR_ACCESS_KEY --secret_key=YOUR_SECRET_KEY --host=YOUR_BUCKET_REGION.digitaloceanspaces.com --host-bucket=YOUR_BUCKET_NAME.YOUR_BUCKET_REGION.digitaloceanspaces.com --region=YOUR_BUCKET_REGION setacl s3://YOUR_BUCKET_NAME --acl-public
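If you would rather repair already-uploaded objects from code instead of s3cmd, here is a minimal sketch using the v3 SDK's PutObjectAclCommand, reusing the Spaces client configuration from the question; the bucket name and key are placeholders:

import { PutObjectAclCommand } from '@aws-sdk/client-s3'

// `client` is the same Spaces S3Client shown in the question (nyc3 endpoint).
await client.send(new PutObjectAclCommand({
  Bucket: 'bucket-name',
  Key: fileName,        // an object that was uploaded as private
  ACL: 'public-read',   // flip it to publicly readable without re-uploading
}))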

Related

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
<Code>InvalidAccessKeyId</Code>
<Message>The AWS Access Key Id you provided does not exist in our records.</Message>
<AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
<RequestId>DKQ55DK3XJBYGKQ6</RequestId>
<HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4'
  });
  const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
  };
  s3.getSignedUrl('getObject', params, (error, url) => {
    res.send(`<p>${url}</p>`)
  });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure you have the correct order of signature v4 headers/form data in your request.
I had the same exact issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
This error message, and other misleading error messages, can occur if you don't use the correct order of security fields. In my case I was using the endpoint to create a presigned URL for POSTing a file, to upload it. In that case, you need to make sure that the security-relevant fields in your form data are in the correct order. For signatureVersion 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it's important to place the file field AFTER the security fields, as sketched below.
After that, the request works as expected.
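To make the ordering concrete, here is a minimal browser-side sketch, assuming a hypothetical presigned object ({ url, fields }) returned by the backend with the fields already in the order listed above:

// Append the signature-related fields first, in the order given above,
// then the file LAST; S3 ignores any field that comes after the file.
const form = new FormData()
for (const [name, value] of Object.entries(presigned.fields)) {
  form.append(name, value)
}
form.append('file', file)

const uploadResponse = await fetch(presigned.url, {
  method: 'POST',
  body: form, // don't set Content-Type yourself; the browser adds the multipart boundary
})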
I can't say for certain but I'm guessing this may have something to do with you using the old SDK. Here it is w/ v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

// ...

const client = new S3Client({ region: 'eu-west-1' });
const params = {
  'Bucket': 'my-bucket-name',
  'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);
// getSignedUrl returns a promise in the v3 SDK, so no callback here
const url = await getSignedUrl(client, command);
res.send(`<p>${url}</p>`);

How to get a pre-signed URL that downloads file with http compression

Here is my code in node.js:
const downloadURL = await s3.getSignedUrlPromise('getObject', {
  Bucket: BUCKET_NAME,
  Key: 'key to a large json file',
});
Once I have the URL, I want to download a very large JSON file stored in S3 from the browser. Since it is large, I would like to use HTTP compression, which would compress a 20MB JSON file to less than 1MB. I could not find anywhere how to do this, or whether it is at all possible with the S3 APIs.
I also tried the following when using the signed URL to download the file, and it does not seem to work.
const dataRes = await fetch(downloadURL, {
  headers: {
    'Accept-Encoding': 'gzip, deflate',
  },
  method: 'GET',
});
Hope somebody could help me out. Thanks a lot!
After doing some study, I have resolved this. Posting here in the hope it is helpful to others.
You cannot ask S3 to compress a file on the fly with getObject or when using a signed URL for getObject.
You have to store the gzipped file in S3 in the first place. On Linux, use the command below to do it:
gzip -9 <file to compress>
Upload the zipped file to S3 (a minimal Node.js sketch of this step follows).
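A minimal sketch of that upload step, using the same aws-sdk v2 s3 client as the rest of this answer; the local file name is a placeholder:

const fs = require('fs');

// Upload the pre-gzipped file with ContentEncoding/ContentType set as object
// metadata, matching what the signed download URL will advertise.
await s3.putObject({
  Bucket: BUCKET_NAME,
  Key: 'key to a large zipped json file',
  Body: fs.readFileSync('large.json.gz'),   // placeholder local file
  ContentEncoding: 'gzip',
  ContentType: 'application/json',
}).promise();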
Use the code below to generate the signed URL:
const downloadURL = await s3.getSignedUrlPromise('getObject', {
  Bucket: BUCKET_NAME,
  Key: 'key to a large zipped json file',
  ResponseContentEncoding: 'gzip',
  ResponseContentType: 'application/json',
});
Use the code below to download from the signed URL:
const res = await fetch(downloadURL);
const jsonData = await res.json();

putObject upload broken files on S3 only when its by API

I have a problem when I try to upload a file to S3 through my API.
I use the putObject method, and the thing that surprises me is that it works when I run my serverless application locally with serverless-offline: I can push the whole file to S3 and open it.
But when I deploy my application to API Gateway and use the API Gateway route, the file is smaller than the original and I can't open it; it tells me that the file is corrupted.
If anyone has an idea, it would really help me.
Thanks
My putObject method looks like this:
const bucketName = _.get(getBucket, 'bucketName');
const extension = _.get(data, 'media.filename').split('.').pop();
const keyName = _.get(data, 'keyName') + '.' + extension;
const content = _.get(data, 'media.content');

let params = {
  Bucket: bucketName,
  Key: keyName,
  ContentType: _.get(data, 'media.contentType'),
  Body: content,
  ACL: 'public-read'
};

return new Promise((resolve, reject) => {
  s3.putObject(params, function (err, data) {
    err
      ? reject(console.log(err))
      : resolve(response(200, "Object Added"));
  });
});
Uploading files via API Gateway is not a good idea; I'd strongly advise using a pre-signed URL instead: https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
If you want to upload through API Gateway anyway, then the provided information is not enough. Try to log the received Lambda event; that should help.
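For the pre-signed URL route, here is a minimal sketch using the same aws-sdk v2 s3 client and lodash accessors as the question; the expiry value and response payload are assumptions:

// Generate a short-lived pre-signed PUT URL in the Lambda and return it;
// the client then PUTs the file bytes straight to S3, bypassing API Gateway.
const uploadUrl = await s3.getSignedUrlPromise('putObject', {
  Bucket: bucketName,
  Key: keyName,
  ContentType: _.get(data, 'media.contentType'),
  ACL: 'public-read',
  Expires: 300, // seconds the URL stays valid
});
return response(200, uploadUrl);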

S3 uploading and serving image with pre signed URL

I am trying to upload an image to my S3 bucket through a pre-signed url. Everything works well except that when I hit the public URL for that image, the browser downloads it instead of showing it. When I upload the same image from the AWS Console, everything works well and the image gets displayed in the browser.
Here is how I do it:
Generation of the pre-signed URL:
s3.getSignedUrl('putObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
Upload of the file with axios:
const response = await axios.put(url, formElement.files[0])
Should I configure headers somewhere in the process to tell S3 the mime type of the content I'm uploading or something like this?
Thank you for your help
There are two places you can do this.
If you know the type of image ahead of time, then you can explicitly set the ContentType in the s3.getSignedUrl params. This is because those params will be encoded and passed with the signed put request: getSignedUrl docs / putObject docs. So for example:
s3.getSignedUrl('putObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds,
  ContentType: 'image/png'
});
Alternatively, you can set the Content-Type header on the Axios request (REST PUT docs), for example:
const response = await axios.put(
  url,
  formElement.files[0],
  { headers: { 'Content-Type': formElement.files[0].type } });

Correct code to upload local file to S3 proxy of API Gateway

I created an API function to work with S3. I imported the Swagger template. After deployment, I tested it from a Node.js project using the npm module aws-api-gateway-client.
It works well for: getting bucket lists, getting bucket info, getting one item, putting a bucket, and putting a plain-text object. However, I am blocked on putting a binary file.
First, I made sure the ACL allows all permissions on S3. Second, binary support was also added for:
image/gif
application/octet-stream
The code snippet is below. The behaviors are:
1) After invokeApi, the callback function is never hit; after some time, the Node.js project stops responding, with no error message. The file (such as an image) is very small.
2) Only twice did the uploading seem to work, but the resulting file size was bigger (around 2MB bigger) than the original file, so the file is corrupt.
Could you help me out? Thank you!
var filepathname = './items/';
var filename = 'image1.png';

fs.stat(filepathname + filename, function (err, stats) {
  var fileSize = stats.size;
  fs.readFile(filepathname + filename, 'binary', function (err, data) {
    var len = data.length;
    console.log('file len' + len);
    var pathTemplate = '/my-test-bucket/' + filename;
    var method = 'PUT';
    var params = {
      folder: '',
      item: ''
    };
    var additionalParams = {
      headers: {
        'Content-Type': 'application/octet-stream',
        //'Content-Type': 'image/gif',
        'Content-Length': len
      }
    };
    var result1 = apigClient.invokeApi(params, pathTemplate, method, additionalParams, data)
      .then(function (result) {
        // never hit :(
        console.log(result);
      }).catch(function (result) {
        // never hit :(
        console.log(result);
      });
  });
});
We encountered the same problem. API Gateway is meant for limited payloads (10MB as of now); the limits are shown here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Pre-signed URL to S3:
Create an S3 pre-signed URL for the upload from the Lambda or from the endpoint you are trying to post from.
How do I put object to amazon s3 using presigned url?
Now POST the image directly to S3.
Presigned POST:
Apart from posting the image, if you want to post additional properties, you can post them in multipart form format as well; see the createPresignedPost sketch below.
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or any endpoint that needs to process it.
Hope it helps.
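A minimal sketch of the presigned POST approach with the aws-sdk v2 client; the bucket name, object key, expiry, and size limit are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Returns { url, fields }; hand these to the browser, which then POSTs a
// multipart form containing all fields plus the file (file field last).
const presignedPost = s3.createPresignedPost({
  Bucket: 'my-test-bucket',                                   // placeholder bucket
  Fields: { key: 'image1.png' },                              // placeholder object key
  Expires: 300,                                               // seconds the form stays valid
  Conditions: [['content-length-range', 0, 10 * 1024 * 1024]] // cap uploads at ~10 MB
});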