I'm using the Cesium ion REST API to upload a LAS file to Cesium. It's a three-part process: first you make a call to create the asset in ion, and it responds with the upload location and access information.
Then you use that information to upload the file to S3.
My problem is that I get:
AccessDenied: Access Denied
    at S3ServiceException.ServiceException [as constructor]
If I use my own bucket with my own credentials it works, but that's not what I want for now.
When I console.log uploadLocation, I have an accessKey, a sessionToken, etc.
Everything looks in order, which is why I don't understand why I get an AccessDenied.
What am I missing? Thanks for the help.
const S3ClientCred = {
  accessKeyId: uploadLocation.accessKey,
  secretAccessKey: uploadLocation.secretAccessKey,
  sessionToken: uploadLocation.sessionToken
}

const params = {
  Bucket: uploadLocation.bucket,
  Prefix: uploadLocation.prefix,
  Key: selectedFile.name,
  Body: selectedFile
};

try {
  const parallelUploads3 = new Upload({
    client: new S3Client({
      apiVersion: '2006-03-01',
      region: 'us-east-1',
      signatureVersion: 'v4',
      endpoint: uploadLocation.endpoint,
      credentials: S3ClientCred
    }),
    params: params,
  });

  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });

  await parallelUploads3.done();
  console.log('parallelUploads3.done()');
} catch (e) {
  console.log(e);
}
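One thing worth checking (an assumption on my part, not confirmed by the error): the temporary credentials ion returns are typically scoped to the prefix it hands back, so the object key has to start with uploadLocation.prefix, and Prefix is not a parameter the Upload class uses. Below is a minimal sketch of that variation, reusing the uploadLocation and selectedFile names from above; the third step, telling ion the upload is complete using the information in the create-asset response, is separate and not shown.

// Sketch only: assumes the same `uploadLocation` response and `selectedFile` as above.
// The key difference: the object key is built from uploadLocation.prefix, since the
// temporary credentials are usually only valid under that prefix.
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

const upload = new Upload({
  client: new S3Client({
    region: "us-east-1",
    endpoint: uploadLocation.endpoint,
    credentials: {
      accessKeyId: uploadLocation.accessKey,
      secretAccessKey: uploadLocation.secretAccessKey,
      sessionToken: uploadLocation.sessionToken,
    },
  }),
  params: {
    Bucket: uploadLocation.bucket,
    Key: `${uploadLocation.prefix}${selectedFile.name}`, // prefix + file name, no separate Prefix field
    Body: selectedFile,
  },
});

upload.on("httpUploadProgress", (progress) => console.log(progress));
await upload.done();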
Related
I am using the following code to upload multiple images to an S3 bucket through AWS API Gateway.
A strange issue is happening: the first time I upload an image it uploads fine, but when I try to upload again it fails to reach the S3 bucket.
After some time, when I try again, it works, and then it fails again.
const s3Client = new AWS.S3({
  credentials: {
    accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
    region: ''
  },
});
When it fails, none of the logs after the s3Client.upload() call are printed. I'm not sure how to debug this. I added a progress check, but it is never reached when the upload fails.
Maybe it's an upload frequency limit on S3? I didn't find any such limit in the AWS docs, though.
if (contentType && contentType.includes('multipart/form-data;')) {
  const result = await parser.parse(event);
  body = await schema.parseAsync(JSON.parse(result.JsonData))
  console.log('DEBUG>>>>> HandlerTS File: JSON.parse(result.JsonData): ', body)
  console.log('DEBUG>>>>> HandlerTS File: Result: ', result)

  if (result.files) {
    result.files.forEach(f => {
      console.log("DEBUG>>>>> Uploading file")
      console.log(f)

      s3Client.upload(
        {
          Bucket: bucket,
          Key: `${body.name}/${f.filename}`,
          Body: f.content,
        },
        (err, data) => {
          console.log(err, data);
        },
      ).on("httpUploadProgress", (progress) => {
        const uploaded = Math.round(progress.loaded / progress.total * 100);
        console.log('DEBUG>>>>>>>>>> checking http upload progress ', uploaded)
      }).send(function (err, data) {
        if (err) {
          // an error occurred, handle the error
          console.log('DEBUG>>>>>>>>>>>>>> Error Upload')
          console.log(err, err.stack);
          return;
        }
        const fileUrl = data.Location;
        console.log('DEBUG>>>>>>>>>>>>>> File URL:', fileUrl);
      });
    })
  }
P.S.: I am using API Gateway and Lambda functions.
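One pattern worth trying, an assumption since the question doesn't confirm the cause: await every upload before the handler returns. Lambda can freeze the execution environment as soon as the handler's promise resolves, so uploads still in flight may only finish on a later invocation, which would match the intermittent behaviour described. A sketch reusing the s3Client, bucket, body and result names from above, with upload().promise():

// Sketch: same s3Client, bucket, body and result as above; the only change is that
// the handler awaits every upload, so it cannot return while files are still in flight.
if (result.files) {
  const uploads = result.files.map(f =>
    s3Client.upload({
      Bucket: bucket,
      Key: `${body.name}/${f.filename}`,
      Body: f.content,
    }).promise()
      .then(data => console.log('DEBUG>>>>> uploaded to', data.Location))
  );
  await Promise.all(uploads); // resolve (or throw) before the Lambda handler returns
}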
I am trying to let my users upload photos in a Next.js application.
I set up a remote database and I am writing to the database properly, but the images are appearing blank. I'm thinking it must be a problem with the format of the data coming in.
Here is my code on the front end in React:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];

  await fetch('/api/image/profileUpload', {
    method: 'POST',
    body: file,
    'Content-Type': 'image/jpg',
  })
  .then(res => {
    console.log('final:', res);
  })
};

return (
  <>
    <label htmlFor="file-upload">
      <div>
        <img src={profileImage} className="profile-image-lg dashboard-profile-image"/>
        <div id="dashboard-image-hover" >Upload Image</div>
      </div>
    </label>
    <input id="file-upload" type="file" onChange={handleProfileImageUpload}/>
  </>
)
The "file" I declare above (const file = e.target.files[0]) appears like this on console.log(file):
+ --------++-+-++-+------------+----++-+--7--7----7-���"�!1A"Qaq��2��B�#br���$34R����CSst���5����)!1"AQaq23B����
... and so on. It's long.
I am uploading to Digital Ocean's Spaces object storage, which interfaces with AWS S3. Again, my application is written in Next.js and I am using a serverless environment.
Here is the API route I am sending it to ('/api/image/profileUpload.js'):
import AWS from 'aws-sdk';

export default async function handler(req, res) {
  // get the image data
  let image = req.body;

  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });

  // create parameters for upload
  const uploadParams = {
    Bucket: 'oscarexpert',
    Key: 'asdff',
    Body: image,
    ContentType: "image/jpeg",
    ACL: "public-read",
  };

  // execute upload
  s3.upload(uploadParams, (err, data) => {
    if (err) return console.log('reject', err)
    else return console.log('resolve', data)
  })

  // returning arbitrary object for now
  return res.json({});
};
When I console.log(image), it shows the same garbled string that I posted above, so I know it's getting the exact same data. Maybe this needs to be parsed further?
The code above is directly from a DigitalOcean tutorial, adapted to my environment. I am taking note of the "Body" parameter, which is where the garbled string is being passed in.
What I've tried:
Stringifying the "image" before passing it to the Body param
Using multer-s3 to process the request on the backend
Requesting through Postman (the image comes in with the exact same garbled format)
I've spent days on this issue. Any guidance would be much appreciated.
Figured it out. I wasn't encoding the image properly in my Next.js serverless backend.
First, on the front end, I made my fetch request like this. It's important to send the file as FormData for the next step on the backend:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];

  const formData = new FormData();
  formData.append('file', file);

  // CHECK THAT THE FILE IS PROPER FORMAT (size, type, etc)
  let url = false;

  await fetch(`/api/image/profileUpload`, {
    method: 'POST',
    body: formData,
    'Content-Type': 'image/jpg',
  })
}
There were several components that helped me finally do this on the backend, so I am just going to post the code I ended up with. Here's the API route:
import AWS from 'aws-sdk';
import formidable from 'formidable-serverless';
import fs from 'fs';

export const config = {
  api: {
    bodyParser: false,
  },
};

export default async (req, res) => {
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });

  // parse request to readable form
  const form = new formidable.IncomingForm();
  form.parse(req, async (err, fields, files) => {
    // Account for parsing errors
    if (err) return res.status(500);

    // Read file
    const file = fs.readFileSync(files.file.path);

    // Upload the file
    s3.upload({
      // params
      Bucket: process.env.SPACES_BUCKET,
      ACL: "public-read",
      Key: 'something',
      Body: file,
      ContentType: "image/jpeg",
    })
    .send((err, data) => {
      if (err) {
        console.log('err', err)
        return res.status(500);
      };
      if (data) {
        console.log('data', data)
        return res.json({
          url: data.Location,
        });
      };
    });
  });
};
If you have any questions feel free to leave a comment.
I am creating a presigned URL using the aws-sdk's Node.js createPresignedPost method. It all works via the serverless-offline plugin on my local machine, because my personal access key has all accesses. But when I deploy it via the Serverless Framework, it errors out with HTTP 403 and the error in the browser reads as follows:
The AWS Access Key Id you provided does not exist in our records.
The key starts with ASIA: ASIAQDGRI5OSPEXMAPLE
I have granted all-actions permission to my Lambda on the target bucket.
My API Gateway and the Lambdas that return the signed URL are in the us-east-1 region, and the bucket is in the ap-south-1 region.
I am sure I am missing some IAM permissions, but I cannot figure it out. Can someone help me here?
Here is my function that returns a promise for the presigned POST URL:
function getSignedUploadUrl() {
  const params = {
    Expires: 600,
    Bucket: process.env.AWS_S3_BUCKET_NAME,
    Fields: {
      key: s3FilePathKey,
      acl: acl,
      "content-type": contentType,
    },
    conditions: [
      { acl: acl },
      { "content-type": contentType },
      ["content-length-range", 1000000, 75000000],
    ],
  };

  return new Promise((resolve, reject) => {
    const s3 = new S3({
      region: AWS_REGION
    });

    s3.createPresignedPost(params, (err, data) => {
      if (err) {
        console.log(err);
        reject({
          message: "Something went wrong",
        });
      }
      resolve(data);
    });
  });
}
The fields to include in the form data vary depending on where the code runs; at least that was the case for my problem. In the Elastic Beanstalk environment there was an additional x-amz-security-token field (alongside AWSAccessKeyId, key, etc.) that was not present in the fields returned by Boto3's create_presigned_post when run in my local environment.
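In practice that means the client should build its form from whatever fields the presigned POST response contains rather than hard-coding them, so an extra field such as x-amz-security-token (present when the signing credentials are temporary, e.g. a Lambda execution role) is carried along automatically. A sketch, assuming data is the { url, fields } object resolved by getSignedUploadUrl() above and file is a File chosen by the user:

// Sketch: `data` is assumed to be the { url, fields } object returned by
// createPresignedPost via getSignedUploadUrl(); `file` is a File from an <input>.
async function uploadWithPresignedPost(data, file) {
  const formData = new FormData();

  // Copy every field the service returned, including x-amz-security-token when present.
  Object.entries(data.fields).forEach(([name, value]) => formData.append(name, value));

  // The file itself must be the last field in an S3 POST policy upload.
  formData.append('file', file);

  const response = await fetch(data.url, { method: 'POST', body: formData });
  if (!response.ok) {
    throw new Error(`Upload failed with HTTP ${response.status}`);
  }
}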
I'm trying to use the JavaScript fetch API, AWS API Gateway, AWS Lambda, and AWS S3 to create a service that allows users to upload and download media. The server is running Node.js 8.10; the browser is Google Chrome 69.0.3497.92 (Official Build) (64-bit).
In the long term, allowable media would include audio, video, and images. For now, I'd be happy just to get images to work.
The problem I'm having: my browser-side client, implemented using fetch, is able to upload JPEGs to S3 via API Gateway and Lambda just fine. I can use curl or the S3 Console to download the JPEG from my S3 bucket and then view the image in an image viewer just fine.
But if I try to download the image via the browser-side client and fetch, I get nothing that I'm able to display in the browser.
Here's the code from the browser-side client:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg',
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Here's the server-side code, using Claudia.js:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  }
);

module.exports = api;
Here are the initial OPTIONS request and response headers from Chrome's Network panel, followed by the subsequent GET request and response headers (screenshots not included here).
What's interesting to me is that the object size is reported as 699873 (with no units) in the S3 Console, but the response body of the GET transaction is reported by Chrome as roughly 2.5 MB.
The resulting image is a 16x16 square and a dead link. I get no errors or warnings whatsoever in the browser's console or in CloudWatch.
I've tried a lot of things; would be interested to hear what anyone out there can come up with.
Thanks in advance.
Claudia requires that the client specify which MIME type it will accept on binary payloads. So, keep the 'Content-type' config in the headers object client-side:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg', // <-- This is important.
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Then, on the server side, you need to tell Claudia that the response should be binary and which MIME type to use:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  },
  /** Add this. **/
  {
    success: {
      contentType: 'image/jpeg',
      contentHandling: 'CONVERT_TO_BINARY',
    },
  }
);

module.exports = api;
I'm working on this project and I've managed to upload an image through an endpoint I created on my LoopBack model. The problem is that I need the uploaded image to be publicly accessible, and I can't seem to find where to do that.
I've tried using the AWS SDK to change the object permissions with putObjectAcl, but I couldn't make it work; it said I had built the XML incorrectly, and I couldn't even figure out how to fill in the properties the method requires. So I found another way to change it: copy the object while setting the ACL to 'public-read', delete the original, copy it back to its original filename, and then delete the temporary copy. That seems like a pretty ugly solution, and I'm pretty sure there must be a neater way to do it.
I do the upload with my remote method like this:
Container.upload(req,res,{container: "my-s3-bucket"},function(err,uploadInfo) { ... }
Container is my model connected to AWS S3. Then I do the permission change like this (copying and deleting):
var AWS = require('aws-sdk');
AWS.config.update({accessKeyId: "my-key-id", secretAccessKey: "my-key", region: "us-east-1"});

var s3 = new AWS.S3();
s3.copyObject({
  Bucket: 'my-s3-bucket',
  CopySource: 'my-s3-bucket/' + filename,
  Key: filename + "1",
  ACL: 'public-read'
}, function(err, info) {
  if (err) return cb(err);
  s3.deleteObject({
    Bucket: 'my-s3-bucket',
    Key: filename
  }, function(err, info) {
    if (err) return cb(err);
    s3.copyObject({
      Bucket: 'my-s3-bucket',
      CopySource: 'my-s3-bucket/' + filename + "1",
      Key: filename,
      ACL: 'public-read'
    }, function(err, info) {
      if (err) return cb(err);
      s3.deleteObject({
        Bucket: 'my-s3-bucket',
        Key: filename + "1" // delete the temporary copy
      }, function(err, info) {
        if (err) return cb(err);
        cb(null, uploadInfo);
      })
    })
  })
});
I wonder if there is something cleaner, like this:
Container.upload(req, res, { container: "my-s3-bucket", ACL: 'public-read' }, function(err, uploadInfo) { ... });
Thanks in advance :)
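As an aside, the direct ACL change attempted above is normally a single putObjectAcl call with a canned ACL, so no AccessControlPolicy XML has to be built by hand. A sketch reusing the same AWS config, filename, uploadInfo and cb as above (treat it as an untested suggestion for this setup):

// Sketch: same AWS config, `filename`, `uploadInfo` and `cb` as above. A canned ACL
// ('public-read') avoids constructing an AccessControlPolicy XML document by hand.
var s3 = new AWS.S3();
s3.putObjectAcl({
  Bucket: 'my-s3-bucket',
  Key: filename,
  ACL: 'public-read'
}, function(err, data) {
  if (err) return cb(err);
  cb(null, uploadInfo); // the existing object is now publicly readable
});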
Sorry this comes a little late, but the answer is in here:
https://github.com/strongloop/loopback-component-storage/pull/47
They added support for applying ACLs and some other options:
var dsImage = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'filesystem',
  root: path.join(__dirname, 'images'),

  getFilename: function(fileInfo) {
    return 'image-' + fileInfo.name;
  },

  acl: 'public-read',
  allowedContentTypes: ['image/png', 'image/jpeg'],
  maxFileSize: 5 * 1024 * 1024
});
Setting that acl option to 'public-read' does the trick.
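The example above uses the filesystem provider; for an S3-backed container like the one in the question, the same acl option should be passed when the datasource is created. A sketch, with the caveat that the provider option names (provider, keyId, key) are assumed from the pkgcloud Amazon provider that loopback-component-storage wraps and are worth verifying against your version:

// Sketch: option names follow the pkgcloud 'amazon' provider used by
// loopback-component-storage; verify them against the version you have installed.
var dsS3 = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'amazon',
  keyId: process.env.AWS_ACCESS_KEY_ID,      // access key id
  key: process.env.AWS_SECRET_ACCESS_KEY,    // secret access key
  region: 'us-east-1',
  acl: 'public-read',                        // uploads through the container become publicly readable
  allowedContentTypes: ['image/png', 'image/jpeg'],
  maxFileSize: 5 * 1024 * 1024
});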
So in the end I discarded loopback-component-storage entirely, and since I also needed to get some extra params aside from the file, I parsed the form with formidable and uploaded the file directly with the AWS SDK, like this:
var formidable = require('formidable');
var form = new formidable.IncomingForm();

form.parse(req, function(err, fields, files) {
  if (err) return cb(err);

  var fs = require('fs');
  var AWS = require('aws-sdk');
  AWS.config.update({
    accessKeyId: "my-key-id",
    secretAccessKey: "my-key",
    region: "us-east-1"
  });

  var s3 = new AWS.S3();
  s3.putObject({
    Bucket: 'shopika',
    Key: files.file.name,
    ACL: 'public-read', //Public plz T^T
    Body: fs.createReadStream(files.file.path),
    ContentType: files.file.type
  }, function(err, data) {
    if (err) return cb(err);
    //Upload success, now I have the params I wanted in 'fields' and do my stuff with them :P
    cb(null, data);
  });
});