Upload preset must be specified when using unsigned upload (Cloudinary, HttpRequest)

I am trying to upload files directly from my front-end (Angular 8) using the Cloudinary API URL, but I keep getting the same bad request (400) with the error "Upload preset must be whitelisted for unsigned uploads", even though I tried different solutions like providing the preset name in the FormData and setting the preset to unsigned in my Cloudinary settings. Is there any solution?
My upload code:
const images = new FormData();
images.append('images', file);
images.append('upload_preset', presetName);
this.progressBar = true;

const req = new HttpRequest('POST', 'https://api.cloudinary.com/v1_1/[cloudName]/image/upload', images, {
  reportProgress: true,
});

this.http.request(req).subscribe(event => {
  if (event.type === HttpEventType.UploadProgress) {
    const percentDone = Math.round(100 * event.loaded / event.total);
    console.log(`File is ${percentDone}% uploaded.`);
  } else if (event instanceof HttpResponse) {
    console.log('File is completely uploaded!');
  }
});

The "Upload preset must be whitelisted for unsigned uploads" error means that the preset you are using is marked for Signed uploads. Since you are not performing an authenticated API call, i.e. using a signature, the upload preset must be set as Unsigned. If you haven't already, go to the Settings -> Upload tab in your account and verify that the Signing Mode is set to Unsigned for the preset you are trying to use.
In addition, I see that you are passing a parameter called 'images'. This is not a valid parameter for the Upload API. Please update that to "file".
const data = new FormData();
data.append("file", file);
data.append("upload_preset", "default-preset");

Related

React Native AWS SDK: Can't upload image to S3 bucket, Error: "SignatureDoesNotMatch"

I'm using the AWS SDK with React Native to upload an image to an S3 bucket.
First of all, I want to say that my access and connectivity work well: I tried uploading plain text and it works, and I tried listing the objects and the buckets, which also works.
Here is my code:
async function handleImage(capturedImage) {
  setImage(capturedImage);
  setScreenState(ScreenStates.LOADING);
  try {
    const result = await classifyImage(capturedImage);
    console.log(result.tensor_);
    // {dtype:"float32", shape:[1,3,273,224]}
    const blob_jpeg = new Blob([result.tensor_], {type: "image/jpeg"});
    console.log(typeof blob_jpeg._data);
    // object
    console.log(blob_jpeg._data);
    // {blobId:"e7a667ad-4363-4a2e-9850-8695f103e9e0",offset:0,size:1489546,type:"image/jpeg",__collector:{}}
    try {
      const keyName = 'image.jpeg';
      const putCommand = new PutObjectCommand({
        Bucket: "mybucket",
        ContentType: "image/jpeg",
        Key: keyName,
        Body: blob_jpeg._data,
      });
      await s3.send(putCommand);
      console.log('Successfully uploaded data to mybucket/' + keyName);
    } catch (e) {
      console.log(e, e);
    }
  } catch (e) {
    console.log(e);
  }
}
My error:
Error: "The request signature we calculated does not match the signature you provided. Check your key and signing method." in SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. << at construct (native) << at apply (native) << at i (#aws-sdk/client-s3.js:3:461197)
Any ideas about how I can solve this problem and successfully upload my image?

How to upload a CSV file larger than 10MB to S3 using Lambda / API Gateway

Hello, I am new here on AWS. I was trying to upload a CSV file to my S3 bucket, but when the file is larger than 10MB it returns {"message":"Request Entity Too Large"}. I am using Postman to do this. Below is the current code I created (in the future I will add some validation to change the name of the uploaded file into my format). Is there any way to do this with this kind of code, or do you have any suggestion that can help me with the issue I have encountered?
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucket = process.env.UploadBucket;
const prefix = "csv-files/";
const filename = "file.csv";

exports.handler = (event, context, callback) => {
  let data = event.body;
  let buff = Buffer.from(data, 'base64');
  let text = buff.toString('ascii');
  console.log(text);

  let textFileSplit = text.split('?');
  // get filename split
  let getfilename = textFileSplit[0].split('"');
  console.log(textFileSplit[0]);
  console.log(textFileSplit[1]);
  // remove lower number on csv
  let csvFileSplit = textFileSplit[1].split('--');

  const params = {
    Bucket: bucket,
    Key: prefix + getfilename[3],
    Body: csvFileSplit[0]
  };
  s3.upload(params, function (err, data) {
    if (err) {
      console.log('error uploading');
      return callback(err);
    }
    console.log("Uploaded");
    callback(null, "Success");
  });
};
For scenarios like this one, we normally use a different approach.
Instead of sending the file to Lambda through API Gateway, you send the file directly to S3. This makes your solution more robust and cheaper, because you don't need to transfer the data through API Gateway and you don't need to process the entire file inside the Lambda.
The question is: how do you do this in a secure way, without opening your S3 bucket to everyone on the internet and letting anyone upload anything to it? You use S3 signed URLs. Signed URLs are an S3 feature that lets you bake the correct permissions to upload an object to a secured bucket into the URL itself.
In summary, the process is:
Frontend sends a request to API Gateway;
API Gateway forwards the request to a Lambda function;
The Lambda function generates a signed URL with the permissions to upload the object to a specific S3 bucket;
API Gateway sends the Lambda function's response back to the frontend;
Frontend uploads the file to the signed URL.
To generate the signed URL you will need the normal aws-sdk in your Lambda function. There you call the method getSignedUrl (the exact signature depends on your language); a sketch follows below. You can find more information about signed URLs here.
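A minimal sketch of such a Lambda in Node.js with the v2 aws-sdk (the bucket name, key prefix, and event shape are assumptions for illustration):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Assumption: the client passes the desired file name as a query parameter.
  const key = 'csv-files/' + event.queryStringParameters.filename;

  // Bake temporary upload permissions into the URL itself.
  const uploadUrl = s3.getSignedUrl('putObject', {
    Bucket: process.env.UploadBucket, // placeholder bucket name
    Key: key,
    ContentType: 'text/csv',
    Expires: 300 // URL stays valid for 5 minutes
  });

  return {
    statusCode: 200,
    body: JSON.stringify({ uploadUrl })
  };
};

The frontend then PUTs the raw file bytes to uploadUrl with the same Content-Type header, so the payload never passes through API Gateway's 10MB limit.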

Display PDF from azure blob in browsers using Microsoft Azure Storage SDK for Node.js and JavaScript for Browsers

I am trying to use the Microsoft Azure Storage SDK for Node.js and JavaScript for Browsers (https://github.com/Azure/azure-storage-node) to display PDF contents stored in an Azure blob in browsers. So far I couldn't find any examples of how to do it.
I tried to follow the suggestion from https://github.com/Azure/azure-storage-node/issues/440, but couldn't make it work. I am using an Azure Function.
module.exports = async function (context, req) {
  let accessToken = await getAccessToken();
  let container = req.params.container;
  let filename = req.params.filename;
  let tokenCredential = new azure.TokenCredential(accessToken);
  let storageAccountName = process.env.StorageAccountName;
  let blobService = azure.createBlobServiceWithTokenCredential(`https://${storageAccountName}.blob.core.windows.net/`, tokenCredential);

  return new Promise((resolve, reject) => {
    let readStream = blobService.createReadStream(container, filename, function (error, result, response) {
      if (error) {
        context.log(error);
        context.log(response);
        context.res = {
          status: 400,
          body: response
        };
        resolve(context.res);
      }
    });

    let body = '';
    readStream.on('data', (chunk) => {
      body += chunk;
    });
    readStream.on('end', () => {
      context.res = {
        headers: {
          'Content-Type': "application/pdf"
        },
        body: body
      };
      resolve(context.res);
    });
  });
};
But I got "Couldn't open PDF" error message in the browser or timeout error.
For downloading a blob in a browser environment, using a URL with a SAS token is recommended. In the framework you are using, would an accessible URL pointing to the PDF be enough?
Please follow this example:
Download Blob
BlobService provides interfaces for downloading a blob into browser memory. Because of the browser's sandbox limitation, we cannot save the downloaded data chunks to disk until we get all the chunks of a blob into browser memory. The browser's memory size is also limited, especially for downloading huge blobs, so it's recommended to download a blob in the browser with a SAS-token-authorized link directly.
Shared access signatures (SAS) are a secure way to provide granular access to blobs and containers without providing your storage account name or keys. Shared access signatures are often used to provide limited access to your data, such as allowing a mobile app to access blobs.
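As a rough sketch, generating such a SAS link with the azure-storage Node SDK could look like the following. Note the assumptions: the service must be created with the account key or connection string (a TokenCredential cannot sign a SAS), and the container and blob names are placeholders:

const azure = require('azure-storage');

// SAS generation requires the account key; an AAD token credential cannot sign a SAS.
const blobService = azure.createBlobService(process.env.AzureWebJobsStorage);

const startDate = new Date();
const expiryDate = new Date(startDate);
expiryDate.setMinutes(startDate.getMinutes() + 15); // link stays valid for 15 minutes

const sasToken = blobService.generateSharedAccessSignature('mycontainer', 'document.pdf', {
  AccessPolicy: {
    Permissions: azure.BlobUtilities.SharedAccessPermissions.READ,
    Start: startDate,
    Expiry: expiryDate
  }
});

// A URL the browser can open directly; the PDF renders in the built-in viewer.
const sasUrl = blobService.getUrl('mycontainer', 'document.pdf', sasToken);

Returning sasUrl (or a redirect to it) from the Azure Function avoids streaming the PDF through the function at all.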

How to send multiple images in an Express.js API GET request with sendFile()

I'm looking for a way to send multiple images in one GET request from an Express.js server through an API.
I want to create an image gallery of each user's uploaded images in a MEAN stack. When images are uploaded using multer, the image information is saved to MongoDB, including the user ID of whoever uploaded it.
On the AngularJS side, I want the user to have access to any of the images they have previously uploaded. Currently I'm sending one file per GET request based on user ID. Is there any way of sending multiple files in one JSON response? I'm currently using Express.js's res.sendFile, but haven't found any info about sending multiple files back yet.
https://expressjs.com/en/api.html#res.sendFile
Here is my current GET request:
exports.getUpload = function(req, res) {
  Upload.find({createdby: req.params.Id}).exec(function(err, upload) {
    errorhandle.errorconsole(err, 'file found');
    console.log(upload[0]);
    var options = {
      root: '/usr/src/app/server/public/uploads/images'
    };
    var name = "" + upload[0].storedname + "";
    console.log(name);
    res.sendFile(name, options, function(err) {
      errorhandle.errorconsole(err, 'file sent');
    });
  });
};
You can't with res.sendFile. In fact, I don't think you can at all. Maybe with HTTP/2 Server Push, but I'm not sure.
What you can do is send a JSON response with a link to all the images:
exports.getUpload = async (req, res) => {
  const uploads = await Upload.find({ createdby: req.params.Id }).exec()
  const response = uploads.map(image => ({ name: `https://example.com/uploads/images/${image.storedname}` }))
  res.json(response)
}
Note: error handling is omitted.
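For those links to resolve, the upload directory has to be served publicly; a one-line sketch with express.static, where the URL prefix and local path are assumptions based on the root option in the question's code:

// Serve the upload directory so the generated links resolve.
app.use('/uploads/images', express.static('/usr/src/app/server/public/uploads/images'));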

Correct code to upload local file to S3 proxy of API Gateway

I created an API function to work with S3. I imported the template swagger. After deployment, I tested it from a Node.js project using the npm module aws-api-gateway-client.
It works well for: get bucket list, get bucket info, get one item, put a bucket, and put a plain-text object. However, I am blocked on putting a binary file.
Firstly, I ensured the ACL allows all permissions on S3. Secondly, binary support is also added for:
image/gif
application/octet-stream
The code snippet is below. The behaviors are:
1) After invokeApi, the callback function is never hit; after some time, the Node.js project stops responding, with no error message at all. The file size (such as an image) is very small.
2) Only twice did the uploading seem to work, but the resulting file was bigger (around 2M bigger) than the original, so the file was corrupt.
Could you help me out? Thank you!
var filepathname = './items/';
var filename = 'image1.png';

fs.stat(filepathname + filename, function (err, stats) {
  var fileSize = stats.size;
  fs.readFile(filepathname + filename, 'binary', function (err, data) {
    var len = data.length;
    console.log('file len ' + len);
    var pathTemplate = '/my-test-bucket/' + filename;
    var method = 'PUT';
    var params = {
      folder: '',
      item: ''
    };
    var additionalParams = {
      headers: {
        'Content-Type': 'application/octet-stream',
        //'Content-Type': 'image/gif',
        'Content-Length': len
      }
    };
    apigClient.invokeApi(params, pathTemplate, method, additionalParams, data)
      .then(function (result) {
        // never hit :(
        console.log(result);
      }).catch(function (result) {
        // never hit :(
        console.log(result);
      });
  });
});
We encountered the same problem. API Gateway is meant for limited payloads (10MB as of now); the limits are listed here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Presigned URL to S3:
Create an S3 presigned URL for the POST from the Lambda or the endpoint where you are trying to post.
How do I put object to amazon s3 using presigned url?
Now POST the image directly to S3.
Presigned POST:
Apart from posting the image, if you want to post additional properties, you can post them in multipart form format as well:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
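A rough sketch of generating such a presigned POST with the v2 aws-sdk (bucket name, object key, and size limit are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Returns a URL plus form fields the client must include in a
// multipart/form-data POST; the file itself goes in the "file" field last.
s3.createPresignedPost({
  Bucket: 'my-test-bucket',                                    // placeholder bucket name
  Fields: { key: 'items/image1.png' },                         // object key to create
  Conditions: [['content-length-range', 0, 10 * 1024 * 1024]], // cap uploads at 10MB
  Expires: 300                                                 // form stays valid for 5 minutes
}, function (err, data) {
  if (err) return console.log(err);
  console.log(data.url);    // POST target
  console.log(data.fields); // fields to copy into the form
});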
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or any endpoint that needs to process it.
Hope it helps.