How to upload images from FileReader to Amazon S3 using Meteor - amazon-s3

I'm trying to build an image uploader with Meteor to Amazon S3. Thanks to Hubert OG, I've found AWS-SDK, which makes things easy.
My problem is that the uploaded data seems to be corrupt. When I download the file, the viewer says the file may be corrupt. It probably is.
Inserting the data into an image src does work, and the preview of the image shows up as it's supposed to, so the original file, and probably the data, is correct.
I'm loading the file with FileReader and then passing the result data to the AWS-SDK putObject method.
var file = template.find('[type=file]').files[0];
var key = "uploads/" + file.name;
var reader = new FileReader();
reader.onload = function (event) {
  var data = event.target.result;
  template.find('img').src = data;
  Meteor.call("upload_to_s3", file, "uploads", reader.result);
};
reader.readAsDataURL(file);
And this is the method on the server:
"upload_to_s3":function(file,folder,data){
s3 = new AWS.S3({endpoint:ep});
s3.putObject(
{
Bucket: "myportfoliositebucket",
ACL:'public-read',
Key: folder+"/"+file.name,
ContentType: file.type,
Body:data
},
function(err, data) {
if(err){
console.log('upload error:',err);
}else{
console.log('upload was succesfull',data);
}
}
);
}

I wrapped an npm module as a smart package, found here: https://atmosphere.meteor.com/package/s3policies
With it you can make a Meteor Method that returns a write policy, and with that policy you can upload to S3 using an AJAX call.
Example:
Meteor.call('s3Upload', name, function (error, policy) {
  if (error)
    onFinished({ error: error });

  var formData = new FormData();
  formData.append("AWSAccessKeyId", policy.s3Key);
  formData.append("policy", policy.s3PolicyBase64);
  formData.append("signature", policy.s3Signature);
  formData.append("key", policy.key);
  formData.append("Content-Type", policy.mimeType);
  formData.append("acl", "private");
  formData.append("file", file);

  $.ajax({
    url: 'https://s3.amazonaws.com/' + policy.bucket + '/',
    type: 'POST',
    xhr: function () { // custom xhr
      var myXhr = $.ajaxSettings.xhr();
      if (myXhr.upload) { // check if upload property exists
        myXhr.upload.addEventListener('progress',
          function (e) {
            if (e.lengthComputable)
              onProgressUpdate(e.loaded / e.total * 100);
          }, false); // for handling the progress of the upload
      }
      return myXhr;
    },
    success: function () {
      // file finished uploading
    },
    error: function () { onFinished({ error: arguments[1] }); },
    processData: false,
    contentType: false,
    // Form data
    data: formData,
    cache: false,
    xhrFields: { withCredentials: true },
    dataType: 'xml'
  });
});
EDIT:
The "file" variable in the line: formData.append("file", file); is from a line similar to this: var file = document.getElementById('fileUpload').files[0];
The server side code looks like this:
Meteor.methods({
  s3Upload: function (name) {
    var myS3 = new s3Policies('my key', 'my secret key');
    var location = Meteor.userId() + '/' + moment().format('MMM DD YYYY').replace(/\s+/g, '_') + '/' + name;
    if (Meteor.userId()) {
      var bucket = 'my bucket';
      var policy = myS3.writePolicy(location, bucket, 10, 4096);
      policy.key = location;
      policy.bucket = bucket;
      policy.mimeType = mime.lookup(name);
      return policy;
    }
  }
});

The Body should be converted to a Buffer – see the documentation.
So instead of Body: data you should have Body: new Buffer(data, 'binary').
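Since the client above sends the result of readAsDataURL, a minimal sketch of the server method with that idea applied (a variation, not the answerer's exact code) is to strip the data: URL prefix and decode the base64 remainder; on current Node you would use Buffer.from rather than the deprecated new Buffer:

"upload_to_s3": function (file, folder, data) {
  // data is a data URL like "data:image/png;base64,iVBORw0...",
  // so split off the prefix and decode the base64 payload
  var base64 = data.split(',')[1];
  var s3 = new AWS.S3({ endpoint: ep });
  s3.putObject({
    Bucket: "myportfoliositebucket",
    ACL: 'public-read',
    Key: folder + "/" + file.name,
    ContentType: file.type,
    Body: Buffer.from(base64, 'base64')
  }, function (err, res) {
    if (err) console.log('upload error:', err);
    else console.log('upload was successful', res);
  });
}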

Related

React Native content URI to base64 string

I'm trying to upload files using RN Document Picker.
Once I get those files selected, I need to turn them into base64 strings so I can send them to my API.
const handlePickFiles = async () => {
  if (await requestExternalStoreageRead()) {
    const results = await DocumentPicker.pickMultiple({
      type: [
        DocumentPicker.types.images,
        DocumentPicker.types.pdf,
        DocumentPicker.types.docx,
        DocumentPicker.types.zip,
      ],
    });
    const newUploadedFile: IUploadedFile[] = [];
    for (const res of results) {
      console.log(JSON.stringify(res, null, 2));
      newUploadedFile.push({
        name: res.name,
        type: res.type as string,
        size: res.size as number,
        extension: res.type!.split('/')[1],
        blob: res.uri, // <-- must turn this into a base64 string
      });
    }
    setUploadedFiles(newUploadedFile);
    console.log(newUploadedFile);
  }
};
The document picker returns a content URI (content://...).
The docs list this as an example of handling blob data and base64:
let data = new FormData();
data.append('image', { uri: 'content://path/to/content', type: 'image/png', name: 'name' });

const response = await fetch(url, {
  method: 'POST',
  headers: {
    'Content-Type': 'multipart/form-data',
  },
  body: data,
});
There they basically say that you don't need blob or base64 when using multipart/form-data as the content type. However, my GraphQL endpoint cannot handle multipart data, and I don't have time to rewrite the whole API. All I want is to turn the file into a blob and a base64 string, even if other ways are more performant.
Searching for other libraries, all of them are either no longer maintained or have issues with new versions of Android. RN Blob Utils is the most recent npm package, and it is no longer maintained.
I tried to use RN Blob Utils, but I either get errors, the wrong data type, or the file uploads but is corrupted.
Some other things I found are that I can use:
fetch(res.uri).then(response => { response.blob() });

const response = await ReactNativeBlobUtil.fetch('GET', res.uri);
const data = response.base64();

ReactNativeBlobUtil.wrap(decodeURIComponent(blob));
///----
const blob = ReactNativeBlobUtil.fs.readFile(res.uri, 'base64');
But I can't do anything with that blob file.
What is the simplest way to upload files from the document picker as base64? Is it possible to avoid using the external storage permission?
You don't need a third-party package to fetch blob data:
const blob = await new Promise((resolve, reject) => {
  const xhr = new XMLHttpRequest();
  xhr.onload = function () {
    resolve(xhr.response);
  };
  xhr.onerror = function (e) {
    reject(new TypeError("Network request failed"));
  };
  xhr.responseType = "blob";
  xhr.open("GET", "[LOCAL_FILE_PATH]", true);
  xhr.send(null);
});

// Code to submit blob file to server

// We're done with the blob, close and release it
blob.close();
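If what you actually need is a base64 string rather than a Blob, a FileReader can do the conversion on top of the snippet above – a minimal sketch, assuming blob is the value resolved by the Promise:

const base64 = await new Promise((resolve, reject) => {
  const reader = new FileReader();
  // reader.result is a data URL like "data:application/pdf;base64,...",
  // so keep only the part after the comma
  reader.onload = () => resolve(String(reader.result).split(',')[1]);
  reader.onerror = reject;
  reader.readAsDataURL(blob);
});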
I ended up using react-native-blob-util
const res = await DocumentPicker.pickSingle({
  type: [
    DocumentPicker.types.images,
    DocumentPicker.types.pdf,
    DocumentPicker.types.docx,
    DocumentPicker.types.zip,
  ],
});
const newUploadedFile: IUploadedFile[] = [];
const fileType = res.type;
if (fileType) {
  const fileExtension = fileType.substr(fileType.indexOf('/') + 1);
  const realURI = Platform.select({
    android: res.uri,
    ios: decodeURI(res.uri),
  });
  if (realURI) {
    const b64 = await ReactNativeBlobUtil.fs.readFile(
      realURI,
      'base64',
    );
    const filename = res.name.replace(/\s/g, '');
    const path = uuid.v4();
    newUploadedFile.push({
      name: filename,
      type: fileType,
      size: res.size as number,
      extension: fileExtension,
      blob: b64,
      path: Array.isArray(path) ? path.join() : path,
    });
  } else {
    throw new Error('Failed to process file');
  }
} else {
  throw new Error('Failed to process file');
}

Uploading image - data appears like this "���"�!1A"Qaq��2��B�#" and image is blank - Next.js application upload to DigitalOcean Spaces / AWS S3

I am trying to let my users upload photos in a Next.js application.
I set up a remote database and I am writing to the database properly, but the images are appearing blank. I'm thinking it must be a problem with the format of the data coming in.
Here is my code on the front end in React:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  await fetch('/api/image/profileUpload', {
    method: 'POST',
    body: file,
    'Content-Type': 'image/jpg',
  })
    .then(res => {
      console.log('final:', res);
    });
};
return (
  <label htmlFor="file-upload">
    <div>
      <img src={profileImage} className="profile-image-lg dashboard-profile-image"/>
      <div id="dashboard-image-hover">Upload Image</div>
    </div>
  </label>
  <input id="file-upload" type="file" onChange={handleProfileImageUpload}/>
)
The "file" I declare above (const file = e.target.files[0]) appears like this on console.log(file):
+ --------++-+-++-+------------+----++-+--7--7----7-���"�!1A"Qaq��2��B�#br���$34R����CSst���5����)!1"AQaq23B����
?�#��P�n�9?Y�
ޞ�p#��zE� Nk�2iH��l��]/P4��JJ!��(�#�r�Mң[ ���+���PD�HVǵ�f(*znP�>�HRT�!W��\J���$�p(Q�=JF6L�ܧZ�)�z,[�q��� *
�i�A\5*d!%6T���ͦ�#J{6�6��
k#��:JK�bꮘh�A�%=+E q\���H
q�Q��"�����B(��OЛL��B!Le6���(�� aY
�*zOV,8E�2��IC�H��*)#4է4.�ɬ(�<5��j!§eR27��
��s����IdR���V�u=�u2a��
... and so on. It's long.
I am uploading to Digital Ocean's Spaces object storage, which interfaces with AWS S3. Again, my application is written in Next.js and I am using a serverless environment.
Here is the API route I am sending it to ('/api/image/profileUpload.js'):
import AWS from 'aws-sdk';

export default async function handler(req, res) {
  // get the image data
  let image = req.body;
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });
  // create parameters for upload
  const uploadParams = {
    Bucket: 'oscarexpert',
    Key: 'asdff',
    Body: image,
    ContentType: "image/jpeg",
    ACL: "public-read",
  };
  // execute upload
  s3.upload(uploadParams, (err, data) => {
    if (err) return console.log('reject', err);
    else return console.log('resolve', data);
  });
  // returning arbitrary object for now
  return res.json({});
};
When I console.log(image), it shows the same garbled string that I posted above, so I know it's getting the same exact data. Maybe this needs to be further parsed?
The code above is directly from a Digital Ocean tutorial but catered to my environment. I am taking note of the "Body" parameter, which is where the garbled string is being passed in.
What I've tried:
Stringifying the "image" before passing it to the Body param
Using multer-s3 to process the request on the backend
Requesting through Postman (the image comes in with the exact same garbled format)
I've spent days on this issue. Any guidance would be much appreciated.
Figured it out. I wasn't encoding the image properly in my Next.js serverless backend.
First, on the front end, I made my fetch request like this. It's important to put it in the "form" format for the next step in the backend:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  const formData = new FormData();
  formData.append('file', file);
  // CHECK THAT THE FILE IS PROPER FORMAT (size, type, etc)
  let url = false;
  await fetch(`/api/image/profileUpload`, {
    method: 'POST',
    body: formData,
    // note: no Content-Type header here; the browser sets
    // multipart/form-data with the correct boundary itself
  });
}
There were several components that helped me finally do this on the backend, so I am just going to post the code I ended up with. Here's the API route:
import AWS from 'aws-sdk';
import formidable from 'formidable-serverless';
import fs from 'fs';

export const config = {
  api: {
    bodyParser: false,
  },
};

export default async (req, res) => {
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });
  // parse request to readable form
  const form = new formidable.IncomingForm();
  form.parse(req, async (err, fields, files) => {
    // Account for parsing errors
    if (err) return res.status(500);
    // Read file
    const file = fs.readFileSync(files.file.path);
    // Upload the file
    s3.upload({
      // params
      Bucket: process.env.SPACES_BUCKET,
      ACL: "public-read",
      Key: 'something',
      Body: file,
      ContentType: "image/jpeg",
    })
      .send((err, data) => {
        if (err) {
          console.log('err', err);
          return res.status(500);
        }
        if (data) {
          console.log('data', data);
          return res.json({
            url: data.Location,
          });
        }
      });
  });
};
If you have any questions feel free to leave a comment.

Adobe PDF Embed API Save PDF to Firestore

Using the Adobe PDF Embed API, I want to save annotated PDFs from a browser window to Firestore.
The file is uploaded to Firebase, but it is corrupt and only about 9 bytes in size.
Please see the code below. Is there something I need to do with "content" in the callback?
Attached is also a picture of the console.log.
const previewConfig = {
  embedMode: "FULL_WINDOW",
  showAnnotationTools: true,
  showDownloadPDF: true,
  showPrintPDF: true,
  showPageControls: true
};

document.addEventListener("adobe_dc_view_sdk.ready", function () {
  var adobeDCView = new AdobeDC.View({
    clientId: "2eab88022c63447f8796b580d5058e71",
    divId: "adobe-dc-view"
  });
  adobeDCView.previewFile({
    content: { location: { url: decoded } },
    metaData: { fileName: decodedTitle }
  }, previewConfig);
  /* Register save callback */
  adobeDCView.registerCallback(
    AdobeDC.View.Enum.CallbackType.SAVE_API,
    async function (metaData, content, options) {
      console.log(metaData);
      console.log(content);
      var meta = {
        contentType: 'application/pdf'
      };
      var pdfRef = storageRef.child(decodedTitle);
      var upload = await pdfRef.put(content, meta);
      console.log('Uploaded a file!');
      return new Promise(function (resolve, reject) {
        /* Dummy implementation of Save API, replace with your business logic */
        setTimeout(function () {
          var response = {
            code: AdobeDC.View.Enum.ApiResponseCode.SUCCESS,
            data: {
              metaData: Object.assign(metaData, { updatedAt: new Date().getTime() })
            },
          };
          resolve(response);
        }, 2000);
      });
    }
  );
});
In the end I was able to use putString() in Firebase Storage to upload the PDF.
Before, I was using put(), which ended up producing a corrupt file.
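For reference, a minimal sketch of that change inside the SAVE_API callback – assuming content arrives as an ArrayBuffer (as the console.log above suggests) and the compat-style Firebase Storage API used in the question:

// convert the ArrayBuffer from the callback to a base64 string
var bytes = new Uint8Array(content);
var binary = '';
for (var i = 0; i < bytes.length; i++) {
  binary += String.fromCharCode(bytes[i]);
}
var base64 = btoa(binary);
// upload the encoded string instead of the raw object
var pdfRef = storageRef.child(decodedTitle);
await pdfRef.putString(base64, 'base64', { contentType: 'application/pdf' });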

Uploading pdfkit pdf stream to S3 bucket from Lambda function gives Error: Cannot determine length of [object PDFDocument]

I'm using pdfkit in a Lambda function which creates a PDF and is then supposed to upload it to an S3 bucket. But when I test the function I get Error: Cannot determine length of [object PDFDocument].
Here is my function:
var PDFDocument = require('pdfkit');
var AWS = require('aws-sdk');

process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];

exports.handler = function (event, context) {
  // create a document and pipe to a blob
  var doc = new PDFDocument();
  // draw some text
  doc.fontSize(25)
     .text('Hello World', 100, 80);

  var params = {
    Bucket: "test-bucket",
    Key: event.pdf_name + ".pdf",
    Body: doc
  };

  var s3 = new AWS.S3();
  s3.putObject(params, function (err, data) {
    if (err) {
      console.log(err);
    } else {
      context.done(null, { status: 'pdf created' });
      doc.end();
    }
  });
};
What am I doing wrong? How do I provide the file size if that is needed? Is this a good way to do this, or is there a better way to upload a PDF stream to an S3 bucket?
Here is my solution:
const PDFDocument = require('pdfkit');
const fs = require("fs");
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = function (event, callback) {
  let doc = new PDFDocument;
  let fileName = "yourfile.pdf";

  // We use Lambda's temp folder to store the file temporarily.
  // When Lambda execution ends, temp is flushed.
  let file = fs.createWriteStream("/tmp/" + fileName);
  doc.pipe(file);
  doc.text("hello");
  // Finalize PDF file
  doc.end();

  // Send pdf file to s3
  file.on("finish", function () {
    // get the file size
    const stats = fs.statSync("/tmp/" + fileName);
    console.log("filesize: " + stats.size);
    console.log("starting s3 putObject");
    s3.putObject({
      Bucket: "[your-bucket]",
      Key: fileName,
      Body: fs.createReadStream("/tmp/" + fileName),
      ContentType: "application/pdf",
      ContentLength: stats.size,
    }, function (err) {
      if (err) {
        console.log(err, err.stack);
        callback(err);
      } else {
        console.log("Done");
        callback(null, "done");
      }
    });
  });
};
The key elements of this solution are the use of file streams and Lambda's temp folder. file.on("finish") is used to actually check that the file writing has ended.
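For what it's worth, an alternative sketch that skips the temp file entirely: collect the PDF stream into an in-memory Buffer, which also gives putObject a known length (this assumes the document fits comfortably in the Lambda's memory; the bucket name is a placeholder):

const PDFDocument = require('pdfkit');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = function (event, context, callback) {
  const doc = new PDFDocument();
  const chunks = [];
  // PDFDocument is a readable stream, so collect its chunks
  doc.on('data', (chunk) => chunks.push(chunk));
  doc.on('end', () => {
    const buffer = Buffer.concat(chunks); // length is now known
    s3.putObject({
      Bucket: '[your-bucket]',
      Key: event.pdf_name + '.pdf',
      Body: buffer,
      ContentType: 'application/pdf',
    }, callback);
  });
  doc.text('hello');
  doc.end(); // finalize the PDF, which fires 'end' above
};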
If you want the PDF to be accessible to users, remember to add the attribute ACL: 'public-read'. Also, this is what worked for me when using the S3 client for DigitalOcean:
s3Client.putObject({
  Bucket: bucketName,
  Key: fileName,
  Body: fs.createReadStream("/tmp/" + fileName),
  ContentType: "application/pdf",
  ContentLength: stats.size,
  ACL: 'public-read',
}, function (err) {
  if (err) {
    console.log(err, err.stack);
    callback(err);
  } else {
    console.log("Done");
    callback(null, "done");
  }
});

How to download a PDF file from a URL in Angular 5

I have spent a day on this issue and still fail to download a file from a URL in Angular 5:
leadGenSubmit() {
  return this.http.get('http://kmmc.in/wp-content/uploads/2014/01/lesson2.pdf',
    { responseType: ResponseContentType.Blob }).subscribe((data) => {
      console.log(data);
      var blob = new Blob([data], { type: 'application/pdf' });
      console.log(blob);
      saveAs(blob, "testData.pdf");
    },
    err => {
      console.log(err);
    }
  );
}
When I run the above code, it shows the following error:
ERROR TypeError: req.responseType.toLowerCase is not a function
at Observable.eval [as _subscribe] (http.js:2187)
at Observable._trySubscribe (Observable.js:172)
How can I solve this issue? Can anyone post the correct code to download a PDF file from a URL in Angular 5?
I think you should define the headers and responseType like this:
let headers = new HttpHeaders();
headers = headers.set('Accept', 'application/pdf');
return this.http.get(url, { headers: headers, responseType: 'blob' });
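A possible way to consume that from a component, reusing saveAs from the question (a sketch; downloadPdf is a hypothetical service method wrapping the call above):

this.downloadPdf('http://kmmc.in/wp-content/uploads/2014/01/lesson2.pdf')
  .subscribe((data: Blob) => {
    // the response is already a Blob; wrap it to pin the PDF MIME type
    saveAs(new Blob([data], { type: 'application/pdf' }), 'testData.pdf');
  });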
Here is my simple solution to open a PDF based on an ID in Angular:
In my service, I created this method :
public findById(id?: string): Observable<Blob> {
  return this.httpClient.get(`${this.basePath}/document/${id}`, { responseType: 'blob' });
}
Then in my component, I can use this method (behind a button or whatever):
showDocument(documentId: string): void {
  this.yourSuperService.findById(documentId)
    .subscribe((blob: Blob): void => {
      const file = new Blob([blob], { type: 'application/pdf' });
      const fileURL = URL.createObjectURL(file);
      window.open(fileURL, '_blank', 'width=1000, height=800');
    });
}
Try this – the 'blob' as 'json' cast is a common workaround for HttpClient's overload typings:
let headers = new HttpHeaders();
headers = headers.set('Accept', 'application/pdf');
return this.http.get(url, { headers: headers, responseType: 'blob' as 'json' });
References:
Discussion on Angular GitHub
Stack Overflow