NestJS multer not working as expected - express

So, when I upload a file with NestJS multer via form-data, it picks up the file and uploads it. That part works, but when I try to set the destination to a value from the form-data, it doesn't work.
When I log req.body it returns [Object: null prototype].
Can you tell me why? Thanks!
Here's my code:
@Post("uploadImg")
@UseInterceptors(
  AnyFilesInterceptor({
    storage: diskStorage({
      destination: function (req: any, file, cb) {
        const newAbsoluteDir = "CDN";
        console.log(req.body)
        cb(null, newAbsoluteDir);
      },
      filename: function (req, file, cb) {
        cb(null, file.originalname + '-' + Date.now() + ".png");
      },
    }),
  })
)
async uploadedFile(@UploadedFiles() file) {
  console.log(file)
  // return file;
}

It may seem a little strange, but the order of the fields and the files matters here.
You need to change the order in the form-data request (in Postman as well): attach your text fields first, then attach the files as the last entries.
That way the fields are already available on req.body inside the destination callback, as in the sketch below.
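For example, a minimal client-side sketch of that ordering (the field names, selectedFile, and endpoint path are illustrative assumptions, not taken from the question):

const formData = new FormData();
// text fields first, so multer has already put them on req.body...
formData.append('destination', 'CDN');
// ...by the time it reaches the file parts and calls the destination callback
formData.append('files', selectedFile);
await fetch('/uploadImg', { method: 'POST', body: formData });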

Related

Axios + Multer Express req.file undefined

I'm trying to upload a single file from the client (React/axios) to the server (multer/Express). I've read through every "req.file undefined" question and can't spot the same issues in my own code.
The other odd thing is that the request on the server does show the file under "files", but multer doesn't save it and req.file is undefined.
What could be happening here?
On the client I've tried both methods of sending the form data; neither works.
const onAnalyze = async () => {
  if (selectedFile !== null) {
    // we have a file, so that's what we're sending
    const formData = new FormData();
    formData.append("analyze", selectedFile);
    // let res = await api.post('/analyze/upload', formData)
    try {
      const response = await axios({
        method: "post",
        url: "http://localhost:5000/analyze/upload",
        data: formData,
        headers: { "Content-Type": "multipart/form-data" }
      });
      console.log(response)
    } catch (error) {
      console.log(error)
    }
    // console.log(res)
    // setAnalysis(res.data)
  } else if (text.length <= maxLength) {
    const res = await api.post('/analyze', { text: text })
    setAnalysis(res.data)
  }
}
The server side seems simple; I just don't know what's wrong. The file destination exists, yet req.file is always undefined.
import express from 'express';
import { getMedia, createMedia } from '../controllers/media.js';
import { AnalyzeText, AnalyzeFile } from '../controllers/analyze.js';
import multer from 'multer';

const fileStorageEngine = multer.diskStorage({
  destination: "uploads",
  filename: (req, file, cb) => {
    cb(null, file.originalname)
  }
});
const upload = multer({ storage: fileStorageEngine });

const router = express.Router();

// Get all movies and TV shows.
router.get('/', getMedia);
// Request to create a new item based on a title
router.post('/', createMedia);
// Request to analyze information (not sure if this should be a post or not)
router.post('/analyze', AnalyzeText);
router.post('/analyze/upload', upload.single('analyze'), (req, res) => {
  console.log(req.file)
  res.status(200).json('well we found it again');
});
Turns out I had another middleware running that was wrapping my file upload. Removed that, everything works.
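For illustration, the kind of conflict described above is usually a globally registered multipart parser that consumes the request body before multer's route-level middleware runs; express-fileupload here is an assumed example, not necessarily the middleware the author had:

// somewhere in the app setup (hypothetical example)
import fileUpload from 'express-fileupload';
// a global multipart parser drains the multipart stream first,
// so upload.single('analyze') later finds no file and req.file stays undefined
app.use(fileUpload());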
If you're using React you may face this problem when sending your request with axios. I solved it by adding a name attribute to my input element, dropping the new FormData() approach entirely, and passing input.files[0] into axios with a multipart/form-data content type. You must also use the multer.diskStorage method on the server; otherwise your image gets saved as a text file.

Get uploaded object URL with Javascript 'aws-sdk' v3

Currently we are using aws-sdk v2 and extract the uploaded file URL this way:
const res = await S3Client
  .upload({
    Body: body,
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
  })
  .promise();
return res.Location;
Now we have to upgrade to aws-sdk v3, and the new way to upload files looks like this
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const res = await S3Client.send(command);
Unfortunately, the res object no longer contains a Location property.
The getSignedUrl SDK function doesn't look suitable, because it only generates a URL with an expiration date (the expiration could probably be set to some huge value, but we still need to be able to analyze the URL path).
Building the URL manually doesn't look like a good or stable solution to me.
Answering myself: I don't know whether a better solution exists, but here is how I do it
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const [res, region] = await Promise.all([
  s3Client.send(command),
  s3Client.config.region(),
]);
const url = `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
You can use the Upload class from "@aws-sdk/lib-storage", with sample code as below.
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

const target = { Bucket, Key, Body };
try {
  const parallelUploads3 = new Upload({
    client: new S3Client({}),
    tags: [...], // optional tags
    queueSize: 4, // optional concurrency configuration
    leavePartsOnError: false, // optional manually handle dropped parts
    params: target,
  });
  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });
  await parallelUploads3.done();
} catch (e) {
  console.log(e);
}
Make sure you return the result of parallelUploads3.done(); the returned object contains the Location, as in the S3 upload response shown in the reference below.
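For example, capturing that value instead of discarding it (a small sketch reusing the names from the snippet above):

const result = await parallelUploads3.done();
// on a successful upload the output includes the object's URL
console.log(result.Location);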
Reference
https://stackoverflow.com/a/70159394/16729176

Uploading image - data appears like this "���"�!1A"Qaq��2��B�#" and image is blank - Next.js application upload to DigitalOcean Spaces / AWS S3

I am trying to let my users upload photos in a Next.js application.
I set up a remote database and I am writing to the database properly, but the images are appearing blank. I'm thinking it must be a problem with the format of the data coming in.
Here is my code on the front end in React:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  await fetch('/api/image/profileUpload', {
    method: 'POST',
    body: file,
    'Content-Type': 'image/jpg',
  })
    .then(res => {
      console.log('final:', res);
    });
};

return (
  <>
    <label htmlFor="file-upload">
      <div>
        <img src={profileImage} className="profile-image-lg dashboard-profile-image" />
        <div id="dashboard-image-hover">Upload Image</div>
      </div>
    </label>
    <input id="file-upload" type="file" onChange={handleProfileImageUpload} />
  </>
)
The "file" I declare above (const file = e.target.files[0]) appears like this on console.log(file):
+ --------++-+-++-+------------+----++-+--7--7----7-���"�!1A"Qaq��2��B�#br���$34R����CSst���5����)!1"AQaq23B����
?�#��P�n�9?Y�
ޞ�p#��zE� Nk�2iH��l��]/P4��JJ!��(�#�r�Mң[ ���+���PD�HVǵ�f(*znP�>�HRT�!W��\J���$�p(Q�=JF6L�ܧZ�)�z,[�q��� *
�i�A\5*d!%6T���ͦ�#J{6�6��
k#��:JK�bꮘh�A�%=+E q\���H
q�Q��"�����B(��OЛL��B!Le6���(�� aY
�*zOV,8E�2��IC�H��*)#4է4.�ɬ(�<5��j!§eR27��
��s����IdR���V�u=�u2a��
... and so on. It's long.
I am uploading to Digital Ocean's Spaces object storage, which interfaces with AWS S3. Again, my application is written in Next.js and I am using a serverless environment.
Here is the API route I am sending it to ('/api/image/profileUpload.js'):
import AWS from 'aws-sdk';

export default async function handler(req, res) {
  // get the image data
  let image = req.body;

  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });

  // create parameters for upload
  const uploadParams = {
    Bucket: 'oscarexpert',
    Key: 'asdff',
    Body: image,
    ContentType: "image/jpeg",
    ACL: "public-read",
  };

  // execute upload
  s3.upload(uploadParams, (err, data) => {
    if (err) return console.log('reject', err)
    else return console.log('resolve', data)
  })

  // returning arbitrary object for now
  return res.json({});
};
When I console.log(image), it shows the same garbled string that I posted above, so I know it's getting the same exact data. Maybe this needs to be further parsed?
The code above is directly from a Digital Ocean tutorial, adapted to my environment. Note the "Body" parameter, which is where the garbled string is being passed in.
What I've tried:
Stringifying the "image" before passing it to the Body param
Using multer-s3 to process the request on the backend
Requesting through Postman (the image comes in with the exact same garbled format)
I've spent days on this issue. Any guidance would be much appreciated.
Figured it out. I wasn't encoding the image properly in my Next.js serverless backend.
First, on the front end, I made my fetch request like this. It's important to send it as form data for the next step on the backend:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  const formData = new FormData();
  formData.append('file', file);
  // CHECK THAT THE FILE IS PROPER FORMAT (size, type, etc)
  let url = false;
  await fetch(`/api/image/profileUpload`, {
    method: 'POST',
    body: formData,
    // note: this stray option is ignored by fetch; the multipart Content-Type
    // (including its boundary) is set automatically from the FormData body
    'Content-Type': 'image/jpg',
  })
}
There were several components that helped me finally do this on the backend, so I am just going to post the code I ended up with. Here's the API route:
import AWS from 'aws-sdk';
import formidable from 'formidable-serverless';
import fs from 'fs';

export const config = {
  api: {
    bodyParser: false,
  },
};

export default async (req, res) => {
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });

  // parse request to readable form
  const form = new formidable.IncomingForm();
  form.parse(req, async (err, fields, files) => {
    // account for parsing errors
    if (err) return res.status(500);

    // read the file from the temp path formidable wrote it to
    const file = fs.readFileSync(files.file.path);

    // upload the file
    s3.upload({
      Bucket: process.env.SPACES_BUCKET,
      ACL: "public-read",
      Key: 'something',
      Body: file,
      ContentType: "image/jpeg",
    })
      .send((err, data) => {
        if (err) {
          console.log('err', err);
          return res.status(500);
        }
        if (data) {
          console.log('data', data);
          return res.json({
            url: data.Location,
          });
        }
      });
  });
};
If you have any questions feel free to leave a comment.

Serverless upload file to S3 cannot open

I'm trying to use serverless (Node.js) for file uploading
const contentType = event.headers['Content-Type'] || event.headers['content-type'];
const bb = new busboy({ headers: { 'content-type': contentType } });

// When a file arrives
bb.on('file', function (fieldname, file, filename, encoding, mimetype) {
  console.log(fieldname, filename, encoding, mimetype);
  console.log(file);
  const key = 'upload/' + filename;
  const s3obj = new AWS.S3({
    params: {
      Bucket: 'fileupload',
      Key: key,
      ACL: 'public-read',
      ContentEncoding: encoding,
      ContentType: mimetype,
    }
  });
  s3obj.upload({ Body: file })
    .on('httpUploadProgress', function (evt) { console.log(evt); })
    .send(function (err, data) { console.log(err, data); });
});

bb.end(event.body);
callback(null, response({ status: 'success' }));
After running this code, S3 successfully creates the file, but if I upload an image or another non-text file (not .txt or .csv), the file size differs and the file cannot be opened.
May I know which part of my code is wrong?
Found out that you need to add multipart/form-data as a binary media type under API Gateway to get the correct encoding for the file (see the sketch below).
I've followed this plugin to solve the parsing side:
https://github.com/myshenin/aws-lambda-multipart-parser
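If you deploy with the Serverless Framework, that API Gateway setting can be declared in serverless.yml; a minimal sketch, assuming a framework version that supports provider.apiGateway.binaryMediaTypes:

provider:
  name: aws
  apiGateway:
    # treat multipart bodies as binary so the Lambda receives the raw bytes, not mangled text
    binaryMediaTypes:
      - 'multipart/form-data'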
I ran into the same problem when trying to upload an image. The solution was to enable binary media types in the API Gateway settings; I set the media types to */*.

Multer isn't working in Express PUT route

I'm trying to upload an image to the file system with Multer. Please take a look at the relevant data in my route:
const
  ..
  multer = require('multer'),
  ..;

const storage = multer.diskStorage({
    destination: function (req, file, callback) {
      callback(null, './uploads');
    },
    filename: function (req, file, callback) {
      callback(null, req.params.id + file.originalname);
    }
  }),
  upload = multer({ storage: storage }).single('profilePic');
router.put(
  '/:id',
  middleware.isLoggedIn,
  (req, res, next) => {
    User
      .findByIdAndUpdate(
        req.params.id, req.body.user,
        (err, updatedUser) => {
          if (err) {
            return req.flash('error', err.message);
          }
          upload(req, res, (err) => {
            if (err) {
              eval(locus);
              return req.flash('error', err.message);
            }
            updatedUser = req.body.user;
            eval(locus);
            // redirect to the show page
            res.redirect('/dashboard/profile/' + req.params.id + '/edit');
          });
        });
  });
module.exports = router;
When I look at updatedUser, the first thing I see is
{ profilePic: 'data:image/jpeg;base64,....}
What am I doing wrong? It's not even updating the page now that I have the upload function in here. What I really want is to get the destination working on S3, but I need to get this saving first.
So, this is the most basic example of uploading an image using multer:
var express = require('express')
var multer = require('multer')

var app = express()

var storage = multer.diskStorage({
  // define where the file should be uploaded, else it will be uploaded to the system temp dir
  destination: function (req, file, cb) {
    // ./uploads should be created beforehand
    cb(null, './uploads')
  },
  // define "filename", else a random name will be used for the uploaded file
  filename: function (req, file, cb) {
    cb(null, file.fieldname + '-' + file.originalname)
  }
})

var upload = multer({ storage: storage })

// pic is the name of the image field in the form
app.put('/profile', upload.single('pic'), function (req, res, next) {
  console.log(req.file)
  res.send('Uploaded')
})

app.listen(3000)
And here is an example curl command to upload an image from the file system to the above app:
curl -X PUT -F 'pic=@/projects/eg/foto.png' localhost:3000/profile
Make sure the example works, to confirm that you understand how multer handles file uploads and that the issue is not with multer.
That said, User.findByIdAndUpdate seems to be storing the image data as a base64-encoded string somewhere; I have no idea what User.findByIdAndUpdate connects to, and it is beyond the domain of multer.
Someone on our Gitter channel (https://gitter.im/expressjs/express) might be able to suggest something. Join us there.