NestJS file upload with S3 works only locally - amazon-s3

I'm really new to NestJS and I can't understand why my code works perfectly locally but doesn't work on my EC2 instance.
I have this controller:
@Post(':id/add-image')
@UseInterceptors(FileInterceptor('file'))
uploadFile(
  @Param('id') id: string,
  @UploadedFile() file: Express.Multer.File,
) {
  return this.invitationService.addImage(id, file);
}
And this is my service:
async addImage(id: string, file: Express.Multer.File) {
  const invitation = await this.findOne(id);
  if (!invitation) throw new Error('Invitation not found');
  const s3 = await this.s3Service.uploadFile(file, 'invitations');
  console.log(s3);
  return this.update(id, { image: s3.fileUrl });
}
And finally, this is my S3 service:
import { Injectable, Req, Res } from '@nestjs/common';
import * as AWS from 'aws-sdk';

@Injectable()
export class S3Service {
  AWS_S3_BUCKET = process.env.AWS_S3_BUCKET;
  s3 = new AWS.S3({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    signatureVersion: 'v4',
  });

  async uploadFile(file: Express.Multer.File, directory: string) {
    const originalName = file.originalname;
    console.log(file);
    return await this.s3_upload(file.buffer, this.AWS_S3_BUCKET, originalName, file.mimetype, directory);
  }

  async s3_upload(buffer: Buffer, bucket: string, name: string, mimetype: string, directory = 'images') {
    const params = {
      Bucket: `${bucket}/${directory}`,
      Key: String(name),
      Body: buffer,
      ACL: 'public-read',
      ContentType: mimetype,
      ContentDisposition: 'inline',
      CreateBucketConfiguration: {
        LocationConstraint: process.env.AWS_DEFAULT_REGION,
      },
    };
    console.log(params);
    try {
      const s3Response = await this.s3.upload(params).promise();
      console.log(s3Response);
      return {
        fileName: name,
        fileUrl: s3Response.Location,
        key: s3Response.Key,
      };
    } catch (e) {
      console.log(e);
    }
  }
}
When I try all this on localhost it works perfectly, but in production I get a successful response from S3 and it returns the URL correctly; however, when I try to use that URL I can't see the file.
All these files are images, so when I paste the URL returned by S3 locally into Chrome it works, but in production, when I paste that URL, Chrome tries to download the image, and when I try to open that file on my computer it says:
"It may be damaged or use a file format that Preview doesn’t recognize."
If anyone has any idea what might be going on, I'd really appreciate your help.
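A quick diagnostic that may help narrow this down (a minimal sketch, not part of the question; it assumes multer memory storage so file.buffer is populated): log the size multer reports, the length of the buffer actually handed to S3, and its first bytes. If these already look wrong on EC2, the body is being altered before it reaches the S3 client (for example by a proxy or a body-size limit), not by S3 itself.

function logUploadDiagnostics(file: Express.Multer.File) {
  // multer's reported size vs. the buffer we are about to send to S3
  console.log('reported size:', file.size, 'buffer length:', file.buffer?.length);
  // a JPEG should start with ff d8 ff, a PNG with 89 50 4e 47
  console.log('first bytes:', file.buffer?.subarray(0, 4).toString('hex'));
}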

Related

Upload file via @aws-sdk/client-s3 and graphql-upload

S3 ('@aws-sdk/client-s3') upload function:
import { Upload } from '@aws-sdk/lib-storage';

async s3UploadPhoto(fileStream, name, mimetype) {
  const fileKey = this.getFileKey(name);
  const sendParams: PutObjectCommandInput = {
    Bucket: process.env.AWS_BUCKET_NAME,
    Body: fileStream,
    Key: fileKey,
    ContentType: mimetype,
  };
  try {
    const parallelUploads3 = new Upload({
      client: this.s3,
      tags: [],
      queueSize: 4,
      leavePartsOnError: false,
      params: sendParams,
    });
    parallelUploads3.on('httpUploadProgress', (progress) => {
      console.log(progress);
    });
    return parallelUploads3.done();
  } catch (e) {
    throw new BadRequestException('');
  }
}
And the GraphQL upload code via 'graphql-upload':
const fileStream = file.createReadStream();
await this.s3Service.s3UploadPhoto(
  fileStream,
  file.filename,
  file.mimetype,
);
I get the error: ReferenceError: ReadableStream is not defined
If I upload a file to S3 without lib-storage, I get the error: Are you using a Stream of unknown length as the Body of a PutObject request? Consider using Upload instead from @aws-sdk/lib-storage.
What is wrong in my code that I get the error "ReadableStream is not defined"?

S3 to IPFS from Pinata

I am trying to upload a lot of files from S3 to IPFS via Pinata. I haven't found anything like that in the Pinata documentation.
This is my solution, using the form-data library. I haven't tested it yet (I will do it soon, I need to code some things first).
Is it a correct approach? Has anyone done something similar?
async uploadImagesFolder(
  items: ItemDocument[],
  bucket?: string,
  path?: string,
) {
  try {
    const form = new FormData();
    for (const item of items) {
      const file = getObjectStream(item.tokenURI, bucket, path);
      form.append('file', file, {
        filename: item.tokenURI,
      });
    }
    console.log(`Uploading files to IPFS`);
    const pinataOptions: PinataOptions = {
      cidVersion: 1,
    };
    const result = await pinata.pinFileToIPFS(form, {
      pinataOptions,
    });
    console.log(`Piñata Response:`, JSON.stringify(result, null, 2));
    return result.IpfsHash;
  } catch (e) {
    console.error(e);
  }
}
I had the same problem.
So, I found this: https://medium.com/pinata/stream-files-from-aws-s3-to-ipfs-a0e23ffb7ae5
But, if I'm not wrong, the article uses a version different from JavaScript AWS SDK v3 (nowadays the most recent: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
This is for the client side with TypeScript.
If you have this version, this code snippet works for me:
export const getStreamObjectInAwsS3 = async (data: YourParamsType) => {
  try {
    const BUCKET = data.bucketTarget;
    const KEY = data.key;
    const client = new S3Client({
      region: 'your-region',
      credentials: {
        accessKeyId: 'your-access-key',
        secretAccessKey: 'secret-key',
      },
    });
    const resource = await client.send(
      new GetObjectCommand({
        Bucket: BUCKET,
        Key: KEY,
      }),
    );
    const response = resource.Body;
    if (response) {
      return new Response(await response.transformToByteArray()).blob();
    }
    return null;
  } catch (error) {
    return null;
  }
};
With the previous code you can get the Blob object, pass it to the method below, and get the URL of the resource using the Pinata API:
export const uploadFileToIPFS = async (file: Response) => {
  const url = `https://api.pinata.cloud/pinning/pinFileToIPFS`;
  const data = new FormData();
  data.append('file', file);
  try {
    const response = await axios.post(url, data, {
      maxBodyLength: Infinity,
      headers: {
        pinata_api_key: 'your-api',
        pinata_secret_api_key: 'your-secret',
      },
      data: data,
    });
    return {
      success: true,
      pinataURL: `https://gateway.pinata.cloud/ipfs/${response.data.IpfsHash}`,
    };
  } catch (error) {
    console.log(error);
    return null;
  }
};
I found this solution in a nice article, where you can also explore other implementations (including the Node.js side).
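For the Node.js side, here is a minimal sketch (not from the article; it assumes @aws-sdk/client-s3, form-data, and axios, reuses the Pinata endpoint and placeholder keys shown above, and the helper name is hypothetical) that streams an object from S3 straight into pinFileToIPFS:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import FormData from 'form-data';
import axios from 'axios';
import { Readable } from 'stream';

export const pinS3ObjectToIPFS = async (bucket: string, key: string) => {
  const client = new S3Client({ region: 'your-region' });
  const { Body } = await client.send(new GetObjectCommand({ Bucket: bucket, Key: key }));

  const form = new FormData();
  // In Node.js the Body is a readable stream; give it a filename so Pinata accepts it.
  form.append('file', Body as Readable, { filename: key });

  const response = await axios.post('https://api.pinata.cloud/pinning/pinFileToIPFS', form, {
    maxBodyLength: Infinity,
    headers: {
      ...form.getHeaders(),
      pinata_api_key: 'your-api',
      pinata_secret_api_key: 'your-secret',
    },
  });
  return response.data.IpfsHash;
};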

Get uploaded object URL with Javascript 'aws-sdk' v3

Currently, we are using aws-sdk v2 and extracting the uploaded file URL this way:
const res = await S3Client
  .upload({
    Body: body,
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
  })
  .promise();
return res.Location;
Now we have to upgrade to aws-sdk v3, and the new way to upload files looks like this
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const res = await S3Client.send(command);
Unfortunately, the res object doesn't contain a Location property now.
The getSignedUrl SDK function doesn't look suitable because it just generates a URL with an expiration date (it could probably be set to some huge duration, but we still need to be able to analyze the URL path).
Building the URL manually does not look like a good idea or a stable solution to me.
Answering myself: I don't know whether a better solution exists, but here is how I do it
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const [res, region] = await Promise.all([
  s3Client.send(command),
  s3Client.config.region(),
]);
const url = `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
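An alternative sketch (not from the answers here; it assumes the @aws-sdk/s3-request-presigner package is installed) is to let the SDK build the URL for you: generate a presigned URL and drop its query string, since what remains is the plain object URL.

import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Derive the object URL from a presigned URL by removing the signature parameters.
const signed = await getSignedUrl(
  s3Client,
  new GetObjectCommand({ Bucket: bucket, Key: key }),
  { expiresIn: 60 }, // the expiry is irrelevant here; only origin + path are kept
);
const objectUrl = new URL(signed);
objectUrl.search = '';
console.log(objectUrl.toString()); // https://<bucket>.s3.<region>.amazonaws.com/<key>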
You can use the Upload method from "@aws-sdk/lib-storage" with the sample code below.
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

const target = { Bucket, Key, Body };
try {
  const parallelUploads3 = new Upload({
    client: new S3Client({}),
    tags: [...], // optional tags
    queueSize: 4, // optional concurrency configuration
    leavePartsOnError: false, // optional manually handle dropped parts
    params: target,
  });
  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });
  await parallelUploads3.done();
} catch (e) {
  console.log(e);
}
Make sure you return the object from parallelUploads3.done(); you will get the location in the returned object, as below.
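For example, a minimal sketch of reading that field (assuming a recent @aws-sdk/lib-storage where done() resolves to the completed-upload output):

const result = await parallelUploads3.done();
// Location holds the URL of the uploaded object.
console.log(result.Location);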
(Screenshot: S3 upload response)
Reference
https://stackoverflow.com/a/70159394/16729176

Uploading image - data appears like this "���"�!1A"Qaq��2��B�#" and image is blank - Next.js application upload to DigitalOcean Spaces / AWS S3

I am trying to let my users upload photos in a Next.js application.
I set up a remote database and I am writing to the database properly, but the images are appearing blank. I'm thinking it must be a problem with the format of the data coming in.
Here is my code on the front end in React:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  await fetch('/api/image/profileUpload', {
    method: 'POST',
    body: file,
    'Content-Type': 'image/jpg',
  })
  .then(res => {
    console.log('final:', res);
  })
};

return (
  <label htmlFor="file-upload">
    <div>
      <img src={profileImage} className="profile-image-lg dashboard-profile-image"/>
      <div id="dashboard-image-hover" >Upload Image</div>
    </div>
  </label>
  <input id="file-upload" type="file" onChange={handleProfileImageUpload}/>
)
The "file" I declare above (const file = e.target.files[0]) appears like this on console.log(file):
+ --------++-+-++-+------------+----++-+--7--7----7-���"�!1A"Qaq��2��B�#br���$34R����CSst���5����)!1"AQaq23B����
?�#��P�n�9?Y�
ޞ�p#��zE� Nk�2iH��l��]/P4��JJ!��(�#�r�Mң[ ���+���PD�HVǵ�f(*znP�>�HRT�!W��\J���$�p(Q�=JF6L�ܧZ�)�z,[�q��� *
�i�A\5*d!%6T���ͦ�#J{6�6��
k#��:JK�bꮘh�A�%=+E q\���H
q�Q��"�����B(��OЛL��B!Le6���(�� aY
�*zOV,8E�2��IC�H��*)#4է4.�ɬ(�<5��j!§eR27��
��s����IdR���V�u=�u2a��
... and so on. It's long.
I am uploading to Digital Ocean's Spaces object storage, which interfaces with AWS S3. Again, my application is written in Next.js and I am using a serverless environment.
Here is the API route I am sending it to ('/api/image/profileUpload.js'):
import AWS from 'aws-sdk';

export default async function handler(req, res) {
  // get the image data
  let image = req.body;
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });
  // create parameters for upload
  const uploadParams = {
    Bucket: 'oscarexpert',
    Key: 'asdff',
    Body: image,
    ContentType: "image/jpeg",
    ACL: "public-read",
  };
  // execute upload
  s3.upload(uploadParams, (err, data) => {
    if (err) return console.log('reject', err)
    else return console.log('resolve', data)
  })
  // returning arbitrary object for now
  return res.json({});
};
When I console.log(image), it shows the same garbled string that I posted above, so I know it's getting the same exact data. Maybe this needs to be further parsed?
The code above is directly from a DigitalOcean tutorial, adapted to my environment. I am taking note of the "Body" parameter, which is where the garbled string is being passed in.
What I've tried:
Stringifying the "image" before passing it to the Body param
Using multer-s3 to process the request on the backend
Requesting through Postman (the image comes in with the exact same garbled format)
I've spent days on this issue. Any guidance would be much appreciated.
Figured it out. I wasn't encoding the image properly in my Next.js serverless backend.
First, on the front end, I made my fetch request like this. It's important to send the file as FormData for the next step on the backend:
async function handleProfileImageUpload(e) {
  const file = e.target.files[0];
  const formData = new FormData();
  formData.append('file', file);
  // CHECK THAT THE FILE IS PROPER FORMAT (size, type, etc)
  let url = false;
  await fetch(`/api/image/profileUpload`, {
    method: 'POST',
    body: formData,
    'Content-Type': 'image/jpg',
  })
}
There were several components that helped me finally do this on the backend, so I am just going to post the code I ended up with. Here's the API route:
import AWS from 'aws-sdk';
import formidable from 'formidable-serverless';
import fs from 'fs';

export const config = {
  api: {
    bodyParser: false,
  },
};

export default async (req, res) => {
  // create S3 instance with credentials
  const s3 = new AWS.S3({
    endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com'),
    accessKeyId: process.env.SPACES_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
    region: 'nyc3',
  });
  // parse request to readable form
  const form = new formidable.IncomingForm();
  form.parse(req, async (err, fields, files) => {
    // Account for parsing errors
    if (err) return res.status(500);
    // Read file
    const file = fs.readFileSync(files.file.path);
    // Upload the file
    s3.upload({
      // params
      Bucket: process.env.SPACES_BUCKET,
      ACL: "public-read",
      Key: 'something',
      Body: file,
      ContentType: "image/jpeg",
    })
    .send((err, data) => {
      if (err) {
        console.log('err', err)
        return res.status(500);
      };
      if (data) {
        console.log('data', data)
        return res.json({
          url: data.Location,
        });
      };
    });
  });
};
If you have any questions feel free to leave a comment.

Why when I upload file with apollo-server the file is uploaded but the file is 0kb?

I tried to solve the problem but I don't understand why the file is uploaded but its size is 0 KB.
I saw this code in a tutorial and it works there, but it doesn't work for me.
const { ApolloServer, gql } = require('apollo-server');
const path = require('path');
const fs = require('fs');

const typeDefs = gql`
  type File {
    url: String!
  }
  type Query {
    hello: String!
  }
  type Mutation {
    fileUpload(file: Upload!): File!
  }
`;

const resolvers = {
  Query: {
    hello: () => 'Hello world!',
  },
  Mutation: {
    fileUpload: async (_, { file }) => {
      const { createReadStream, filename, mimetype, encoding } = await file;
      const stream = createReadStream();
      const pathName = path.join(__dirname, `/public/images/${filename}`);
      await stream.pipe(fs.createWriteStream(pathName));
      return {
        url: `http://localhost:4000/images/${filename}`,
      };
    },
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
Then when I upload the file, it is uploaded, but the file is 0 KB.
What is happening is that the resolver returns before the file has been fully written, so the server responds before the client has finished uploading. You need to promisify and await the file upload stream events in the resolver.
Here is an example:
https://github.com/jaydenseric/apollo-upload-examples/blob/c456f86b58ead10ea45137628f0a98951f63e239/api/server.js#L40-L41
In your case:
const resolvers = {
  Query: {
    hello: () => "Hello world!",
  },
  Mutation: {
    fileUpload: async (_, { file }) => {
      const { createReadStream, filename } = await file;
      const stream = createReadStream();
      // Use a distinct name so we don't shadow the `path` module.
      const filePath = path.join(__dirname, `/public/images/${filename}`);
      // Store the file in the filesystem.
      await new Promise((resolve, reject) => {
        // Create a stream to which the upload will be written.
        const writeStream = fs.createWriteStream(filePath);
        // When the upload is fully written, resolve the promise.
        writeStream.on("finish", resolve);
        // If there's an error writing the file, remove the partially written
        // file and reject the promise.
        writeStream.on("error", (error) => {
          fs.unlink(filePath, () => {
            reject(error);
          });
        });
        // In Node.js <= v13, errors are not automatically propagated between
        // piped streams. If there is an error receiving the upload, destroy the
        // write stream with the corresponding error.
        stream.on("error", (error) => writeStream.destroy(error));
        // Pipe the upload into the write stream.
        stream.pipe(writeStream);
      });
      return {
        url: `http://localhost:4000/images/${filename}`,
      };
    },
  },
};
Note that it’s probably not a good idea to use the filename like that to store the uploaded files, as future uploads with the same filename will overwrite earlier ones. I'm not really sure what will happen if two files with the same name are uploaded at the same time by two clients.
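One way to avoid that (a sketch, not part of the original answer; the helper name is hypothetical) is to prefix the stored name with a random ID so concurrent uploads with the same filename can never collide:

import { randomUUID } from 'crypto';

// Keep the original filename but make the stored name unique.
function uniqueFileName(filename: string): string {
  return `${randomUUID()}-${filename}`;
}

// e.g. inside the resolver:
// const filePath = path.join(__dirname, `/public/images/${uniqueFileName(filename)}`);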