Upload File to S3 Dynamic Storage - amazon-s3

I'm having problems uploading files to a dynamic storage service.
AWS.config.update({
  accessKeyId: env.s3.accessKey,
  secretAccessKey: env.s3.sharedSecret,
  httpOptions: {
    agent: proxy(env.auth.proxy),
  },
});
this.s3Client = new AWS.S3({ endpoint: env.s3.accessHost, signatureVersion: 'v2' });
This is the configuration. I have to define the proxy settings since I'm behind the Swisscom corp proxy.
public upload(image: IFile): Promise<any> {
  return new Promise((resolve, reject) => {
    const key = image.originalname;
    const paramsCreateFile = { Bucket: 'test', Key: key, Body: image.buffer };
    this.s3Client.putObject(paramsCreateFile, (err, data) => {
      if (err) {
        return reject(err);
      }
      return resolve(data);
    });
  });
}
And this is my upload method.
However, when I try to upload, nothing happens. After approximately 2 minutes I get a timeout, but no error.
I followed the official documentation during implementation.
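One thing worth checking, as a sketch rather than a confirmed fix: S3-compatible services often require path-style addressing, and turning on SDK logging shows whether the request ever makes it past the proxy. Both options below exist in the AWS SDK v2; whether they apply to Dynamic Storage specifically is an assumption.
// Hedged debugging sketch for the configuration above (assumptions: path-style
// addressing is needed and the proxy agent is otherwise correct).
AWS.config.update({ logger: console }); // log every request/response the SDK makes
this.s3Client = new AWS.S3({
  endpoint: env.s3.accessHost,
  signatureVersion: 'v2',
  s3ForcePathStyle: true, // address the bucket as <host>/<bucket> instead of <bucket>.<host>
});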

Related

S3 to IPFS from Pinata

I am trying to upload a lot of files from S3 to IPFS via Pinata. I haven't found anything like that in the Pinata documentation.
This is my solution, using the form-data library. I haven't tested it yet (I will do it soon; I need to code some things first).
Is this a correct approach? Has anyone done something similar?
async uploadImagesFolder(
  items: ItemDocument[],
  bucket?: string,
  path?: string,
) {
  try {
    const form = new FormData();
    for (const item of items) {
      const file = getObjectStream(item.tokenURI, bucket, path);
      form.append('file', file, {
        filename: item.tokenURI,
      });
    }
    console.log(`Uploading files to IPFS`);
    const pinataOptions: PinataOptions = {
      cidVersion: 1,
    };
    const result = await pinata.pinFileToIPFS(form, {
      pinataOptions,
    });
    console.log(`Piñata Response:`, JSON.stringify(result, null, 2));
    return result.IpfsHash;
  } catch (e) {
    console.error(e);
  }
}
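The snippet assumes a getObjectStream helper; a hypothetical version built on the AWS SDK v2 could look like the following (the s3 client, default bucket, and key layout are assumptions, not part of the question).
// Hypothetical helper assumed by the method above: returns a readable stream for
// an S3 object, where s3 is an already-configured AWS.S3 (SDK v2) client.
function getObjectStream(key: string, bucket?: string, path?: string) {
  const Key = path ? `${path}/${key}` : key;
  return s3
    .getObject({ Bucket: bucket ?? 'default-bucket', Key }) // placeholder bucket name
    .createReadStream();
}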
I had the same problem.
So, I have found this: https://medium.com/pinata/stream-files-from-aws-s3-to-ipfs-a0e23ffb7ae5
But if I am not wrong, the article uses a different version from the JavaScript AWS SDK v3 (nowadays the most recent: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
This is for the client side with TypeScript.
If you have this version, this code snippet works for me:
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'

export const getStreamObjectInAwsS3 = async (data: YourParamsType) => {
  try {
    const BUCKET = data.bucketTarget
    const KEY = data.key
    const client = new S3Client({
      region: 'your-region',
      credentials: {
        accessKeyId: 'your-access-key',
        secretAccessKey: 'secret-key'
      }
    })
    const resource = await client.send(new GetObjectCommand({
      Bucket: BUCKET,
      Key: KEY
    }))
    const response = resource.Body
    if (response) {
      return new Response(await response.transformToByteArray()).blob()
    }
    return null
  } catch (error) {
    return null
  }
}
With the previous code you can get the Blob object, pass it as the file to this method, and get the URL of the resource from the API:
import axios from 'axios'

// The blob returned by getStreamObjectInAwsS3 is appended directly as the file part.
export const uploadFileToIPFS = async (file: Blob) => {
  const url = `https://api.pinata.cloud/pinning/pinFileToIPFS`
  const data = new FormData()
  data.append('file', file)
  try {
    const response = await axios.post(url, data, {
      maxBodyLength: Infinity,
      headers: {
        pinata_api_key: 'your-api',
        pinata_secret_api_key: 'your-secret'
      },
      data: data
    })
    return {
      success: true,
      pinataURL: `https://gateway.pinata.cloud/ipfs/${response.data.IpfsHash}`
    }
  } catch (error) {
    console.log(error)
    return null
  }
}
I found this solution in this nice article, and you can explore other implementations there (including the Node.js side).
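Tying the two helpers together, a hypothetical end-to-end call on the client could look like this (the bucket, key, and filename are placeholders):
// Hypothetical usage of the two helpers above (all names and keys are placeholders).
const pinExample = async () => {
  const blob = await getStreamObjectInAwsS3({ bucketTarget: 'my-bucket', key: 'images/cat.png' })
  if (!blob) return
  const pinned = await uploadFileToIPFS(blob)
  if (pinned) {
    console.log(pinned.pinataURL)
  }
}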

Using aws-sdk to upload to DigitalOcean

I'm using aws-sdk to upload images to a DigitalOcean Spaces bucket. On localhost it works 100%, but in production the function seems to run through without an error while the file never appears in the bucket.
I cannot figure out what is going on and can't think of a way to debug this. I also tried executing the POST request with Postman (multipart/form-data with the file added to the request body), and it is the same: localhost works, production does not.
My API endpoint:
import AWS from 'aws-sdk'
import formidable from "formidable"
import fs from 'fs'

const s3Client = new AWS.S3({
  endpoint: process.env.DO_SPACES_URL,
  region: 'fra1',
  credentials: {
    accessKeyId: process.env.DO_SPACES_KEY,
    secretAccessKey: process.env.DO_SPACES_SECRET
  }
})

export const config = {
  api: {
    bodyParser: false
  }
}

export default async function uploadFile(req, res) {
  const { method } = req
  const form = formidable()
  const now = new Date()
  const fileGenericName = `${now.getTime()}`
  const allowedFileTypes = ['jpg', 'jpeg', 'png', 'webp']
  switch (method) {
    case "POST":
      try {
        form.parse(req, async (err, fields, files) => {
          const fileType = files.file?.originalFilename?.split('.').pop().toLowerCase()
          if (!files.file) {
            return res.status(400).json({
              status: 400,
              message: 'no files'
            })
          }
          if (allowedFileTypes.indexOf(fileType) === -1) {
            return res.status(400).json({
              message: 'bad file type'
            })
          }
          const fileName = `${fileGenericName}.${fileType}`
          try {
            s3Client.putObject({
              Bucket: process.env.DO_SPACES_BUCKET,
              Key: `${fileName}`,
              Body: fs.createReadStream(files.file.filepath),
              ACL: "public-read"
            }, (err, data) => {
              console.log(err)
              console.log(data)
            })
            const url = `${process.env.FILE_URL}/${fileName}`
            return res.status(200).json({ url })
          } catch (error) {
            console.log(error)
            throw new Error('Error Occured While Uploading File')
          }
        });
        return res.status(200)
      } catch (error) {
        console.log(error)
        return res.status(500).end()
      }
    default:
      return res.status(405).end('Method is not allowed')
  }
}
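Nothing in the handler waits for putObject to finish before responding, which is a common reason an upload works locally but silently never lands when the code runs on a serverless platform: the function returns and is frozen while the request to Spaces is still in flight. A hedged sketch of the inner try block with the upload awaited (the surrounding handler stays the same; whether this is the actual cause here is an assumption):
// Sketch: await the upload before responding, so the serverless function does not
// return (and get suspended) while the PUT to Spaces is still in progress.
try {
  await s3Client.putObject({
    Bucket: process.env.DO_SPACES_BUCKET,
    Key: fileName,
    Body: fs.createReadStream(files.file.filepath),
    ACL: "public-read"
  }).promise()
  const url = `${process.env.FILE_URL}/${fileName}`
  return res.status(200).json({ url })
} catch (error) {
  console.log(error)
  return res.status(500).json({ message: 'upload failed' })
}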

Multer google storage returns 'Internal Server Error'

I have just switched image upload with Multer from local storage to Google Cloud Storage using 'multer-google-storage'. It used to work fine earlier, but now it sends a 500 Internal Server Error without a message. I am using Node.js and Express on the back end and React on the front end. The FormData is formatted correctly, since everything works fine if I go back to local storage. Any ideas on how to fix this, or at least display an error message? I am not able to find much documentation on 'multer-google-storage'. Thanks for the help!
Here is the back-end POST route (I hid the configuration options):
const multer = require('multer');
const multerGoogleStorage = require('multer-google-storage');

const upload = multer({
  storage: multerGoogleStorage.storageEngine({
    autoRetry: true,
    bucket: '******',
    projectId: '******',
    keyFilename: '../server/config/key.json',
    filename: (req, file, callback) => {
      callback(null, file.originalname);
    },
  }),
});

//#route POST api/listings
//#description Create listing
//#access Private
router.post(
  '/',
  upload.any(),
  [
    isLoggedIn,
    [
      check('title', 'Title is required').not().isEmpty(),
      check('coordinates').not().isEmpty(),
      check('address').not().isEmpty(),
      check('price', 'Set a price').not().isEmpty(),
      check('description', 'Type a description').not().isEmpty(),
      check('condition', 'Declare the condition').not().isEmpty(),
      check('category', 'Please select a category').not().isEmpty(),
    ],
  ],
  async (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      console.log('validation error');
      return res.status(400).json({ errors: errors.array() });
    }
    try {
      const files = req.files;
      let images = [];
      for (let image of files) {
        images.push(image.originalname);
      }
      const newListing = new Listing({
        title: req.body.title,
        images: images,
        coordinates: JSON.parse(req.body.coordinates),
        price: req.body.price,
        description: req.body.description,
        condition: req.body.condition,
        dimensions: req.body.dimensions,
        quantity: req.body.quantity,
        address: req.body.address,
        author: req.user.id,
        category: JSON.parse(req.body.category),
      });
      const author = await User.findById(req.user.id);
      await author.listings.push(newListing);
      await author.save();
      const listing = await newListing.save();
      res.json(listing);
    } catch (error) {
      console.log('error');
      console.error(error);
      res.json(error);
      res.status(500).send('Server Error');
    }
  }
);
I have solved the issue: it was a permission problem. My previous Google Cloud Storage bucket had its access control set to 'Uniform' while it should have been 'Fine-grained'.
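As for surfacing the error instead of a bare 500: the underlying storage error can be caught by invoking the Multer middleware manually and handling its callback. This is the standard Multer error-handling pattern rather than anything specific to 'multer-google-storage'; the sketch below reuses the route and upload names from the question.
// Sketch: run the multer middleware manually so storage errors (like the
// permission problem above) come back as a readable message instead of a bare 500.
router.post('/', (req, res, next) => {
  upload.any()(req, res, (err) => {
    if (err) {
      console.error('multer-google-storage error:', err);
      return res.status(500).json({ message: err.message });
    }
    next();
  });
}, /* ...validation checks and the async handler from above... */);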

How can I speed up S3 signedUrl upload from 1Mbps

I am currently using S3 signed URLs in order to hide my credentials from users on the front end. I have it set up and working, but it is extremely slow, around 1.2 MB/s. A speed test shows my Wi-Fi at 11.9 Mbps, so I don't believe it is my network. The image I have been testing with is only 8 MB.
Server
const { uploadFile } = require("../services/aws");

app.post("/activity/image-upload", async (req, res) => {
  try {
    const { _projectId, name, type } = req.body;
    const key = `${_projectId}/activities/${name}`;
    const signedUrl = await uploadFile({ key, type });
    res.status(200).send(signedUrl);
  } catch (err) {
    console.log("/activity/upload-image err", err);
    res.status(422).send();
  }
});
AWS Service
const aws = require("aws-sdk");
const keys = require("../config/keys");

aws.config.update({
  accessKeyId: keys.aws.accessKeyId,
  secretAccessKey: keys.aws.secretAccessKey,
  useAccelerateEndpoint: true,
  signatureVersion: "v4",
  region: "my-region",
});

const s3 = new aws.S3();

exports.uploadFile = async ({ type, key }) => {
  try {
    const awsUrl = await s3.getSignedUrl("putObject", {
      Bucket: keys.aws.bucket,
      ContentType: type,
      Key: key,
      ACL: "public-read",
    });
    return awsUrl;
  } catch (err) {
    throw err;
  }
};
Front End
const handleUpload = async ({ file, onSuccess, onProgress }) => {
  try {
    const res = await api.post("/activity/image-upload", {
      type: file.type,
      name: file.name,
      _projectId,
    });
    const upload = await axios.put(res.data, file, {
      headers: {
        "Content-Type": file.type,
      },
      onUploadProgress: handleProgressChange,
    });
  } catch (err) {
    console.log("err", err);
  }
};
Image of Request Speeds
You can see above that the call to image-upload returns in 63 ms, so the hang-up isn't my server getting the signed URL. The axios PUT request to the S3 signed URL takes 6.37 s. Unless I am horrible at math, for the 8 MB file I am uploading that is roughly 1.2 MB/s. What am I missing?
Update 7/23
Here is a picture of my speed test through Google showing an upload speed of 10.8 Mbps.
I tried uploading the image in the S3 console to compare speeds. When I uploaded it through the S3 console it took 10.11 s! Are there different plans that throttle speeds? I am even using Transfer Acceleration and it's still this slow.
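For reference, putting the question's own numbers into one unit (assuming the Google speed test reports megabits per second, as such tests normally do) suggests the upstream link, not S3, is close to the ceiling here:
// Quick sanity check on the figures quoted above (values copied from the question).
const fileSizeMB = 8;          // size of the test image in megabytes
const putDurationSec = 6.37;   // duration of the PUT to the signed URL
const throughputMBps = fileSizeMB / putDurationSec;  // ≈ 1.26 MB/s
const throughputMbps = throughputMBps * 8;           // ≈ 10 Mbit/s, close to the 10.8 Mbps speed-test result
console.log(throughputMBps.toFixed(2), throughputMbps.toFixed(1));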

Unexpected end of multipart data nodejs multer s3

I am trying to upload an image to S3. This is my code:
const upload = require('../services/file_upload');

const singleUpload = upload.single('image');

module.exports.uploadImage = (req, res) => {
  singleUpload(req, res, function (err) {
    if (err) {
      console.log(err);
      return res.status(401).send({ errors: [{ title: 'File Upload Error', detail: err }] });
    }
    console.log(res);
    return res.json({ 'imageUrl': req.file.location });
  });
}
FileUpload.js
const aws = require('aws-sdk');
const multer = require('multer');
const multerS3 = require('multer-s3');

const s3 = new aws.S3();

const fileFilter = (req, file, cb) => {
  if (file.mimetype === 'image/jpeg' || file.mimetype === 'image/png') {
    cb(null, true)
  } else {
    cb(new Error('Invalid Mime Type, only JPEG and PNG'), false);
  }
}

const upload = multer({
  fileFilter,
  storage: multerS3({
    s3,
    bucket: 'image-bucket',
    acl: 'public-read',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    metadata: function (req, file, cb) {
      cb(null, { fieldName: 'TESTING_META_DATA!' });
    },
    key: function (req, file, cb) {
      cb(null, "category_" + Date.now().toString() + ".png")
    }
  })
})

module.exports = upload;
I tried to test the API with Postman in serverless local mode, and it gives this error:
Error: Unexpected end of multipart data
at D:\Flutter\aws\mishpix_web\node_modules\dicer\lib\Dicer.js:62:28
at process._tickCallback (internal/process/next_tick.js:61:11) storageErrors: []
After deploying online, I tried the API; the file is uploaded to the server, but it's broken.
Are you using aws-serverless-express? aws-serverless-express forwards the original request to a Buffer with utf8 encoding, so the multipart data gets lost or corrupted. I am not sure why.
So I changed aws-serverless-express to aws-serverless-express-binary and everything worked.
yarn add aws-serverless-express-binary
Hope this helps!