Error while uploading multiple files with different file names with Skipper in Sails.js - file-upload

I am trying to upload multiple types of files to S3 in Sails.js.
The files are as follows: VehicleImages, VehicleVideos, VehicleDocuments.
I am passing these three types of files to my server via multipart/form-data, with each field named after its file type.
My approach is to upload one type of file, let's say VehicleImages, using await, then upload the next, and then the next.
I am unable to do so, since I get the error:
{
  "cause": {
    "code": "EMAXBUFFER",
    "message": "EMAXBUFFER: An upstream (`vehicleDocuments`) timed out before it was plugged into a receiver. It was still unused after waiting 4500ms. You can configure this timeout by changing the `maxTimeToBuffer` option.\n\nNote that this error might be occurring due to an earlier file upload that is finally timing out after an unrelated server error."
  },
  "isOperational": true,
  "code": "EMAXBUFFER"
}
This is my code from the policies config:
VehicleController: {
  create: ["isLoggedIn", "vehicleImages", "vehicleKey", "vehicleDocuments"],
  update: ["isLoggedIn", "vehicleImages", "vehicleKey", "vehicleDocuments"],
  "*": "isLoggedIn",
},
This is the helper function that I am using in my policies:
fn: async function (inputs) {
  const { req, res, proceed, fieldName, folderName, setParams } = inputs;
  const skipperUpstream = req.file(fieldName);
  const file = skipperUpstream._files[0];
  if (!file) {
    // `skipperUpstream.__proto__` is `Upstream`. It provides `noMoreFiles()` to stop receiving files.
    // It also clears all timeouts: https://npmdoc.github.io/node-npmdoc-skipper/build/apidoc.html#apidoc.element.skipper.Upstream.prototype.noMoreFiles
    skipperUpstream.noMoreFiles();
    return proceed();
  }
  let timestamp = Date.now();
  let uploadedFiles = await sails.upload(req.file(fieldName), {
    adapter: require("skipper-s3"),
    key: process.env.AWS_ACCESS_KEY_ID,
    secret: process.env.AWS_SECRET_ACCESS_KEY,
    bucket: process.env.AWS_BUCKET_NAME,
    region: "us-east-2",
    headers: {
      "x-amz-acl": "public-read",
    },
    // maxBytes: sails.config.custom.maxFileSize,
    // change folder and file name
    dirname: folderName,
    saveAs: function (__newFileStream, next) {
      next(undefined, timestamp + __newFileStream.filename);
    },
  });
  sails.log("filesUploaded In Helper", uploadedFiles);
  if (uploadedFiles) {
    uploadedFiles.forEach((file) => {
      file.fd =
        process.env.AWS_BUCKET_URL +
        ((folderName ? folderName + "%5C" : "") +
          timestamp +
          file.filename.replaceAll("\\", "%5C"));
    });
    sails.log("filesUploaded", uploadedFiles[0]);
    req.files = uploadedFiles;
    setParams && setParams(uploadedFiles);
  }
  return proceed();
},
My aim is to upload large files (images, videos, documents) from a React application to an S3 bucket. How can I do this?
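A hedged note on the error (this is not from the thread): EMAXBUFFER means an upstream waited longer than skipper's maxTimeToBuffer (default 4500ms) before being plugged into a receiver, which is exactly what happens when each policy awaits a full S3 upload before the next field's upstream is consumed. Two directions: handle all three fields in one policy/action so every upstream gets a receiver promptly, or raise the timeout by configuring skipper yourself in config/http.js. A minimal sketch of the latter, assuming a standard Sails 1.x setup (the 30000ms value is illustrative, not a recommendation):
// config/http.js -- sketch only
module.exports.http = {
  middleware: {
    // Replace the stock body parser with skipper configured explicitly,
    // giving buffered upstreams longer to wait for a receiver.
    bodyParser: (function _configureSkipper() {
      return require("skipper")({
        maxTimeToBuffer: 30000, // ms; the default is 4500
      });
    })(),
  },
};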

Related

Nuxt3 and Cloudinary Upload API: The "path" argument must be of type string. Received an instance of Object

I am trying to use the Cloudinary Upload API to programmatically upload a File object to my Cloudinary media library, but for some reason I get the following server error:
{
  "url": "/api/cloudinary/upload",
  "statusCode": 500,
  "statusMessage": "Internal Server Error",
  "message": "The \"path\" argument must be of type string. Received an instance of Object",
  "description": "<pre><span class=\"stack internal\">at new NodeError (node:internal/errors:371:5)</span>\n<span class=\"stack internal\">at validateString (node:internal/validators:119:11)</span>\n<span class=\"stack\">at basename (node:path:752:5)</span>\n<span class=\"stack internal\">at post // snipped out rest
Am I not setting up the folder path correctly here or is something else the problem?
I am using the following code (in my NuxtJS app):
components/UserProfileAvatar.vue:
const file = ref(null)

const handleAvatarUpload = async (e) => {
  file.value = e.target.files[0]

  // Upload file to Cloudinary and get public_id
  const options = {
    public_id: 'profile-image',
    folder: `users/${userStore.userProfile.uid}/avatar/`
  }

  try {
    const response = await $fetch('/api/cloudinary/upload', {
      method: 'POST',
      body: {
        options: options,
        file: file.value
      }
    })
    console.log('Uploaded!', response.asset)
  } catch (error) {
    console.log('Error uploading file', error)
  }
}
Can anyone spot why I get the error "The \"path\" argument must be of type string. Received an instance of Object" in my original attempt?
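A likely cause, sketched here (not the poster's code; names follow the snippet above): $fetch serializes the body as JSON, so the server receives the File as a plain object, and Cloudinary's Node upload() expects a string (a file path or a data URI). One workaround is to read the file into a base64 data URI on the client and send that string instead. fileToDataUri below is a hypothetical helper:
const fileToDataUri = (file) =>
  new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = () => resolve(reader.result) // e.g. "data:image/png;base64,..."
    reader.onerror = reject
    reader.readAsDataURL(file)
  })

const handleAvatarUpload = async (e) => {
  const dataUri = await fileToDataUri(e.target.files[0])
  const response = await $fetch('/api/cloudinary/upload', {
    method: 'POST',
    // The server can now pass `file` straight through to the Cloudinary SDK
    body: { options, file: dataUri }
  })
}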

react-native (Expo) upload file on background

In my Expo (react-native) application, I want to run an upload task even if the application is in the background or killed.
The upload should go to Firebase Storage, so we don't have a REST API of our own.
I checked out the Expo TaskManager library, but I could not figure out how this should be done. Is it even possible to achieve this goal with Expo? Is TaskManager the correct package for this task?
Only some Expo packages can be registered as a task (e.g. BackgroundFetch); it is not possible to register a custom function (in this case the uploadFile method).
I got even more confused because iOS requires adding the UIBackgroundModes key, but its only possible values are audio, location, voip, external-accessory, bluetooth-central, bluetooth-peripheral, fetch, remote-notification, and processing.
I would appreciate it if you could at least guide me on where to start, or what to search for, to be able to upload the file even if the app is in the background or killed/terminated.
import { getStorage, ref, uploadBytes } from "firebase/storage";

const storage = getStorage();
const storageRef = ref(storage, 'videos');

const uploadFile = async (file) => {
  // the file is a Blob object
  await uploadBytes(storageRef, file);
}
I have already reviewed react-native-background-fetch, react-native-background-upload, and react-native-background-job: background-upload would require ejecting from Expo, background-job does not support iOS, and background-fetch is designed for running work at intervals.
If there is a way to use the mentioned libraries for my purpose, please guide me :)
To my understanding, the Firebase Cloud JSON API does not accept files, does it? If it does, please give me an example. If I can make the Storage JSON API work with file upload, then I can probably use Expo's async upload without ejecting.
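On the JSON API sub-question: Firebase Storage buckets are Google Cloud Storage buckets, and the GCS JSON API does accept raw file bytes via a media upload, so Expo's FileSystem.uploadAsync can target it directly. A hedged sketch (not from the thread; obtaining a valid token and satisfying your bucket's security rules are assumed and out of scope):
import * as FileSystem from 'expo-file-system';

// Sketch: uploadViaJsonApi is a hypothetical helper; bucket, objectName and
// accessToken must come from your own project and auth flow.
async function uploadViaJsonApi(localUri, bucket, objectName, accessToken) {
  const url =
    `https://storage.googleapis.com/upload/storage/v1/b/${bucket}/o` +
    `?uploadType=media&name=${encodeURIComponent(objectName)}`;
  return FileSystem.uploadAsync(url, localUri, {
    httpMethod: 'POST',
    uploadType: FileSystem.FileSystemUploadType.BINARY_CONTENT,
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'video/mp4', // assumption: match your file's real mime type
    },
  });
}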
I have done something similar to what you want; you can use expo-task-manager and expo-background-fetch. Here is the code as I used it. I hope this is useful for you.
import * as BackgroundFetch from 'expo-background-fetch';
import * as TaskManager from 'expo-task-manager';

const BACKGROUND_FETCH_TASK = 'background-fetch';

const [isRegistered, setIsRegistered] = useState(false);
const [status, setStatus] = useState(null);

// Minimum interval value so the task runs on iOS
BackgroundFetch.setMinimumIntervalAsync(60 * 15);

// Define the task to execute
TaskManager.defineTask(BACKGROUND_FETCH_TASK, async () => {
  const now = Date.now();
  console.log(`Got background fetch call at date: ${new Date(now).toISOString()}`);
  // Your function or instructions you want
  return BackgroundFetch.Result.NewData;
});

// Register the task as BACKGROUND_FETCH_TASK
async function registerBackgroundFetchAsync() {
  return BackgroundFetch.registerTaskAsync(BACKGROUND_FETCH_TASK, {
    minimumInterval: 60 * 15, // 15 minutes
    stopOnTerminate: false, // android only,
    startOnBoot: true, // android only
  });
}

// Task status
const checkStatusAsync = async () => {
  const status = await BackgroundFetch.getStatusAsync();
  const isRegistered = await TaskManager.isTaskRegisteredAsync(
    BACKGROUND_FETCH_TASK
  );
  setStatus(status);
  setIsRegistered(isRegistered);
};

// Check if the task is already registered
const toggleFetchTask = async () => {
  if (isRegistered) {
    console.log('Task ready');
  } else {
    await registerBackgroundFetchAsync();
    console.log('Task registered');
  }
  checkStatusAsync();
};

useEffect(() => {
  toggleFetchTask();
}, []);
Hope this isn't too late to be helpful.
I've been dealing with a variety of expo <-> firebase storage integrations recently, and here's some info that might be helpful.
First, I'd recommend not using the uploadBytes / uploadBytesResumable methods from Firebase. This Thread has a long ongoing discussion about it, but basically it's broken in v9. Maybe in the future the Firebase team will solve the issues, but it's pretty broken with Expo right now.
Instead, I'd recommend writing a small Firebase function that either returns a signed upload URL or handles the upload itself.
Basically, if you can get storage uploads to work via an http endpoint, you can get any kind of upload mechanism working. (e.g. the FileSystem.uploadAsync() method you're probably looking for here, like #brentvatne pointed out, or fetch, or axios. I'll show a basic wiring at the end).
Server Side
Option 1: Signed URL Upload.
Basically, have a small Firebase function that returns a signed URL. Your app calls a cloud function like /get-signed-upload-url, gets back the URL, and uses it. Check out: https://cloud.google.com/storage/docs/access-control/signed-urls for how you'd go about this.
This might work well for your use case. It can be configured just like any httpsCallable function, so it's not much work to set up, compared to option 2.
However, this doesn't work with the Firebase Storage / Functions emulator! For this reason, I don't use this method, because I like to use the emulators intensively, and they only offer a subset of all the functionalities.
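For reference, a minimal sketch of what such a function might look like (not from the original answer; names and the 15-minute expiry are illustrative, firebase-admin is assumed to be initialized, and the service account needs signing permissions):
const functions = require("firebase-functions");
const admin = require("firebase-admin");

// Hypothetical callable: returns a V4 signed URL the client can PUT the file to.
exports.getSignedUploadUrl = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }
  const file = admin
    .storage()
    .bucket()
    .file(`users/${context.auth.uid}/${data.fileName}`);
  const [url] = await file.getSignedUrl({
    version: "v4",
    action: "write",
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: data.contentType,
  });
  return { url };
});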
Option 2: Upload the file entirely through a function
This is a little hairier, but gives you a lot more control over your uploads, and will work on an emulator! I like this too because it allows doing the upload processing within the endpoint's execution, instead of as a side effect.
For example, you can have a photo-upload endpoint generate thumbnails, and if the endpoint 201's, then you're good, rather than the traditional Firebase approach of having a listener on Cloud Storage that generates thumbnails as a side effect, which then has all kinds of bad race conditions (checking for processing completion via exponential backoff? Gross!)
Here are three resources I'd recommend to go about this approach:
https://cloud.google.com/functions/docs/writing/http#multipart_data
https://github.com/firebase/firebase-js-sdk/issues/5848
https://github.com/mscdex/busboy
Basically, if you can make a Firebase cloud endpoint that accepts a File within formdata, you can have busboy parse it, and then you can do anything you want with it... like upload it to Cloud Storage!
An outline of this:
import * as functions from "firebase-functions";
import * as busboy from "busboy";
import * as os from "os";
import * as path from "path";
import * as fs from "fs";

type FieldMap = {
  [fieldKey: string]: string;
};

type Upload = {
  filepath: string;
  mimeType: string;
};

type UploadMap = {
  [fileName: string]: Upload;
};

const MAX_FILE_SIZE = 2 * 1024 * 1024; // 2MB

export const uploadPhoto = functions.https.onRequest(async (req, res) => {
  verifyRequest(req); // Verify parameters, auth, etc. Better yet, use a middleware system for this like express.

  // This object will accumulate all the fields, keyed by their name
  const fields: FieldMap = {};

  // This object will accumulate all the uploaded files, keyed by their name.
  const uploads: UploadMap = {};

  // This will accumulate errors during the busboy process, allowing us to end early.
  const errors: string[] = [];

  const tmpdir = os.tmpdir();
  const fileWrites: Promise<unknown>[] = [];

  function cleanup() {
    Object.entries(uploads).forEach(([filename, { filepath }]) => {
      console.log(`unlinking: ${filename} from ${filepath}`);
      fs.unlinkSync(filepath);
    });
  }

  const bb = busboy({
    headers: req.headers,
    limits: {
      files: 1,
      fields: 1,
      fileSize: MAX_FILE_SIZE,
    },
  });

  bb.on("file", (name, file, info) => {
    verifyFile(name, file, info); // Verify your mimeType / filename, etc.
    file.on("limit", () => {
      console.log("too big of file!");
    });
    const { filename, mimeType } = info;
    // Note: os.tmpdir() points to an in-memory file system on GCF
    // Thus, any files in it must fit in the instance's memory.
    console.log(`Processed file ${filename}`);
    const filepath = path.join(tmpdir, filename);
    uploads[filename] = {
      filepath,
      mimeType,
    };
    const writeStream = fs.createWriteStream(filepath);
    file.pipe(writeStream);

    // File was processed by Busboy; wait for it to be written.
    // Note: GCF may not persist saved files across invocations.
    // Persistent files must be kept in other locations
    // (such as Cloud Storage buckets).
    const promise = new Promise((resolve, reject) => {
      file.on("end", () => {
        writeStream.end();
      });
      writeStream.on("finish", resolve);
      writeStream.on("error", reject);
    });
    fileWrites.push(promise);
  });

  bb.on("close", async () => {
    await Promise.all(fileWrites);
    // Fail if errors:
    if (errors.length > 0) {
      functions.logger.error("Upload failed", errors);
      res.status(400).send(errors.join());
    } else {
      try {
        const upload = Object.values(uploads)[0];
        if (!upload) {
          functions.logger.debug("No upload found");
          res.status(400).send("No file uploaded");
          return;
        }
        const { uploadId } = await processUpload(upload, userId);
        cleanup();
        res.status(201).send({
          uploadId,
        });
      } catch (error) {
        cleanup();
        functions.logger.error("Error processing file", error);
        res.status(500).send("Error processing file");
      }
    }
  });

  bb.end(req.rawBody);
});
Then, that processUpload function can do anything you want with the file, like upload it to cloud storage:
async function processUpload({ filepath, mimeType }: Upload, userId: string) {
  const fileId = uuidv4();
  const bucket = admin.storage().bucket();
  await bucket.upload(filepath, {
    destination: `users/${userId}/${fileId}`,
    metadata: {
      contentType: mimeType,
    },
  });
  return { uploadId: fileId };
}
Mobile Side
Then, on the mobile side, you can interact with it like this:
async function uploadFile(uri: string) {
  function getFunctionsUrl(): string {
    if (USE_EMULATOR) {
      const origin =
        Constants?.manifest?.debuggerHost?.split(":").shift() || "localhost";
      const functionsPort = 5001;
      const functionsHost = `http://${origin}:${functionsPort}/{PROJECT_NAME}/${PROJECT_LOCATION}`;
      return functionsHost;
    } else {
      return `https://{PROJECT_LOCATION}-{PROJECT_NAME}.cloudfunctions.net`;
    }
  }

  // The url of your endpoint. Make this as smart as you want.
  const url = `${getFunctionsUrl()}/uploadPhoto`;

  await FileSystem.uploadAsync(url, uri, {
    httpMethod: "POST",
    uploadType: FileSystem.FileSystemUploadType.MULTIPART,
    fieldName: "file", // Important! Make sure this matches however you want busboy to validate the "name" field on the file.
    mimeType,
    headers: {
      "content-type": "multipart/form-data",
      Authorization: `${idToken}`,
    },
  });
}
TLDR
Wrap Cloud Storage in your own endpoint, treat it like a normal http upload, everything plays nice.

I am converting images to PDF using an npm library in React Native. Why is it giving a null object error?

I am using the react-native-image-to-pdf library to convert images to PDF in my React Native app, from https://www.npmjs.com/package/react-native-image-to-pdf:
var photoPath = [
  'https://images.pexels.com/photos/20787/pexels-photo.jpg?auto=compress&cs=tinysrgb&h=350',
  'https://images.pexels.com/photos/20787/pexels-photo.jpg?auto=compress&cs=tinysrgb&h=350',
];

const myAsyncPDFFunction = async () => {
  try {
    const options = {
      imagePaths: photoPath,
      name: 'PDFName',
    };
    const pdf = await RNImageToPdf.createPDFbyImages(options);
    console.log(pdf.filePath);
  } catch (e) {
    console.log(e);
  }
}
But this is giving the error: Error: Attempt to invoke virtual method 'int android.graphics.Bitmap.getWidth()' on a null object reference
I have also tried giving the path as ['./assets/a.png', './assets/b.png'], but I am still getting the same error.
Based on the usage example, your photoPath needs to be a local file path and not a remote path.
My recommendation is to first use rn-fetch-blob to download the remote image to the device, and then pass your new local image path to react-native-image-to-pdf. Something like:
RNFetchBlob
  .config({
    // add this option that makes response data to be stored as a file,
    // this is much more performant.
    fileCache: true,
  })
  .fetch('GET', 'http://www.example.com/file/example.png', {
    // some headers ..
  })
  .then(async (res) => {
    // the temp file path
    console.log('The file saved to ', res.path());
    const options = {
      imagePaths: [res.path()],
      name: 'PDFName',
    };
    const pdf = await RNImageToPdf.createPDFbyImages(options);
  });
From the file path, remove the 'file://' prefix (replace it with an empty string):
const options = {
  imagePaths: [uri.replace('file://', '')],
  name: 'FileName',
  quality: .9, // optional compression parameter
};
replace('file://', '') works for me.

How to write a mutation resolver that handles image AND form data upload stream?

So, currently I’m able to successfully upload an image through the client to an S3 bucket. However, my question is, how would I go about including additional form fields, such as an accompanying title and/or body field, with the existing image file?
The server component primarily utilizes apollo-server-express, graphql-upload, and aws-sdk while the client just uses apollo-client
My end goal is essentially to be able to take the user's form data on submit, which consists of multiple text fields and an image file, and then upload the contents to the S3 bucket in its own individual directory.
Would it be possible to compile the non-image-file form data as JSON, and then upload both files (the JSON and the image file) as a batch?
Here are some snippets for some context:
// server/typedefs.js
const typeDefs = gql`
  scalar Upload

  type File {
    id: ID!
    filename: String!
    mimetype: String!
    encoding: String!
  }

  type Mutation {
    singleUploadStream(file: Upload!): File!
  }

  type Query {
    files: [File]
  }
`;
// server/resolvers.js
...
...
Mutation: {
  singleUploadStream: async (parent, args) => {
    const file = await args.file;
    const { createReadStream, filename } = file;
    const fileStream = createReadStream();
    const uploadParams = {
      Bucket: process.env.BUCKET_NAME,
      Key: `uploads/${filename}/${filename}`,
      Body: fileStream,
    };
    await s3.upload(uploadParams).promise();
    return file;
  },
}
...
...
// server/index.js
...
...
const app = express();

const server = new ApolloServer({
  typeDefs,
  resolvers,
  uploads: false,
  introspection: true,
});

app.use(graphqlUploadExpress());
server.applyMiddleware({ app });

app.listen({ port: 4000 }, () => {
  console.log(`🚀 Server ready at http://localhost:4000${server.graphqlPath}`);
});
Please don't hesitate to ask if you need more info or if you're not sure what I'm asking for!
I really appreciate any help I can get ☹️ Thank you!
My resources:
https://www.apollographql.com/blog/graphql-file-uploads-with-react-hooks-typescript-amazon-s3-tutorial-ef39d21066a2/
https://dev.to/fhpriamo/painless-graphql-file-uploads-with-apollo-server-to-amazon-s3-and-local-filesystem-1bn0
https://www.thomasmaximini.com/upload-images-to-aws-s3-with-react-and-apollo-graphql
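One way this could look, sketched under assumptions (this is not an answer from the thread; s3 and the schema names mirror the snippets above): graphql-upload lets a mutation mix the Upload scalar with ordinary String arguments, so the title and body can travel in the same request, and the resolver can write them as a JSON sidecar next to the image:
// server/typedefs.js -- sketch: add plain String args beside the Upload scalar
const typeDefs = gql`
  scalar Upload

  type File {
    id: ID!
    filename: String!
    mimetype: String!
    encoding: String!
  }

  type Mutation {
    singleUploadStream(file: Upload!, title: String, body: String): File!
  }
`;

// server/resolvers.js -- sketch: store the image plus a meta.json sidecar
Mutation: {
  singleUploadStream: async (parent, { file, title, body }) => {
    const { createReadStream, filename } = await file;
    const dir = `uploads/${filename}`;
    await s3
      .upload({
        Bucket: process.env.BUCKET_NAME,
        Key: `${dir}/${filename}`,
        Body: createReadStream(),
      })
      .promise();
    // The non-file fields, compiled to JSON in the same "directory".
    await s3
      .upload({
        Bucket: process.env.BUCKET_NAME,
        Key: `${dir}/meta.json`,
        Body: JSON.stringify({ title, body }),
        ContentType: "application/json",
      })
      .promise();
    return await file;
  },
},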

Unable to upload Nativescript images to S3

I have a nativescript-vue application where I want to take an image captured with the camera and upload it to my S3 bucket.
I installed the plugin:
tns plugin add nativescript-aws-sdk
I have in my app.js:
import { S3 } from "nativescript-aws-sdk/s3";
S3.init({ endPoint: '/images/person', accessKey: config.AWS_ACCESS_KEY, secretKey: config.AWS_SECRET_KEY, type: 'static' });
Vue.use(S3);
then in my function to upload an image:
uploadPicture() {
  console.log('uploadPicture called..');
  const s = new S3();
  console.log('S3 inited..' + JSON.stringify(s));
But this results in:
CONSOLE LOG file:///node_modules/#nativescript/core/image-source/image-source.js:331:16: 'fromNativeSource() is deprecated. Use ImageSource constructor instead.'
CONSOLE LOG file:///app/components/Photo.vue:303:0: 'uploadPicture called..'
CONSOLE LOG file:///app/components/Photo.vue:304:0: 'S3 const..undefined'
So there is something with S3.init() that is causing an error. I do not see many examples of this plugin online, so either nobody has issues with it or it's not being used?
If anyone can see what I am doing wrong, can you kindly point me in the right direction?
Update:
Moved the import from the Vue file to app.js:
import { S3 } from "nativescript-aws-sdk/s3";
S3.init({ endPoint: '/images/person', accessKey: config.AWS_ACCESS_KEY, secretKey: config.AWS_SECRET_KEY, type: 'static' });
And in my Vue file:
import { S3 } from 'nativescript-aws-sdk/s3';
const s3 = new S3();
const imageUploaderId = s3.createUpload({
  file: that.cameraImage,
  bucketName: config.AWS_BUCKET_NAME,
  key: `ns_${isIOS ? 'ios' : 'android'}` + fileName,
  acl: 'public-read',
  completed: (error, success) => {
    if (error) {
      console.log(`S3 Download Failed :-> ${error.message}`);
    }
    if (success) {
      console.log(`S3 Download Complete :-> ${success.path}`);
    }
  },
  progress: progress => {
    console.log(`Progress : ${progress.value}`);
  }
}).catch((err) => {
  console.error("createUpload() caught error: " + JSON.stringify(err));
});

s3.pause(imageUploaderId);
s3.resume(imageUploaderId);
s3.cancel(imageUploaderId);
But nothing seems to happen - no errors or progress.
You've declared S3 twice, once globally:
import S3 from "nativescript-aws-sdk/s3";
and I'm guessing a local version inside the uploadPicture() method:
const S3 = require('nativescript-aws-sdk/s3').S3;
The imported S3 has to do the init, not the latter, as per: https://market.nativescript.org/plugins/nativescript-aws-sdk
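A sketch of the suggested fix (assumptions: the same config object as the poster's, with init done exactly once at app startup):
// app.js -- init once, right after the import
import { S3 } from 'nativescript-aws-sdk/s3';

S3.init({
  endPoint: '/images/person',
  accessKey: config.AWS_ACCESS_KEY,
  secretKey: config.AWS_SECRET_KEY,
  type: 'static',
});

// Photo.vue -- no second init and no local require(); just construct an instance
import { S3 } from 'nativescript-aws-sdk/s3';
const s3 = new S3(); // picks up the config from app.js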