I am using Meteor.js with an Amazon S3 bucket for uploading and storing photos.
I am using the CollectionFS package and Meteor-CollectionFS/packages/s3/.
However, no error or response is displayed when I try to upload a file.
Client side event handler:
'change .fileInput': function(e, t) {
  FS.Utility.eachFile(e, function(file) {
    Images.insert(file, function (err, fileObj) {
      if (err) {
        console.log(err);
      } else {
        console.log("fileObj id: " + fileObj._id);
        //Meteor.users.update(userId, {$set: imagesURL});
      }
    });
  });
}
Client side declaration:
var imageStore = new FS.Store.S3("imageStore");

Images = new FS.Collection("images", {
  stores: [imageStore],
  filter: {
    allow: {
      contentTypes: ['image/*']
    }
  }
});
Server side:
var imageStore = new FS.Store.S3("imageStore", {
  accessKeyId: "xxxx",
  secretAccessKey: "xxxx",
  bucket: "mybucket"
});

Images = new FS.Collection("images", {
  stores: [imageStore],
  filter: {
    allow: {
      contentTypes: ['image/*']
    }
  }
});
Does anyone have any idea what is happening?
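One thing worth double-checking (this is an assumption, since that part of your code is not shown): client-side inserts into a CollectionFS collection are denied unless allow rules are defined on the server, and a missing rule can look like a silent failure. A minimal sketch of what that would be on the server:

Images.allow({
  insert: function (userId, doc) {
    return true; // tighten this check for production
  },
  update: function (userId, doc) {
    return true;
  }
});

If rules like these are already in place, the err argument of the insert callback is the first place to look.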
I am trying to upload a lot of files from S3 to IPFS via Pinata. I haven't found anything like that in the Pinata documentation.
This is my solution, using the form-data library. I haven't tested it yet (I will do so soon; I need to code a few other things first).
Is this a correct approach? Has anyone done something similar?
async uploadImagesFolder(
  items: ItemDocument[],
  bucket?: string,
  path?: string,
) {
  try {
    // Build one multipart form containing every S3 object stream (FormData from the form-data package)
    const form = new FormData();
    for (const item of items) {
      const file = getObjectStream(item.tokenURI, bucket, path);
      form.append('file', file, {
        filename: item.tokenURI,
      });
    }
    console.log(`Uploading files to IPFS`);
    const pinataOptions: PinataOptions = {
      cidVersion: 1,
    };
    const result = await pinata.pinFileToIPFS(form, {
      pinataOptions,
    });
    console.log(`Piñata Response:`, JSON.stringify(result, null, 2));
    return result.IpfsHash;
  } catch (e) {
    console.error(e);
  }
}
I had the same problem.
So, I found this: https://medium.com/pinata/stream-files-from-aws-s3-to-ipfs-a0e23ffb7ae5
But, if I am not wrong, the article uses a version different from the AWS SDK for JavaScript v3 (nowadays the most recent: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
This is for the client side with TypeScript.
If you are on that version, this code snippet works for me:
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3'

export const getStreamObjectInAwsS3 = async (data: YourParamsType) => {
  try {
    const BUCKET = data.bucketTarget
    const KEY = data.key
    const client = new S3Client({
      region: 'your-region',
      credentials: {
        accessKeyId: 'your-access-key',
        secretAccessKey: 'secret-key'
      }
    })
    const resource = await client.send(new GetObjectCommand({
      Bucket: BUCKET,
      Key: KEY
    }))
    const response = resource.Body
    if (response) {
      // In the browser the v3 SDK Body exposes transformToByteArray(); wrap the bytes in a Blob
      return new Response(await response.transformToByteArray()).blob()
    }
    return null
  } catch (error) {
    return null
  }
}
With the previous code you can get the Blob object; you can then pass it to the method below and get the resource URL through the Pinata API:
import axios from 'axios'

export const uploadFileToIPFS = async (file: Blob) => {
  const url = `https://api.pinata.cloud/pinning/pinFileToIPFS`
  const data = new FormData()
  data.append('file', file)
  try {
    // the FormData is already passed as the request body
    const response = await axios.post(url, data, {
      maxBodyLength: Infinity,
      headers: {
        pinata_api_key: 'your-api',
        pinata_secret_api_key: 'your-secret'
      }
    })
    return {
      success: true,
      pinataURL: `https://gateway.pinata.cloud/ipfs/${ response.data.IpfsHash }`
    }
  } catch (error) {
    console.log(error)
    return null
  }
}
I found this solution in that nice article, and you can explore other implementations there as well (including the Node.js side).
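For completeness, here is a rough Node.js-side sketch of the same streaming idea, assuming the @aws-sdk/client-s3 and @pinata/sdk packages; pinS3ObjectToIPFS, the region and the credentials are placeholders of mine, and depending on the SDK version the Pinata client is created with or without new. I have not tested this exact snippet.

const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const pinataSDK = require('@pinata/sdk');

const s3 = new S3Client({ region: 'your-region' });
const pinata = new pinataSDK('your-api-key', 'your-secret-api-key');

// Hypothetical helper: stream one S3 object into Pinata without buffering it all in memory.
const pinS3ObjectToIPFS = async (bucket, key) => {
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  // In Node the Body is a readable stream; some Pinata SDK versions read the
  // file name from the stream's `path` property (assumption), so set it explicitly.
  Body.path = key;
  const result = await pinata.pinFileToIPFS(Body, {
    pinataMetadata: { name: key },
    pinataOptions: { cidVersion: 1 },
  });
  return result.IpfsHash;
};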
How do I automatically import JSON data from an S3 bucket into DynamoDB using Node.js, DynamoDB, and AWS Lambda?
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'raj',
  frameworkVersion: '2',
  custom: {
    webpack: {
      webpackConfig: './webpack.config.js',
      includeModules: true,
    },
  },
  plugins: ['serverless-webpack'],
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x',
    profile: 'server',
    apiGateway: {
      minimumCompressionSize: 1024,
      shouldStartNameWithService: true,
    },
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
    },
    lambdaHashingVersion: '20201221',
  },
  // import the function via paths
  functions: {
    messageAdd: {
      handler: 'src/now.handler',
      events: [
        {
          http: {
            path: 'addData',
            method: 'POST',
            cors: true,
          },
        },
      ],
    },
  },
};

module.exports = serverlessConfiguration;
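Note that the config above only exposes an HTTP endpoint. For the import to run automatically when new JSON files land in the bucket, the function also needs an S3 trigger. A minimal sketch of that part of the functions block, with a placeholder bucket name (existing: true is what Serverless Framework expects when the bucket already exists):

functions: {
  importJson: {
    handler: 'src/now.handler',
    events: [
      {
        s3: {
          bucket: 'your-source-bucket', // placeholder
          event: 's3:ObjectCreated:*',
          rules: [{ suffix: '.json' }],
          existing: true,
        },
      },
    ],
  },
},

The Lambda handler itself then looks like this: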
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();
const ddbTable = "s3todyb"; // table name at module scope so ddbLoader below can use it

// The Lambda handler
exports.handler = async (event) => {
  // In Lambda you normally rely on the execution role and default region,
  // but the keys can be set explicitly if needed:
  AWS.config.update({
    region: 'us-east-1', // use appropriate region
    accessKeyId: '', // use your access key
    secretAccessKey: '' // use your secret key
  });
  const s3 = new AWS.S3();
  console.log(JSON.stringify(event, null, 2));
  console.log('Using DDB table: ', ddbTable);

  await Promise.all(
    event.Records.map(async (record) => {
      try {
        console.log('Incoming record: ', record);
        // Get the original text from the object in the incoming record
        const originalText = await s3.getObject({
          Bucket: record.s3.bucket.name,
          Key: record.s3.object.key
        }).promise();
        // Upload JSON to DynamoDB
        const jsonData = JSON.parse(originalText.Body.toString('utf-8'));
        await ddbLoader(jsonData);
      } catch (err) {
        console.error(err);
      }
    })
  );
};
// Load JSON data to DynamoDB table
const ddbLoader = async (data) => {
  // Separate into batches for upload
  let batches = [];
  const BATCH_SIZE = 25;

  while (data.length > 0) {
    batches.push(data.splice(0, BATCH_SIZE));
  }

  console.log(`Total batches: ${batches.length}`);
  let batchCount = 0;

  // Save each batch
  await Promise.all(
    batches.map(async (item_data) => {
      // Set up the params object for the DDB call
      const params = {
        RequestItems: {}
      };
      params.RequestItems[ddbTable] = [];

      item_data.forEach(item => {
        for (let key of Object.keys(item)) {
          // An AttributeValue may not contain an empty string
          if (item[key] === '')
            delete item[key];
        }
        // Build params
        params.RequestItems[ddbTable].push({
          PutRequest: {
            Item: {
              ...item
            }
          }
        });
      });

      // Push to DynamoDB in batches
      try {
        batchCount++;
        console.log('Trying batch: ', batchCount);
        const result = await docClient.batchWrite(params).promise();
        console.log('Success: ', result);
      } catch (err) {
        console.error('Error: ', err);
      }
    })
  );
};
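One case the loader above does not handle is DynamoDB returning UnprocessedItems when part of a batch is throttled. A small, untested sketch of a retry wrapper around the same docClient (batchWriteWithRetry, the delay, and the attempt count are my own placeholders):

// Retry any items DynamoDB reports back as unprocessed (e.g. due to throttling).
const batchWriteWithRetry = async (params, attempts = 3) => {
  let result = await docClient.batchWrite(params).promise();
  while (attempts > 0 && result.UnprocessedItems && Object.keys(result.UnprocessedItems).length > 0) {
    attempts--;
    await new Promise(resolve => setTimeout(resolve, 500)); // simple fixed backoff
    result = await docClient.batchWrite({ RequestItems: result.UnprocessedItems }).promise();
  }
  return result;
};

Calling batchWriteWithRetry(params) in place of docClient.batchWrite(params).promise() inside the map would keep the rest of the loader unchanged.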
I am using the Adobe PDF Embed API and want to save annotated PDFs from a browser window to Firebase Storage.
The file is uploaded to Firebase, but it is corrupt and only about 9 bytes in size.
Please see the code below. Is there something I need to do with "content" in the callback?
Attached is also a picture of the console.log output.
const previewConfig = {
  embedMode: "FULL_WINDOW",
  showAnnotationTools: true,
  showDownloadPDF: true,
  showPrintPDF: true,
  showPageControls: true
}
document.addEventListener("adobe_dc_view_sdk.ready", function () {
  var adobeDCView = new AdobeDC.View({
    clientId: "2eab88022c63447f8796b580d5058e71",
    divId: "adobe-dc-view"
  });
  adobeDCView.previewFile({
    content: { location: { url: decoded } },
    metaData: { fileName: decodedTitle }
  }, previewConfig);
  /* Register save callback */
  adobeDCView.registerCallback(
    AdobeDC.View.Enum.CallbackType.SAVE_API,
    async function (metaData, content, options) {
      console.log(metaData);
      console.log(content);
      var meta = {
        contentType: 'application/pdf'
      };
      var pdfRef = storageRef.child(decodedTitle);
      var upload = await pdfRef.put(content, meta);
      console.log('Uploaded a file!');
      return new Promise(function (resolve, reject) {
        /* Dummy implementation of Save API, replace with your business logic */
        setTimeout(function () {
          var response = {
            code: AdobeDC.View.Enum.ApiResponseCode.SUCCESS,
            data: {
              metaData: Object.assign(metaData, { updatedAt: new Date().getTime() })
            },
          };
          resolve(response);
        }, 2000);
      });
    }
  );
});
I was able to use putString() in Firebase Storage to upload the PDF in the end.
Before, I was using put(), which ended up producing a corrupt file.
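A rough sketch of how that can be wired into the SAVE_API callback, assuming the v8-style Firebase Storage API used above and assuming the callback's content argument is an ArrayBuffer (it has to be converted to base64 before calling putString); not tested against this exact setup:

adobeDCView.registerCallback(
  AdobeDC.View.Enum.CallbackType.SAVE_API,
  async function (metaData, content, options) {
    // content is assumed to be an ArrayBuffer; turn it into a base64 string for putString()
    var bytes = new Uint8Array(content);
    var binary = '';
    for (var i = 0; i < bytes.length; i++) {
      binary += String.fromCharCode(bytes[i]);
    }
    var base64 = btoa(binary);

    var pdfRef = storageRef.child(decodedTitle);
    await pdfRef.putString(base64, 'base64', { contentType: 'application/pdf' });

    return {
      code: AdobeDC.View.Enum.ApiResponseCode.SUCCESS,
      data: {
        metaData: Object.assign(metaData, { updatedAt: new Date().getTime() })
      }
    };
  }
);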
I have added metadata by referring to How to store files with meta data in LoopBack?
Now I have to check whether the file type is CSV before uploading it to the server.
Right now, I delete the uploaded file if it is not valid. Is there a better way to solve this?
let filePath;

File.app.models.container.upload(ctx.req, ctx.result, options, function(err, fileObj) {
  if (err) {
    return callback(err);
  }
  let fileInfo = fileObj.files.file[0];
  filePath = path.join("server/storage", fileInfo.container, fileInfo.name);
  if (fileInfo.type === "text/csv") {
    File.create({
      name: fileInfo.name,
      type: fileInfo.type,
      container: fileInfo.container,
      url: path.join(CONTAINERS_URL, fileInfo.container, "/download/", fileInfo.name)
    }, function(err, obj) {
      if (err) {
        return callback(err);
      }
      callback(null, filePath);
    });
  } else {
    fs.unlinkSync(filePath); // delete the upload if it is not csv
    let error = new Error();
    error.message = "Please upload only csv file";
    error.name = "InvalidFile";
    callback(error);
  }
});
Here is what I've done.
In middleware.json:
"parse": {
"body-parser#json": {},
"body-parser#urlencoded": {"params": { "extended": true }}
},
In server/server.js:
var loopback = require('loopback');
var bodyParser = require('body-parser');
var multer = require('multer');
var boot = require('loopback-boot');

var app = module.exports = loopback();
app.use(bodyParser.json()); // for parsing application/json
app.use(bodyParser.urlencoded({ extended: true })); // for parsing application/x-www-form-urlencoded
app.use(multer()); // for parsing multipart/form-data
And the remote method:
File.remoteMethod(
  'upload', {
    description: 'Uploads a file with metadata',
    accepts: {
      arg: 'ctx',
      type: 'object',
      http: function(ctx) {
        console.log(ctx.req.files);
        return ctx;
      }
    },
    returns: { arg: 'fileObject', type: 'object', root: true },
    http: { verb: 'post' }
  }
);
Now ctx can give you the MIME type.
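With multer in place you could, for example, reject non-CSV uploads inside the remote method before handing anything to the storage container. A rough sketch only; the field name and the shape of ctx.req.files are assumptions and depend on the multer version you use:

File.upload = function (ctx, options, callback) {
  // multer puts the parsed uploads on ctx.req.files (the exact structure varies by multer version)
  var uploaded = ctx.req.files && ctx.req.files.file && ctx.req.files.file[0];
  if (!uploaded || uploaded.mimetype !== 'text/csv') {
    var error = new Error('Please upload only csv file');
    error.name = 'InvalidFile';
    return callback(error);
  }
  // The file is CSV, so continue with your existing save logic
  // (note: if multer has already consumed the multipart body, the storage
  // component may need the parsed file rather than the raw request).
  callback(null, { name: uploaded.originalname, type: uploaded.mimetype });
};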
Update 1:
There is another, easier option.
You could define the restriction in datasources.local.js as below; I tested it with the filesystem provider.
module.exports = {
  container: {
    root: "./upload",
    acl: 'public-read',
    allowedContentTypes: ['image/jpg', 'image/jpeg', 'image/png'],
    maxFileSize: 10 * 1024 * 1024
  }
};
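For the CSV case in the original question, the same idea would presumably just swap the content types (the root path and size limit below are placeholders):

module.exports = {
  container: {
    root: './upload',
    acl: 'public-read',
    // only CSV uploads are accepted; everything else is rejected by the storage component
    allowedContentTypes: ['text/csv'],
    maxFileSize: 10 * 1024 * 1024
  }
};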
I am trying to upload files to a Dropbox app using the CollectionFS/Meteor-CollectionFS package with the cfs:dropbox adapter, and my problem is that the uploaded files are 0 bytes. I am not sure what I am missing or doing wrong here.
On server:
var registrationImageStorage = new FS.Store.Dropbox("registrationStorage", {
  key: "****",
  secret: "****",
  token: "****",
  transformWrite: function (fileObj, readStream, writeStream) {
    gm(readStream, fileObj.name()).stream().pipe(writeStream);
  }
});

RegistrationImages = new FS.Collection("registrations", {
  stores: [registrationImageStorage],
  filter: {
    allow: {
      contentTypes: ['image/*']
    }
  }
});
RegistrationImages.allow({
  insert: function () {
    return true;
  },
  update: function () {
    return true;
  }
});
On client:
var registrationImageStorage = new FS.Store.Dropbox("registrationStorage");

RegistrationImages = new FS.Collection("registrations", {
  stores: [registrationImageStorage],
  filter: {
    allow: {
      contentTypes: ['image/*']
    }
  }
});
On client to start the upload:
var file = new FS.File($('#badgeImage').get(0).files[0]);

RegistrationImages.insert(file, function (err, fileObj) {
  if (err) {
    console.log(err);
  } else {
    console.log(fileObj);
  }
});
OK, I did not need this part of the code, and after removing it the upload worked:
transformWrite: function (fileObj, readStream, writeStream) {
  gm(readStream, fileObj.name()).stream().pipe(writeStream);
}
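If you do need the transform (for example to resize images on upload), the usual gotchas are that gm requires GraphicsMagick or ImageMagick to be installed on the server and that errors inside transformWrite can surface as empty files. A rough sketch of the kind of transform commonly used with CollectionFS, not tested against this exact setup:

transformWrite: function (fileObj, readStream, writeStream) {
  // requires GraphicsMagick/ImageMagick on the server; the resize is just an example transform
  gm(readStream, fileObj.name())
    .resize('250', '250')
    .stream()
    .pipe(writeStream);
}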