AWS Lambda getting file from S3 then using it - amazon-s3

I need to pull two files from my S3 bucket and use them in an https.Agent, and I am struggling to find the correct method. I'm using SDK v3, which returns a ReadableStream.
const { Readable } = require('stream');
const { createWriteStream } = require('fs');
const https = require('https');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

exports.handler = async (event, context) => {
  let options;
  let httpsAgent;
  try {
    console.log('> Getting content from S3');
    const s3Client = new S3Client({
      region: 'us-east-2',
    });
    let command = new GetObjectCommand({
      Key: 'lib/myCert.crt',
      Bucket: 'mybucket',
    });
    console.log('>>Getting Cert command');
    const hcert = await s3Client.send(command);
    command = new GetObjectCommand({
      Key: 'lib/myKey.pem',
      Bucket: 'mybucket',
    });
    console.log('Get Key');
    const ckey = await s3Client.send(command);
    console.log('initializing Agent');
    httpsAgent = new https.Agent({
      key: ckey.Body.pipe(createWriteStream('/tmp/myKey.pem')),
      cert: hcert.Body.pipe(createWriteStream('/tmp/myCert.crt')),
      // key: fs.readFileSync('./tmp/myKey.pem'),
      // cert: fs.readFileSync('./tmp/myCert.crt'),
      keepAlive: true
    });
With the above code I get a ParameterNotFound error. I have also tried writing the files to disk and then reading them back (via fs.readFileSync), but I get the same issue.
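A minimal sketch of the usual fix, assuming the Lambda execution role can read both objects: SDK v3 returns Body as a readable stream, so its contents have to be materialized first (recent SDK versions expose Body.transformToString() for this), and https.Agent should then be given the PEM text rather than the result of pipe():

const https = require('https');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3Client = new S3Client({ region: 'us-east-2' });

// Fetch an S3 object and return its contents as a string.
// transformToString() exists on recent SDK v3 releases; on older ones,
// collect the stream chunks into a Buffer instead.
const getObjectAsString = async (Bucket, Key) => {
  const { Body } = await s3Client.send(new GetObjectCommand({ Bucket, Key }));
  return Body.transformToString();
};

exports.handler = async () => {
  const cert = await getObjectAsString('mybucket', 'lib/myCert.crt');
  const key = await getObjectAsString('mybucket', 'lib/myKey.pem');

  // https.Agent expects the PEM contents as a string or Buffer.
  const httpsAgent = new https.Agent({ cert, key, keepAlive: true });
  // ... use httpsAgent with your HTTP client of choice
};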

S3 to IPFS from Pinata

I am trying to upload a lot of files from S3 to IPFS via Pinata, and I haven't found anything like that in the Pinata documentation.
This is my attempt, using the form-data library. I haven't tested it yet (I will do it soon; I still need to code a few things).
Is this a correct approach? Has anyone done something similar?
async uploadImagesFolder(
  items: ItemDocument[],
  bucket?: string,
  path?: string,
) {
  try {
    const form = new FormData();
    for (const item of items) {
      const file = getObjectStream(item.tokenURI, bucket, path);
      form.append('file', file, {
        filename: item.tokenURI,
      });
    }
    console.log(`Uploading files to IPFS`);
    const pinataOptions: PinataOptions = {
      cidVersion: 1,
    };
    const result = await pinata.pinFileToIPFS(form, {
      pinataOptions,
    });
    console.log(`Piñata Response:`, JSON.stringify(result, null, 2));
    return result.IpfsHash;
  } catch (e) {
    console.error(e);
  }
}
I had the same problem.
I found this article: https://medium.com/pinata/stream-files-from-aws-s3-to-ipfs-a0e23ffb7ae5
However, if I'm not wrong, the article uses a version of the AWS SDK for JavaScript other than v3 (nowadays the most recent: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
This is for the client side with TypeScript. If you are on that SDK version, the following snippet works for me:
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'

export const getStreamObjectInAwsS3 = async (data: YourParamsType) => {
  try {
    const BUCKET = data.bucketTarget
    const KEY = data.key
    const client = new S3Client({
      region: 'your-region',
      credentials: {
        accessKeyId: 'your-access-key',
        secretAccessKey: 'secret-key'
      }
    })
    const resource = await client.send(new GetObjectCommand({
      Bucket: BUCKET,
      Key: KEY
    }))
    const response = resource.Body
    if (response) {
      return new Response(await response.transformToByteArray()).blob()
    }
    return null
  } catch (error) {
    return null
  }
}
With the previous code you get a Blob object, which you can pass to the following method to pin the file through the Pinata API and get the resource URL:
import axios from 'axios'

export const uploadFileToIPFS = async (file: Blob | File) => {
  const url = `https://api.pinata.cloud/pinning/pinFileToIPFS`
  const data = new FormData()
  data.append('file', file)
  try {
    const response = await axios.post(url, data, {
      maxBodyLength: Infinity,
      headers: {
        pinata_api_key: 'your-api',
        pinata_secret_api_key: 'your-secret'
      }
    })
    return {
      success: true,
      pinataURL: `https://gateway.pinata.cloud/ipfs/${response.data.IpfsHash}`
    }
  } catch (error) {
    console.log(error)
    return null
  }
}
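A hypothetical usage sketch tying the two helpers together (the bucket name, key, and the field names on YourParamsType are illustrative, not from the original answer):

const blob = await getStreamObjectInAwsS3({ bucketTarget: 'my-bucket', key: 'images/1.png' })
if (blob) {
  const result = await uploadFileToIPFS(blob)
  console.log(result?.pinataURL)
}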
I found this solution in a nice article, where you can also explore other implementations (including the Node.js side).
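For the Node.js side, a rough, untested sketch (assuming SDK v3 plus the form-data and axios packages) would be to append the GetObject Body, which is a Readable stream in Node.js, straight to the multipart form instead of buffering it into a Blob:

const axios = require('axios');
const FormData = require('form-data');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'your-region' });

const pinS3ObjectToIPFS = async (bucket, key) => {
  // In Node.js, Body is a Readable stream, so it can be appended directly.
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));

  const form = new FormData();
  form.append('file', Body, { filename: key });

  const response = await axios.post(
    'https://api.pinata.cloud/pinning/pinFileToIPFS',
    form,
    {
      maxBodyLength: Infinity,
      headers: {
        ...form.getHeaders(),
        pinata_api_key: 'your-api',
        pinata_secret_api_key: 'your-secret',
      },
    }
  );
  return response.data.IpfsHash;
};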

import automatically s3 bucket data in DynamoDB

How can I automatically import JSON data from an S3 bucket into DynamoDB using Node.js, DynamoDB, and AWS Lambda?
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'raj',
  frameworkVersion: '2',
  custom: {
    webpack: {
      webpackConfig: './webpack.config.js',
      includeModules: true,
    },
  },
  plugins: ['serverless-webpack'],
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x',
    profile: 'server',
    apiGateway: {
      minimumCompressionSize: 1024,
      shouldStartNameWithService: true,
    },
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
    },
    lambdaHashingVersion: '20201221',
  },
  // import the function via paths
  functions: {
    messageAdd: {
      handler: "src/now.handler",
      events: [
        {
          http: {
            path: 'addData',
            method: 'POST',
            cors: true,
          }
        }
      ]
    }
  },
};

module.exports = serverlessConfiguration;
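The configuration above only wires the handler to an HTTP POST endpoint. For the import to run automatically whenever a JSON file lands in the bucket, one option (a sketch, with an illustrative bucket name) is to declare an S3 event instead:

functions: {
  importJson: {
    handler: 'src/now.handler',
    events: [
      {
        s3: {
          bucket: 'my-json-bucket',        // illustrative bucket name
          event: 's3:ObjectCreated:*',
          rules: [{ suffix: '.json' }],    // only trigger for .json keys
        },
      },
    ],
  },
},

The handler below already iterates over S3 event records, so it fits this kind of trigger without changes.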
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();
const ddbTable = "s3todyb"; // table name at module scope so ddbLoader can see it

// The Lambda handler
exports.handler = async (event) => {
  AWS.config.update({
    region: 'us-east-1', // use appropriate region
    accessKeyId: '', // use your access key
    secretAccessKey: '' // use your secret key
  });
  const s3 = new AWS.S3();
  console.log(JSON.stringify(event, null, 2));
  console.log('Using DDB table: ', ddbTable);

  await Promise.all(
    event.Records.map(async (record) => {
      try {
        console.log('Incoming record: ', record);
        // Get original text from object in incoming event
        // (use the current record, not event.Records[0], so every record is processed)
        const originalText = await s3.getObject({
          Bucket: record.s3.bucket.name,
          Key: record.s3.object.key
        }).promise();
        // Upload JSON to DynamoDB
        const jsonData = JSON.parse(originalText.Body.toString('utf-8'));
        await ddbLoader(jsonData);
      } catch (err) {
        console.error(err);
      }
    })
  );
};
// Load JSON data to DynamoDB table
const ddbLoader = async (data) => {
  // Separate into batches for upload
  let batches = [];
  const BATCH_SIZE = 25;

  while (data.length > 0) {
    batches.push(data.splice(0, BATCH_SIZE));
  }
  console.log(`Total batches: ${batches.length}`);

  let batchCount = 0;

  // Save each batch
  await Promise.all(
    batches.map(async (item_data) => {
      // Set up the params object for the DDB call
      const params = {
        RequestItems: {}
      };
      params.RequestItems[ddbTable] = [];

      item_data.forEach(item => {
        for (let key of Object.keys(item)) {
          // An AttributeValue may not contain an empty string
          if (item[key] === '')
            delete item[key];
        }
        // Build params
        params.RequestItems[ddbTable].push({
          PutRequest: {
            Item: {
              ...item
            }
          }
        });
      });

      // Push to DynamoDB in batches
      try {
        batchCount++;
        console.log('Trying batch: ', batchCount);
        const result = await docClient.batchWrite(params).promise();
        console.log('Success: ', result);
      } catch (err) {
        console.error('Error: ', err);
      }
    })
  );
};
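One caveat with batchWrite: DynamoDB may return some items as UnprocessedItems (for example under throttling), and the code above would silently drop them. A hedged sketch of a simple retry wrapper around the call:

// Retry a batchWrite until DynamoDB reports no unprocessed items.
const batchWriteWithRetry = async (params, maxAttempts = 5) => {
  let request = params;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await docClient.batchWrite(request).promise();
    const unprocessed = result.UnprocessedItems || {};
    if (Object.keys(unprocessed).length === 0) return;
    console.log(`Retrying unprocessed items (attempt ${attempt})`);
    request = { RequestItems: unprocessed };
  }
  throw new Error('Unprocessed items remained after retries');
};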

upload a pdf to s3 from frontend -> node js -> s3

Frontend app:

const readURL = (input) => {
  if (input.files && input.files[0]) {
    let reader = new FileReader();
    reader.fileName = input.files[0].name;
    reader.onload = async function (e) {
      uploadPhoto(reader, e);
    };
    reader.readAsDataURL(input.files[0]);
  }
};

const uploadPhoto = (reader, e) => {
  let client = new ServerData();
  client.put("/images/upload", {
    imageBase64: reader.result,
    name: e.target.fileName,
    typeOfUpload: "xxxx-bank",
  }).then(uploadResult => {
    // ...
  })
};
Backend Node.js:

fileContent = base64Image // directly from frontend
fileContent = Buffer.from(base64Image, 'base64'); // tried this as well

let params = {
  Bucket: 'bucket',
  Key: 'name.pdf',
  Body: fileContent,
  ContentEncoding: 'base64',
  ACL: 'private'
}

let upload = new AWS.S3.ManagedUpload({
  params: params
});
Notice the fileContent. For images it works, and I'm using:

Buffer.from(base64Image.replace(/^data:image\/\w+;base64,/, ""), 'base64');

The solution was to make the regex match any data-URL prefix, not just images:

Buffer.from(base64Image.replace(/^data:.+;base64,/, ""), 'base64');
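Put back into the upload code above, a minimal sketch (assuming aws-sdk v2 as in the question; the ContentType value is an assumption for PDF uploads, and ContentEncoding should not be needed once Body is a decoded Buffer):

const AWS = require('aws-sdk');

// base64Image is the data URL string sent by the frontend.
const uploadPdf = async (base64Image) => {
  const fileContent = Buffer.from(
    base64Image.replace(/^data:.+;base64,/, ''), // strip any data-URL prefix, not just image/*
    'base64'
  );

  const params = {
    Bucket: 'bucket',               // bucket and key taken from the question
    Key: 'name.pdf',
    Body: fileContent,
    ContentType: 'application/pdf', // assumption: lets clients treat the object as a PDF
    ACL: 'private',
  };

  const upload = new AWS.S3.ManagedUpload({ params });
  return upload.promise();
};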

multer file upload - how to get a value from multer in route?

I'm uploading a file using multer with Express.
I'd like to access a value from multer's storage object inside the route.
How can I do that?
Multer configuration (right now I only know how to log the key):
const aws = require("aws-sdk");
const multer = require("multer");
const multerS3 = require("multer-s3");

function configureUpload() {
  const s3 = new aws.S3({ /* ...my credentials... */ });
  const upload = multer({
    storage: multerS3({
      s3: s3,
      bucket: process.env.S3_BUCKET_NAME,
      metadata: (req, file, cb) => cb(null, { fieldName: file.fieldname }),
      key: (req, file, cb) => {
        const key = `${new Date().toISOString()}-${file.originalname}`;
        return cb(console.log("KEY: ", key), key); // The string I need to access in route
      },
    }),
  });
  return upload;
}
The route:

const express = require("express");
const Person = require("../../../db/models/person");
const configureUpload = require("../../../configureUpload");
const router = express.Router();

// Saving to MongoDB with mongoose
router.post("/", configureUpload().any(), async (req, res) => {
  Person.create({
    ...req.body,
    files: [] // I want to add the string in multer.storage.key to this array
  })
    .then((person) => {
      // ...
    })
    .catch((err) => {
      // ...
    });
});

module.exports = router;
This is an example of what Tarique Akhtar Ansari already said: add your key to the req object so that you can access the key's value in your controller/route, like so:
const aws = require("aws-sdk");
const multer = require("multer");
const multerS3 = require("multer-s3");

function configureUpload() {
  const s3 = new aws.S3({ /* ...my credentials... */ });
  const upload = multer({
    storage: multerS3({
      s3: s3,
      bucket: process.env.S3_BUCKET_NAME,
      metadata: (req, file, cb) => cb(null, { fieldName: file.fieldname }),
      key: (req, file, cb) => {
        const key = `${new Date().toISOString()}-${file.originalname}`;
        req.key = key; // added the key to the req object
        cb(null, key); // still hand the key to multer-s3 so it is used for the upload
      },
    }),
  });
  return upload;
}
Accessing the value of the key inside your controller or route:

const express = require("express");
const Person = require("../../../db/models/person");
const configureUpload = require("../../../configureUpload");
const router = express.Router();

// Saving to MongoDB with mongoose
router.post("/", configureUpload().any(), async (req, res) => {
  console.log('here is the value of your key', req.key); // it's that simple
  Person.create({
    ...req.body,
    files: [] // I want to add the string in multer.storage.key to this array
  })
    .then((person) => {
      // ...
    })
    .catch((err) => {
      // ...
    });
});

module.exports = router;
You can simply add req.key = keyValue; then you can access it in the next route handler as req.key.
You can also access the req.file or req.files object in the route.
In Express everything is middleware, so you can easily pass values along and access them in the next middleware.
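As a concrete illustration of the last point: multer-s3 also attaches the generated key (along with bucket, location, etc.) to each entry in req.files, so with .any() the keys can usually be read straight from the file objects:

router.post("/", configureUpload().any(), async (req, res) => {
  // Each file object is populated by multer-s3 with key, bucket, location, ...
  const keys = (req.files || []).map((file) => file.key);

  try {
    const person = await Person.create({
      ...req.body,
      files: keys,
    });
    res.json(person);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});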

Cannot upload image to s3 using serverless framework but it work in offline (buffer issue)

I'm trying to deploy a Lambda function that lets me upload a picture to S3.
The Lambda works well offline, but when I deploy it to AWS, the function doesn't work.
The first error I encountered was this one:
ERROR (node:7) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
So I followed the recommendation and used the Buffer.from() method instead, but that doesn't work either. The Lambda runs until the timeout.
Can someone tell me where I went wrong, or suggest another solution?
Below my lambda function :
const AWS = require("aws-sdk");
const Busboy = require("busboy");
const uuidv4 = require("uuid/v4");
require("dotenv").config();
AWS.config.update({
accessKeyId: process.env.ACCESS_KEY_ID,
secretAccessKey: process.env.SECRET_ACCESS_KEY,
subregion: process.env.SUB_REGION
});
const s3 = new AWS.S3();
const getContentType = event => {
// see the second block of codes
};
const parser = event => {
// see the third block of codes
};
module.exports.main = (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = false;
const uuid = uuidv4();
const uploadFile = async (image, uuid) =>
new Promise(() => {
// const bitmap = new Buffer(image, "base64"); // <====== deprecated
const bitmap = Buffer.from(image, "base64"); // <======== problem here
const params = {
Bucket: "my_bucket",
Key: `${uuid}.jpeg`,
ACL: "public-read",
Body: bitmap,
ContentType: "image/jpeg"
};
s3.putObject(params, function(err, data) {
if (err) {
return callback(null, "ERROR");
}
return callback(null, "SUCCESS");
});
});
parser(event).then(() => {
uploadFile(event.body.file, uuid);
});
};
getContentType():

const getContentType = event => {
  const contentType = event.headers["content-type"];
  if (!contentType) {
    return event.headers["Content-Type"];
  }
  return contentType;
};
parser():

const parser = event =>
  new Promise((resolve, reject) => {
    const busboy = new Busboy({
      headers: {
        "content-type": getContentType(event)
      }
    });
    const result = {};

    busboy.on("file", (fieldname, file, filename, encoding, mimetype) => {
      file.on("data", data => {
        result.file = data;
      });
      file.on("end", () => {
        result.filename = filename;
        result.contentType = mimetype;
      });
    });

    busboy.on("field", (fieldname, value) => {
      result[fieldname] = value;
    });

    busboy.on("error", error => reject(error));

    busboy.on("finish", () => {
      event.body = result;
      resolve(event);
    });

    busboy.write(event.body, event.isBase64Encoded ? "base64" : "binary");
    busboy.end();
  });
The deprecated Buffer constructor calls map to the new APIs as follows:

new Buffer(number)            // Old
Buffer.alloc(number)          // New

new Buffer(string)            // Old
Buffer.from(string)           // New

new Buffer(string, encoding)  // Old
Buffer.from(string, encoding) // New

new Buffer(...arguments)      // Old
Buffer.from(...arguments)     // New
You are using callbackWaitsForEmptyEventLoop, which basically makes the Lambda function think the work is not over yet. Also, you are wrapping the upload in a Promise but never resolving it. You can simplify this logic using the built-in .promise() helper in aws-sdk:
module.exports.main = async event => {
  const uuid = uuidv4();
  await parser(event); // not sure if this needs to be async or not. check

  const bitmap = Buffer.from(event.body.file, "base64");
  const params = {
    Bucket: "my_bucket",
    Key: `${uuid}.jpeg`,
    ACL: "public-read",
    Body: bitmap,
    ContentType: "image/jpeg"
  };

  const response = await s3.putObject(params).promise();
  return response;
};