Can't get AWS Lambda/DynamoDB API to work

This is my first time trying AWS DynamoDB and Lambda with Node.js. I followed the exact instructions in the AWS documentation here: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-dynamo-db.html#http-api-dynamo-db-attach-integrations
Here's the function code I used, taken from step 10 of the documentation:
const AWS = require("aws-sdk");

const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event, context) => {
  let body;
  let statusCode = 200;
  const headers = {
    "Content-Type": "application/json"
  };

  try {
    switch (event.routeKey) {
      case "DELETE /items/{id}":
        await dynamo
          .delete({
            TableName: "http-crud-tutorial-items",
            Key: {
              id: event.pathParameters.id
            }
          })
          .promise();
        body = `Deleted item ${event.pathParameters.id}`;
        break;
      case "GET /items/{id}":
        body = await dynamo
          .get({
            TableName: "http-crud-tutorial-items",
            Key: {
              id: event.pathParameters.id
            }
          })
          .promise();
        break;
      case "GET /items":
        body = await dynamo.scan({ TableName: "http-crud-tutorial-items" }).promise();
        break;
      case "PUT /items":
        let requestJSON = JSON.parse(event.body);
        await dynamo
          .put({
            TableName: "http-crud-tutorial-items",
            Item: {
              id: requestJSON.id,
              price: requestJSON.price,
              name: requestJSON.name
            }
          })
          .promise();
        body = `Put item ${requestJSON.id}`;
        break;
      default:
        throw new Error(`Unsupported route: "${event.routeKey}"`);
    }
  } catch (err) {
    statusCode = 400;
    body = err.message;
  } finally {
    body = JSON.stringify(body);
  }

  return {
    statusCode,
    body,
    headers
  };
};
But each time I call the API as it is structured in the documentation, I get 'Internal Server Error'. I have tried it from both the terminal and Postman, with the same error. I couldn't find any helpful links elsewhere to solve this. Is there something I need to do differently?

There are many reasons a Lambda would return a 500 exception to APIGW, for example:
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-lambda-stage-variable-500/
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-internal-server-error/
My suggestion is to strip back your function code so that it only returns a response to APIGW with no other logic, and test that. Then you can decide whether to work forwards or backwards from there.
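For example, a minimal stub like this (a sketch, not the tutorial's code) returns a well-formed HTTP API response and nothing else; if it still produces 'Internal Server Error', the problem is in the integration or permissions rather than in the function logic:

// Minimal stub handler: no DynamoDB, no routing, just a valid response.
exports.handler = async (event) => {
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ routeKey: event.routeKey })
  };
};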

Related

Cloudflare worker scheduled response API KV

So previously I tried to store a KV pair when the cron trigger was processed, and it worked.
Now I'm trying to store a KV pair based on an API response. I tried the following: https://gist.github.com/viggy28/522c4ed05e2bec051d3838ebaff27258 and Forward body from request to another url for Scheduled (CRON workers), but it doesn't seem to work.
What I'm attempting to do is save the access_token from the response in KV when the cron triggers, and use it in front-end code later on.
Attached is my current attempt:
addEventListener("scheduled", (event) => {
  console.log("cron trigger processed..", event);
  event.waitUntil(handleRequest());
});

async function handleRequest() {
  const url = "<request_url>";
  const body = { <credential> };
  const init = {
    body: JSON.stringify(body),
    method: "POST",
    headers: {
      "content-type": "application/json;charset=UTF-8"
    }
  };
  const response = await fetch(url, init);
  const results = await gatherResponse(response);
  // @ts-ignore
  AUTH.put("attempt1", results);
  // @ts-ignore
  AUTH.put("attempt2", new Response(results, init));
  return new Response(results, init);
}

/**
 * gatherResponse awaits and returns a response body as a string.
 * Use await gatherResponse(..) in an async function to get the response body.
 * @param {Response} response
 */
async function gatherResponse(response) {
  const { headers } = response
  const contentType = headers.get("content-type") || ""
  if (contentType.includes("application/json")) {
    return JSON.stringify(await response.json())
  } else if (contentType.includes("application/text")) {
    return await response.text()
  } else if (contentType.includes("text/html")) {
    return await response.text()
  } else {
    return await response.text()
  }
}
Neither attempt seems to work (the KV pair isn't saved):

// @ts-ignore
AUTH.put("attempt1", results);
// @ts-ignore
AUTH.put("attempt2", new Response(results, init));

Is there anything I'm missing?
The calls to NAMESPACE.put() return a Promise that should be awaited. Since handleRequest() can return before those writes complete (you're missing the await), I'd assume they're just being cancelled.
https://developers.cloudflare.com/workers/runtime-apis/kv/#writing-key-value-pairs
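A minimal sketch of the fix, reusing the url, init, gatherResponse, and AUTH binding from the question:

// Awaiting the KV writes keeps them within the promise passed to
// event.waitUntil(), so the runtime won't cancel them early.
async function handleRequest() {
  const response = await fetch(url, init);        // url/init as in the question
  const results = await gatherResponse(response);
  await AUTH.put("attempt1", results);            // NAMESPACE.put() returns a Promise
  return new Response(results, init);
}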

S3 to IPFS from Pinata

I am trying to upload a lot of files from S3 to IPFS via Pinata. I haven't found anything like that in the Pinata documentation.
This is my solution, using the form-data library. I haven't tested it yet (I will soon; I need to code some things first).
Is this a correct approach? Has anyone done something similar?
async uploadImagesFolder(
  items: ItemDocument[],
  bucket?: string,
  path?: string,
) {
  try {
    const form = new FormData();
    for (const item of items) {
      const file = getObjectStream(item.tokenURI, bucket, path);
      form.append('file', file, {
        filename: item.tokenURI,
      });
    }
    console.log(`Uploading files to IPFS`);
    const pinataOptions: PinataOptions = {
      cidVersion: 1,
    };
    const result = await pinata.pinFileToIPFS(form, {
      pinataOptions,
    });
    console.log(`Piñata Response:`, JSON.stringify(result, null, 2));
    return result.IpfsHash;
  } catch (e) {
    console.error(e);
  }
}
I had the same problem.
So, I found this: https://medium.com/pinata/stream-files-from-aws-s3-to-ipfs-a0e23ffb7ae5
But if I'm not wrong, the article uses a version different from the JavaScript AWS SDK v3 (nowadays the most recent: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/index.html).
This is for the client side with TypeScript. If you have this version, this code snippet works for me:
export const getStreamObjectInAwsS3 = async (data: YourParamsType) => {
  try {
    const BUCKET = data.bucketTarget
    const KEY = data.key
    const client = new S3Client({
      region: 'your-region',
      credentials: {
        accessKeyId: 'your-access-key',
        secretAccessKey: 'secret-key'
      }
    })
    const resource = await client.send(new GetObjectCommand({
      Bucket: BUCKET,
      Key: KEY
    }))
    const response = resource.Body
    if (response) {
      return new Response(await response.transformToByteArray()).blob()
    }
    return null
  } catch (error) {
    return null
  }
}
With the previous code you can get the Blob object; you can then pass it to this method to get the resource URL from the API:
export const uploadFileToIPFS = async (file: Response) => {
  const url = `https://api.pinata.cloud/pinning/pinFileToIPFS`
  const data = new FormData()
  data.append('file', file)
  try {
    const response = await axios.post(url, data, {
      maxBodyLength: Infinity,
      headers: {
        pinata_api_key: 'your-api',
        pinata_secret_api_key: 'your-secret'
      },
      data: data
    })
    return {
      success: true,
      pinataURL: `https://gateway.pinata.cloud/ipfs/${response.data.IpfsHash}`
    }
  } catch (error) {
    console.log(error)
    return null
  }
}
I found this solution in a nice article, and you can explore other implementations there (including the Node.js side).
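For the Node.js side, a rough sketch of the same streaming idea with SDK v3 and the form-data package (the bucket, key, and credential placeholders here are assumptions, not values from the article):

const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3')
const FormData = require('form-data')
const axios = require('axios')

// Stream an S3 object straight into Pinata's pinFileToIPFS endpoint.
async function pinS3ObjectToIPFS(bucket, key) {
  const s3 = new S3Client({ region: 'your-region' })
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }))
  const form = new FormData()
  form.append('file', Body, { filename: key }) // Body is a Readable stream in Node.js
  const response = await axios.post('https://api.pinata.cloud/pinning/pinFileToIPFS', form, {
    maxBodyLength: Infinity,
    headers: {
      ...form.getHeaders(),
      pinata_api_key: 'your-api',
      pinata_secret_api_key: 'your-secret'
    }
  })
  return response.data.IpfsHash
}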

Verify HMAC Hash Using Cloudflare Workers

I'm trying to verify an HMAC signature received from a webhook. The details of the webhook are here: https://cloudconvert.com/api/v2/webhooks#webhooks-events
It says that the HMAC is generated using hash_hmac (PHP) and is a SHA-256 hash of the body, which is JSON. An example received is:
c4faebbfb4e81db293801604d0565cf9701d9e896cae588d73ddfef3671e97d7
This looks like lowercase hexits.
I'm trying to use Cloudflare Workers to process the request; however, I can't verify the hash. My code is below:
const encoder = new TextEncoder()

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const contentType = request.headers.get('content-type') || ''
  const signature = request.headers.get('CloudConvert-Signature')
  let data
  await S.put('HEADER', signature)
  if (contentType.includes('application/json')) {
    data = await request.json()
    await S.put('EVENT', data.event)
    await S.put('TAG', data.job.tag)
    await S.put('JSON', JSON.stringify(data))
  }
  const key2 = await crypto.subtle.importKey(
    'raw',
    encoder.encode(CCSigningKey2),
    { name: 'HMAC', hash: 'SHA-256' },
    false,
    ['sign']
  )
  const signed2 = await crypto.subtle.sign(
    'HMAC',
    key2,
    encoder.encode(JSON.stringify(data))
  )
  await S.put('V22', btoa(String.fromCharCode(...new Uint8Array(signed2))))
  return new Response(null, {
    status: 204,
    headers: {
      'Cache-Control': 'no-cache'
    }
  })
}
This will generate a hash of:
e52613e6ecebdf98bb085f04ca1f91bf9a5cf1dc085f89dcaa3e5fbf5ebf1b06
I've tried using the crypto.subtle.verify method, but that didn't work.
Can anyone see any issues with the code? Or have done this successfully using Cloudflare Workers?
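(As an aside, the webhook's lowercase hexits are a hex encoding, whereas btoa produces base64; a minimal helper sketch for hex-encoding the signed bytes, assuming the signed2 ArrayBuffer above:)

// Convert an ArrayBuffer of HMAC bytes to lowercase hex
const toHex = (buffer) =>
  [...new Uint8Array(buffer)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('')
// e.g. await S.put('V22', toHex(signed2))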
I finally got this working using the verify method (I had previously tried it, but it didn't work). The main problem seems to be the use of request.json() wrapped in JSON.stringify(). Changing this to request.text() resolved the issue. I can then use JSON.parse to access the data after verifying the signature. The code is as follows:
const encoder = new TextEncoder()

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const signature = request.headers.get('CloudConvert-Signature')
  const key = await crypto.subtle.importKey(
    'raw',
    encoder.encode(CCSigningKey2),
    { name: 'HMAC', hash: 'SHA-256' },
    false,
    ['verify']
  )
  const data = await request.text()
  const verified = await crypto.subtle.verify(
    'HMAC',
    key,
    hexStringToArrayBuffer(signature),
    encoder.encode(data)
  )
  if (!verified) {
    return new Response('Verification failed', {
      status: 401,
      headers: {
        'Cache-Control': 'no-cache'
      }
    })
  }
  return new Response(null, {
    status: 204,
    headers: {
      'Cache-Control': 'no-cache'
    }
  })
}

function hexStringToArrayBuffer(hexString) {
  hexString = hexString.replace(/^0x/, '')
  if (hexString.length % 2 != 0) {
    return
  }
  if (hexString.match(/[G-Z\s]/i)) {
    return
  }
  return new Uint8Array(
    hexString.match(/[\dA-F]{2}/gi).map(function (s) {
      return parseInt(s, 16)
    })
  ).buffer
}
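As noted above, the raw body can then be parsed once verification has passed; a one-line sketch of that step:

// Only parse the body after the signature has been verified
const payload = JSON.parse(data) // e.g. payload.event, payload.job.tag as before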

Vue.js Axios responseType blob or json object

I have an Axios request to download an .xls file. The problem is that the object returned as a response from the backend isn't always a real file. If I return a JSON object instead of file data, how would I read that JSON?
Here is the function:
downloadReport() {
  let postConfig = {
    headers: {
      'X-Requested-With': 'XMLHttpRequest'
    },
    responseType: 'blob',
  } as AxiosRequestConfig
  axios
    .post(this.urls.exportDiscountReport, this.discount.id, postConfig)
    .then((response) => {
      let blob = new Blob([response.data], { type: 'application/vnd.ms-excel' });
      let url = window['URL'].createObjectURL(blob);
      let a = document.createElement('a');
      a.href = url;
      a.download = this.discount.id + ' discount draft.xlsx';
      a.click();
      window['URL'].revokeObjectURL(url);
    })
    .catch(error => {
    })
}
I would like to read the response, and if it contains data, skip creating the blob and initiating the download and instead just show a message. If I remove responseType: 'blob', the .xls file downloads as an unreadable, invalid file.
So the problem is that every returned response becomes a blob and I don't see my returned data in it. Any ideas?
I solved this by reading the blob response and checking whether it has a JSON status parameter, but that looks like over-engineering to me. Is there a better solution?
let postConfig = {
  headers: {
    'X-Requested-With': 'XMLHttpRequest'
  },
  responseType: 'blob',
} as AxiosRequestConfig
axios
  .post(this.urls.exportDiscountReport, this.discount.id, postConfig)
  .then((responseBlob) => {
    const self = this;
    let reader = new FileReader();
    reader.onload = function () {
      let response = { status: true, message: '' };
      try {
        response = JSON.parse(<string>reader.result);
      } catch (e) {}
      if (response.status) {
        let blob = new Blob([responseBlob.data], { type: 'application/vnd.ms-excel' });
        let url = window['URL'].createObjectURL(blob);
        let a = document.createElement('a');
        a.href = url;
        a.download = self.discount.id + ' discount draft.xlsx';
        a.click();
        window['URL'].revokeObjectURL(url);
      } else {
        alert(response.message)
      }
    }
    reader.readAsText(responseBlob.data);
  })
  .catch(error => {
  })
I also found the same solution here: https://github.com/axios/axios/issues/815#issuecomment-340972365
Still looks way too hacky.
Have you tried checking the responseBlob.type property? It gives the MIME type of the returned data.
So for example you could have something like this:
const jsonMimeType = 'application/json';
const dataType = responseBlob.type;
// The instanceof Blob check here is redundant, however it's worth making sure
const isBlob = responseBlob instanceof Blob && dataType !== jsonMimeType;

if (isBlob) {
  // do your blob download handling
} else {
  responseBlob.text().then(text => {
    const res = JSON.parse(text);
    // do your JSON handling
  });
}
I find this much simpler, and it works for me; however, it depends on your backend setup. The blob response is still a text response, but it's handled slightly differently.
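Wired into the axios call from the question, the check might look roughly like this (a sketch; with responseType: 'blob', response.data is the Blob, and the alert mirrors the question's fallback):

axios
  .post(this.urls.exportDiscountReport, this.discount.id, postConfig)
  .then(({ data }) => {
    if (data.type !== 'application/json') {
      // blob download handling, as in the original downloadReport()
    } else {
      data.text().then(text => {
        const res = JSON.parse(text) // the backend's JSON message
        alert(res.message)
      })
    }
  })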

AWS Lambda S3 GET/POST - SignatureDoesNotMatch error

I have had a Lambda Node.js function up and running for about six months without any issue. The function simply takes an object and copies it from one bucket to another.
Today, I have started getting:
"SignatureDoesNotMatch: The request signature we calculated does not
match the signature you provided. Check your key and signing method."
The code I am using is pretty simple; any suggestions on how I could fix this?
var aws = require('aws-sdk');
var s3 = new aws.S3({apiVersion: '2006-03-01'});

exports.handler = function(event, context) {
  var to_bucket = 'my_to_bucket/test';
  var from_bucket = event.Records[0].s3.bucket.name;
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  var size = Math.floor(event.Records[0].s3.object.size / 1024);

  s3.getObject({Bucket: from_bucket, Key: key}, function(err, data) {
    if (err) {
      // send a webhook
    }
    else {
      s3.putObject({Bucket: to_bucket, Key: key, Body: data.Body, ContentType: data.ContentType},
        function(err, data) {
          if (err) {
            // send a webhook
          }
          else {
            // send a webhook
          }
        });
    } // end else
  }); // end getobject
};
UPDATE:
I have discovered that sending to the bucket root works fine, but sending to any subfolder of the same bucket fails. I do send to a subfolder; I simplified the code above initially, but I've updated it to show a subfolder in the to_bucket.
I found a fix for this. After realising it was due to the folder inside the bucket, and not just sending to a bucket root, I searched and found the following post: https://github.com/aws/aws-sdk-go/issues/562
It looks like the bucket should not include the subfolder; instead, the key should. Why this has worked until now is a mystery. Here is the replacement code:
var aws = require('aws-sdk');
var s3 = new aws.S3({apiVersion: '2006-03-01'});

exports.handler = function(event, context) {
  var to_bucket = 'my_to_bucket';
  var from_bucket = event.Records[0].s3.bucket.name;
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  var size = Math.floor(event.Records[0].s3.object.size / 1024);

  s3.getObject({Bucket: from_bucket, Key: key}, function(err, data) {
    if (err) {
      // send a webhook
    }
    else {
      key = 'subfolder/' + key;
      s3.putObject({Bucket: to_bucket, Key: key, Body: data.Body, ContentType: data.ContentType},
        function(err, data) {
          if (err) {
            // send a webhook
          }
          else {
            // send a webhook
          }
        });
    } // end else
  }); // end getobject
};