We are using Amazon SNS for our push notification service. It worked fine until I tried to include a TTL in MessageAttributes. For testing purposes I want to set the TTL to 60 seconds.
I use RestClient; I do not use the Amazon SDK.
def publish(endpoint_arn, message, message_attribute)
  params = {
    :TargetArn => endpoint_arn,
    :Message => JSON.dump(message),
    :MessageStructure => 'json',
    :MessageAttributes => {
      message_attribute => {
        :DataType => 'String',
        :StringValue => '60'
      }
    }
  }
  post(:Publish, params)
end
message_attribute is a string containing 'AWS.SNS.MOBILE.GCM.TTL'.
def post(command, params)
  params.merge!(default_post_params(command))
  params[:Signature] = calculate_signature(params, 'POST')
  response = RestClient.post(SNS_URL, params)
end
What's wrong in the code above? The Amazon documentation says what I've done is correct.
Amazon documentation: http://docs.aws.amazon.com/sns/latest/dg/sns-ttl.html
PS: default_post_params holds the generalised POST params (access key, SNS URL, and so on), and it works fine. Only after including the MessageAttributes key does it say:
<Error>
  <Type>Sender</Type>
  <Code>MalformedQueryString</Code>
  <Message>Keys may not contain [</Message>
</Error>
The error comes from the way the nested MessageAttributes hash is serialized: RestClient flattens nested hashes into bracketed keys (for example MessageAttributes[AWS.SNS.MOBILE.GCM.TTL][DataType]), and the AWS Query API rejects parameter names containing [.
Each message attribute has two keys:
1. DataType
2. StringValue
Send these two keys and their values as flat, properly keyed parameters instead of a nested dictionary, and it will work.
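For reference, the SNS Publish Query API expects each message attribute flattened into indexed entry parameters rather than a nested, bracketed structure. A sketch of the wire format this should produce (the attribute name and value are taken from the question; parameter names are as in the SNS API reference):
MessageAttributes.entry.1.Name=AWS.SNS.MOBILE.GCM.TTL
MessageAttributes.entry.1.Value.DataType=String
MessageAttributes.entry.1.Value.StringValue=60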
I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
  <AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
  <RequestId>DKQ55DK3XJBYGKQ6</RequestId>
  <HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4'
  });
  const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
  };
  s3.getSignedUrl('getObject', params, (error, url) => {
    res.send(`<p>${url}</p>`);
  });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure the signature v4 headers/form-data fields are in the correct order in your request.
I had the same exact issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
This error message, and other misleading error messages, can occur if you don't use the correct order of security fields. In my case I was using the endpoint to create a presigned URL for POSTing a file upload. In that case, you need to make sure that the security-relevant fields in your form data are in the correct order. For signature version 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it's important to place your file AFTER the security data.
After that, the request works as expected.
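To make the ordering concrete, here is a minimal sketch of a browser-side upload against a presigned POST URL. Assume presigned holds the response of a createPresignedPost call ({ url, fields }) and file is the File to upload; both names are hypothetical:
const order = [
  'key',
  'x-amz-algorithm',
  'x-amz-credential',
  'x-amz-date',
  'policy',
  'x-amz-security-token',
  'x-amz-signature',
];

// Append the security fields in the required order first.
const form = new FormData();
order.forEach((name) => {
  if (presigned.fields[name] !== undefined) {
    form.append(name, presigned.fields[name]);
  }
});

// The file must come AFTER the security data.
form.append('file', file);

fetch(presigned.url, { method: 'POST', body: form });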
I can't say for certain, but I'm guessing this may have something to do with you using the old SDK. Here it is with v3 of the SDK; you may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

// ...

const client = new S3Client({ region: 'eu-west-1' });
const params = {
  'Bucket': 'my-bucket-name',
  'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);

// In v3, getSignedUrl returns a promise rather than taking a callback.
getSignedUrl(client, command).then((url) => {
  res.send(`<p>${url}</p>`);
});
I'm using Node in a Docker container. Locally I use my IAM keys for creating, reading and deleting files in an S3 bucket, as well as for creating presigned POSTs. When up on a Fargate container, I create a taskRole and attach a policy which gives it full access to S3.
taskRole.attachInlinePolicy(
  new iam.Policy(this, `${clientPrefix}-task-policy`, {
    statements: [
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['S3:*'],
        resources: ['*'],
      }),
    ],
  })
);
With that role, I can create, read and delete files with no issues from the API. When the API tries to create a presigned POST, however, I get the error:
Error: Unable to create a POST object policy without a bucket, region, and credentials
It seems super strange to me that I can run the other operations, but it fails with the presigned POST, especially since all my S3 actions are allowed.
const post: aws.S3.PresignedPost = await s3.createPresignedPost({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
Here is the code I use. I am logging the bucket and key, so I'm positive that they are valid values. One thought I had: when running locally I call aws.configure to set my keys, but in Fargate I purposefully omit that. I assumed it was picking up the right credentials, since the other S3 operations work without fail. Am I approaching this right?
When using IAM role credentials with the AWS SDK, you must either use the asynchronous (callback) version of createPresignedPost or guarantee that your credentials have been resolved before calling the await version of this method.
Something like this will work with IAM-based credentials:
const s3 = new AWS.S3()

const _presign = params => {
  return new Promise((res, rej) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) return rej(err)
      return res(data)
    })
  })
}

// await _presign(...) <- works
// await s3.createPresignedPost(...) <- won't work
Refer: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
I'm trying to use AWS Amplify and can't find a good reference. I can find a guide, but not a reference. If I make a call to Storage.get, such as in the code snippet below, and test.txt doesn't exist, what is returned?
Storage.get('test.txt')
  .then(result => console.log(result))
  .catch(err => console.log(err));
I'm finding that it returns a URL that results in a 404.
As of Amplify 0.4.7, the intended behaviour is to return a URL that results in a 404.
If you want to avoid the 404, you can check for the presence of the file using Storage.list(), or you can attempt to pre-load the URL, with some exception handling, before actually using it.
This seems like sub-optimal behaviour to me, especially with a framework like Angular, so I've submitted a feature request.
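For example, a rough sketch of the Storage.list() approach (this assumes your Amplify version resolves the list to an array of objects with a key property; check the result shape in your version):
import { Storage } from 'aws-amplify';

// Only fetch the URL if the key is actually present in the bucket.
Storage.list('test.txt')
  .then((results) => {
    if (results.some((item) => item.key === 'test.txt')) {
      return Storage.get('test.txt').then((url) => console.log(url));
    }
    console.log('test.txt does not exist');
  })
  .catch((err) => console.log(err));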
I was trying to find out if the object exists in the bucket before creating a new one, and this is how I did it; hope it helps.
// Make a GET request for the object you want, calling it by its name
await Storage.get("key")
  .then((response) => { // The response is a URL to the S3 object
    fetch(response).then((result) => { // fetch the URL
      if (result.status === 200) { // the file exists
        console.log("file exists in the bucket");
      } else { // if the status is 403 or something else, the S3 object doesn't exist
        console.log("file doesn't exist");
      }
    });
  })
  .catch((err) => console.log(err));
NOTE:
My S3 bucket permission is public read access. If you have different bucket permissions, this solution might not work for you.
I'm a beginner with Amazon's Lambda and API implementations.
I'm deploying a very simple API: a very simple Python 2.7 Lambda function that prints "Hello World", which I trigger with API Gateway. However, when I click on the Invoke URL link, it tells me {"message": "Internal server error"}.
So I'm trying to see what is wrong. When I click on the API itself, I can see the following greyed out in my Method Execution: "Integration Response: Proxy integrations cannot be configured to transform responses."
I have tested many different configurations but I still face the same error, and I have no idea why this step is greyed out.
I had the same problem when trying to integrate API Gateway and a Lambda function. After spending a couple of hours on it, I figured it out.
When you create a new resource or method, Use Lambda Proxy integration is set by default, so you need to remove it. Go to Integration Request and untick Use Lambda Proxy integration.
Then in your Resources, under the Actions tab, choose Enable CORS.
Once this is done, deploy your API once again and test the function.
Good luck...
The Lambda response should be in a specific format for API Gateway to process it. You can find the details in this post: https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/
exports.handler = (event, context, callback) => {
  var responseBody = {
    "key3": "value3",
    "key2": "value2",
    "key1": "value1"
  };
  var response = {
    "statusCode": 200,
    "headers": {
      "my_header": "my_value"
    },
    "body": JSON.stringify(responseBody),
    "isBase64Encoded": false
  };
  callback(null, response);
};
My API was working in Postman but not locally when I was developing the front end. I was getting the same errors when trying to enable CORS on my resources for GET, POST and OPTIONS. After searching all over, @aditya's answer got me on the right track, but I had to tweak my code slightly.
I needed to add res.statusCode and the two headers, and it started working.
// GET
// get all myModel
app.get('/models/', (req, res) => {
  const query = 'SELECT * FROM MyTable'
  pool.query(query, (err, results, fields) => {
    //...
    const models = [...results]
    const response = {
      data: models,
      message: 'All models successfully retrieved.',
    }
    //****** needed to add the next 3 lines
    res.statusCode = 200;
    res.setHeader('content-type', 'application/json');
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.send(response)
  })
})
If you're using Terraform for AWS resource provisioning, you can set the aws_api_gateway_integration type to "AWS" instead of "AWS_PROXY", and that should resolve your problem.
I have S3 buckets with a lifecycle policy that transitions their objects to Glacier x days after creation. It works fine, moving the objects to Glacier storage. Later, I go to retrieve those objects using the AWS PHP SDK 3.x API:
$result = $client->restoreObject([
    'Bucket' => '<string>', // REQUIRED
    'Key' => '<string>', // REQUIRED
    'RequestPayer' => 'requester',
    'RestoreRequest' => [
        'Days' => <integer>, // REQUIRED
        'GlacierJobParameters' => [
            'Tier' => 'Standard|Bulk|Expedited', // REQUIRED
        ],
    ],
    'VersionId' => '<string>',
]);
Normally it may take 3-5 hours to restore the object, so I need an SNS notification for that. As I am not using a Glacier vault, I am not getting any notification after the object is restored. How do I get an SNS notification on restore completion?
S3 event notifications now support s3:ObjectRestore:Completed. See the details in the AWS documentation. You can configure SNS to send you a notification when the Glacier restoration completes.
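For example, a bucket notification configuration along these lines (the topic ARN is a placeholder) publishes to the SNS topic when a restore completes:
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:restore-complete-topic",
      "Events": ["s3:ObjectRestore:Completed"]
    }
  ]
}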
We will not get an SNS notification for restore completion; instead we need to poll using the HeadObject API:
$result = $s3Client->headObject(array(
    'Bucket' => $sourceBucket,
    'Key' => "{$archiveKey}/{$sourceKeyname}",
));
and compare the HeadObject request's result:
if (isset($result['ongoing-request']) && (strcmp($result['ongoing-request'], '"false"') == 0) && ($result['StorageClass'] == 'GLACIER')) {
    $this->log('Survey data id ' . $surveyData['survey_data_id'] . ' in restored state', LogLevel::INFO);
}
and if the condition is true, we can trigger the action.