Internal Server Error - S3 integration with API Gateway via POST method

I'm trying to upload a file to an S3 bucket via an AWS API Gateway integration with S3.
I have created an API Gateway and integrated it with Amazon S3, with both PUT and POST methods. When I try to upload via the POST method, I get an Internal Server Error; uploading via the PUT method works fine. However, my requirement is to use the POST method only.
I have attached both my API Gateway configuration and my Postman test results.
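For context, this is roughly the request being made (the invoke URL is a placeholder): the PUT variant succeeds, while the same call with POST returns the Internal Server Error.

// Hypothetical invoke URL for the API Gateway resource that proxies to S3.
const url = "https://example.execute-api.us-east-1.amazonaws.com/prod/my-bucket/test.txt";

// Uploading via PUT works:
await fetch(url, { method: "PUT", headers: { "Content-Type": "text/plain" }, body: "hello" });

// The identical request via POST returns 500 Internal Server Error:
await fetch(url, { method: "POST", headers: { "Content-Type": "text/plain" }, body: "hello" });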

Related

How to block API call if not from my website?

I have an application built around AWS:
a Lambda function
an API Gateway calling the Lambda, which must be called with an API key
an S3 bucket hosting a static website, which calls the API Gateway
How can I secure the calls to the API Gateway so that it cannot be called from anywhere but my S3 bucket?
Some solutions have already come to mind, like:
proxy: helps hide the API key, but anyone accessing the proxy can still call the API, right? (see the sketch after this list)
IP whitelisting: I can't know which IP range the bucket is using, so I can't do that
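To make the proxy idea concrete, here is a minimal sketch of a Lambda that forwards requests and injects the API key server-side (the endpoint URL and environment variable are placeholders); the caveat above still holds, since anyone who can reach the proxy can call the API through it:

// Proxy Lambda (Node.js 18+ runtime, which provides a global fetch):
// the browser calls this function's endpoint, and the x-api-key never
// leaves the server side.
export const handler = async (event: { body: string | null }) => {
  const response = await fetch(
    "https://example.execute-api.us-east-1.amazonaws.com/prod/resource", // placeholder
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-api-key": process.env.API_KEY ?? "", // key stays in the Lambda environment
      },
      body: event.body ?? "{}",
    }
  );
  return { statusCode: response.status, body: await response.text() };
};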
Thanks

Write putObject to S3 directly from HTTP API in API Gateway

My intention is to create an HTTP API on Amazon API Gateway that writes a file to S3 using the PutObject action via the S3 API (without calling Lambda in between). This is the PutObject request syntax: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestSyntax
I'm not sure if this is technically possible and I'm getting a 403 Forbidden: ForbiddenException response in Postman.
So far I have:
Created the S3 bucket (with CORS configured)
Created the HTTP API in API Gateway (with CORS configured), with a 'putObject' POST action
Configured an integration on the HTTP API to https://[s3-bucket-name].s3.us-east-1.amazonaws.com
Created a Postman request to the HTTP API 'invoke URL', with 'Host' and 'x-apigw-api-id' set in the headers
The ForbiddenException obviously indicates a permission issue, either on the HTTP API or on the S3 API behind it. I did configure a CloudWatch Log Group on the HTTP API, which is showing no entries, so it seems to be an HTTP API access issue.
I also suspect that I need to add Parameter Mappings to the HTTP API to pass in all of the necessary headers to the S3 putObject action.
My questions are:
Is this type of HTTP API integration direct to S3 possible?
What is the likely cause of the 403 Forbidden response from the service?
Would I use 'Append' Parameter Mappings in the HTTP API integration configuration to add the standard S3 API parameters (and avoid exposing them to the client)?
I managed to solve this myself. Answers to my own questions:
Is this type of HTTP API integration direct to S3 possible?
Yes. On my HTTP API I used an HTTP PUT integration that points to the S3 service endpoint (including the bucket name in the endpoint, as I had originally, is incorrect).
What is the likely cause of the 403 Forbidden response from the service?
I didn't get the request working from Postman; however, when I made the request from the browser it worked. I had to create a Blob in JavaScript before sending it as a request via navigator.sendBeacon() to the HTTP API endpoint URL.
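A minimal browser-side sketch of that approach (the invoke URL is a placeholder):

// Wrap the payload in a Blob, then send it to the HTTP API with sendBeacon.
const payload = new Blob([JSON.stringify({ hello: "world" })], {
  type: "application/json",
});
const queued = navigator.sendBeacon(
  "https://example.execute-api.us-east-1.amazonaws.com/putObject", // placeholder
  payload
);
console.log(queued ? "request queued by the browser" : "request rejected");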
Would I use 'Append' Parameter Mappings in the HTTP API integration configuration to add the standard S3 API parameters (and avoid exposing them to the client)?
I did have to use Parameter Mappings to get the S3 PutObject request to work from API Gateway. My configuration is shown below.
Screenshot of Parameter Mapping configuration in my HTTP API
Edit: I have discovered a problem with this approach: the HTTP API doesn't allow certain security-related headers to be set via Parameter Mappings. I was trying to set header.x-amz-acl: 'bucket-owner-full-control' but got the error message below:
Invalid mapping expression specified: Validation Result: warnings : [], errors : [Operations on header x-amz-acl are restricted]
It seems that modifying any security-related S3 API header isn't possible in the HTTP API. This is a major problem for calling the S3 API directly as it means that in order to function, the S3 bucket needs to be public.
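For anyone configuring the same mapping programmatically rather than in the console, here is roughly the equivalent call using the AWS SDK for JavaScript v3 (the API and integration IDs are placeholders). Since the validation appears to be server-side, I would expect this call to be rejected with the same error:

import { ApiGatewayV2Client, UpdateIntegrationCommand } from "@aws-sdk/client-apigatewayv2";

const client = new ApiGatewayV2Client({ region: "us-east-1" });

// Attempt the restricted 'Append' parameter mapping on the integration.
await client.send(
  new UpdateIntegrationCommand({
    ApiId: "a1b2c3d4",         // placeholder HTTP API ID
    IntegrationId: "e5f6g7h8", // placeholder integration ID
    RequestParameters: {
      "append:header.x-amz-acl": "bucket-owner-full-control",
    },
  })
);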

Upload json files through Amazon API gateway, S3, SQS and Lambda

I have my app running on an EC2 instance; it accepts a JSON file as input and returns an elaborated JSON file as output.
I need to handle many requests to the server, so I'm trying to configure AWS services.
My idea is to create an API Gateway that receives the JSON file as input and writes it to S3; SQS then reads the put notification and passes the request to the EC2 server, perhaps through a Lambda function.
The server then writes the elaborated JSON to another S3 bucket, and SNS sends a notification to the client.
Is this a correct way to use AWS services, or is there a better one?
That seems like a very complicated workflow for no good reason. What exactly is the point of using so many services just so your EC2 instance can get that JSON? You can have a direct endpoint to your EC2 instance. If you want API Gateway as a wrapper on your endpoints, you can have that too. But just send the JSON directly to EC2, or through API Gateway to EC2, instead of API Gateway -> S3 -> SQS -> Lambda -> EC2.
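A minimal sketch of the direct call (the endpoint URL is a placeholder): POST the JSON straight to the EC2-hosted app, or to an API Gateway route that proxies to it, and read the elaborated JSON from the response.

const response = await fetch("https://api.example.com/elaborate", { // placeholder endpoint
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: "data" }),
});
const elaborated = await response.json();
console.log(elaborated);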

Unable to consume REST API in WSO2 API Store

Have installed API Manager 1.10.0 on a single machine and got everything running. Created and published an API containing OpenStack's Keystone URL. However, when I try to consume the API via the API Console in the API Store, I get the MANAGEMENT CONSOLE as the response.
Have looked at the curl command sent, and the IP is not right.
Curl request from API Console
Keystone API URLs
Why am I not able to use the API? Why is the Production endpoint in the API overview not used? (It works perfectly fine with a REST client, or even with the same curl request once I change the IP.)
When we construct API endpoint URLs, we use the following properties defined in the API Manager configuration file (api-manager.xml). If you haven't changed anything there, the default ports (8280/8243) will appear:
<GatewayEndpoint>http://${carbon.local.ip}:${http.nio.port},https://${carbon.local.ip}:${https.nio.port}</GatewayEndpoint>
Also, if possible, please try this in a private browsing window with an https session.
And when you replaced the curl request with the correct IP and port (8280 or 8243), did it work as expected?
Thanks
sanjeewa.

Monitoring access to AWS API Gateway resources using api-keys

I have built a gateway (using aws api gateway) in front of my rest api. I want to monitor the usage of resources on that api using the api-keys generated by api gateway. By 'usage' I mean which resources were requested and served to clients associated with an api key. Amazon claims that cloudtrail can be used to track gateway requests but the x-api-key header does not show up in cloudtrail logs. Has amazon provided an idiomatic way of doing this? Has anyone implemented this functionality in a custom manner? It seems reasonable that this functionality should be built in, however I cannot find how to do this anywhere.