I have an AWS environment that seems to lose connectivity to a server we are connected to via a static route. I want to be able to monitor the situation by sending an HTTP GET request to this remote server. Is this possible using CloudWatch? I am new to AWS and I have not found anything on this topic. My guess is that I am using the wrong lingo. Any guidance would be appreciated.
You can do it with a Route 53 health check over HTTP or TCP. You don't need to use Route 53 as your DNS service; you can use the health check feature on its own and have it trigger a CloudWatch alarm, an SNS notification, a Lambda function, etc.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-types.html
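Roughly, with boto3 (the IP address, resource path, and SNS topic ARN below are placeholders you'd replace with your own):

```python
import uuid
import boto3

route53 = boto3.client("route53")
# Route 53 health check metrics are published to CloudWatch in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Create an HTTP health check that GETs http://203.0.113.10:80/health every 30 seconds.
hc = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),   # any unique string, used for idempotency
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",     # placeholder: your remote server's IP
        "Port": 80,
        "Type": "HTTP",                  # or "TCP" if a port check is enough
        "ResourcePath": "/health",       # placeholder path for the GET request
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
hc_id = hc["HealthCheck"]["Id"]

# Alarm (and notify an SNS topic) when the health check reports unhealthy.
cloudwatch.put_metric_alarm(
    AlarmName="remote-server-unreachable",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",      # 1 = healthy, 0 = unhealthy
    Dimensions=[{"Name": "HealthCheckId", "Value": hc_id}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-alerts"],  # placeholder topic
)
```

One caveat: Route 53 health checkers call your endpoint from the public internet, so this only works if the server is reachable from outside; if it is only reachable over your static route/VPN, you would need to run the probe from inside the VPC instead.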
Related
I have an EC2 instance that runs some very occasionally needed services. The application that uses the services will keep retrying until it finds the server, so I was wondering if I can set up a CloudWatch Events rule that triggers a Lambda to start my EC2 instance when HTTP traffic is directed at it. Any ideas on whether this is possible?
No, but it's possible with API Gateway: it can accept the HTTP request and trigger a Lambda function.
Inside the Lambda code you can write the logic to start the EC2 instance using the AWS SDK.
AWS Guide: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/ec2-example-managing-instances.html
Example code : https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/ec2/src/main/java/com/example/ec2/StartStopInstance.java
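The linked guide and example use the JavaScript and Java SDKs; the same logic as a Lambda handler in Python (boto3) is roughly this, with the instance ID supplied as a placeholder environment variable:

```python
import os
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    """Invoked by API Gateway; starts the instance if it isn't already running."""
    instance_id = os.environ["INSTANCE_ID"]  # e.g. "i-0123456789abcdef0" (placeholder)

    state = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]["State"]["Name"]

    if state not in ("running", "pending"):
        ec2.start_instances(InstanceIds=[instance_id])

    # Response in the shape API Gateway proxy integration expects.
    return {"statusCode": 202, "body": f"instance {instance_id} is {state}"}
```

The Lambda's execution role needs ec2:DescribeInstances and ec2:StartInstances permissions.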
I would like to know whether API Connect has a built-in way to check the health of an API in production (like Spring Boot's actuator/health endpoint).
If not, what is the recommended way to implement a health check for each of the APIs we are about to develop in API Connect?
Regards,
Martand
API Connect v2018 portals do not have a built-in health check; it was removed in v5. There is an RFE requesting this functionality.
For the gateway component the best approach is a tcp_half_open health check on the IP/port that your gateway is listening on. Note that it has to be tcp_half_open, otherwise your appliance will be spammed with TLS handshake errors.
This confirms that the gateway is up and serving requests. You should also check that the management port is responding, as your gateway might not be synchronizing with the APIM and could be serving up an old API.
Health checks on an individual API are a little more difficult and would need to be added as a separate operation in your API spec; a rough sketch of the backend side follows.
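The backend target for that health operation can be something very small. A minimal illustration (plain Python, nothing API Connect specific; the /health path is an arbitrary choice):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The health operation in the API spec would route GET /health here.
        if self.path == "/health":
            body = json.dumps({"status": "UP"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```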
Is there built-in support for enabling SSL on Azure Container Instances? If not, can we hook it up to SSL providers like Let's Encrypt?
There is nothing built-in today. You need to load the certs into the container and terminate SSL there. Soon, we will enable support for ACI containers to join an Azure virtual network, at which point you could front your containers with Azure Application Gateway and terminate SSL there.
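If you do terminate SSL inside the container, a minimal sketch in Python looks like this; the certificate and key paths are placeholders for whatever you bake into or mount into the image:

```python
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello over TLS\n")

# Terminate TLS in-process using a cert shipped with the container.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="/certs/fullchain.pem",   # placeholder paths
                        keyfile="/certs/privkey.pem")

server = HTTPServer(("0.0.0.0", 443), Handler)
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```

Keep in mind that with this approach you also own certificate rotation inside the image, which is the drawback the answers below discuss.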
As said above, there is no built-in SSL support today when using ACI. I'm using Azure Application Gateway to publish my container endpoint via its HTTP-to-HTTPS bridge. This way, App Gateway needs a regular HTTPS certificate (and you can use whichever model works best for you, as long as you can introduce a .PFX file during provisioning or later during configuration), and it then uses HTTP to talk to your (internally facing) ACI-based container. This approach becomes more secure if you bind your ACI-based container to a VNET and restrict traffic from elsewhere.
To use SSL within the ACI container itself, you'd need to introduce your certificate while provisioning the container and then somehow automate certificate renewal before expiration. As this is not supported in a reasonable way, I chose to use App Gateway to solve it. You could also use API Management, but that is obviously slightly more expensive and introduces a lot more moving parts.
I blogged about this configuration here and the repo with provisioning scripts is here.
You can add SSL support at the API Gateway and simply configure the underlying API over HTTP.
You will need the secret key to execute the API method above.
You can also access the underlying API hosted on the Azure Container Instance directly. This method does not require a JWT token, as it is a demo API.
I have been reading the OpenShift documentation on secured (SSL) routes.
Since I use a free plan, I can only have an "edge termination" route, meaning TLS is terminated when external requests reach the router, and content is then transmitted from the router to the internal service over HTTP.
Is this secure? After all, part of the transmission is done over plain HTTP.
The connection between where the secure connection is terminated and your application which accepts the proxied plain HTTP request is all internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, the way the software defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see the traffic.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but those same people could also access your application container. Administrators of the system could access your application container even if you were using a passthrough secure connection terminated inside your application. So it is the same situation as most managed hosting, where you rely on the administrators of the service to do the right thing.
All calls are made through the Tyk proxy to access the remote API.
How can I make the remote API available only through the Tyk proxy?
The two options mentioned here: How to secure remote api for calls not coming from tyk? are both very good; either could work. I don't think there is a setting within Tyk to limit this; it would have to be done on the upstream side.
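For illustration, if the option you choose is a shared secret header that Tyk adds to upstream requests (Tyk can inject headers via its transform middleware), the upstream side only needs a small check; the header name and secret below are hypothetical:

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical shared secret, configured both here and in Tyk's header injection.
SHARED_SECRET = os.environ.get("PROXY_SHARED_SECRET", "change-me")

class UpstreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        supplied = self.headers.get("X-Proxy-Secret", "")  # hypothetical header name
        if not hmac.compare_digest(supplied, SHARED_SECRET):
            # Reject anything that did not come through the Tyk proxy.
            self.send_response(403)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), UpstreamHandler).serve_forever()
```

Whichever mechanism you pick, the enforcement has to live on the upstream API itself (or on the network in front of it), as noted above.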