API Gateway works only with a publicly open EC2 instance - api

I have created a HelloWorld SpringBoot app and deployed it on an EC2 instance. I can invoke it from my browser at http://<My EC2 instance IPv4 address>/tax after adding the following inbound rules:
Custom TCP   TCP   8080   <my laptop IP>
SSH          TCP   22     <my laptop IP>
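For context, the app itself is trivial; a rough sketch of what is deployed (the class name and response text here are illustrative, not my exact code) looks like this:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Minimal HelloWorld sketch: one GET endpoint served by Spring Boot's
    // embedded Tomcat on the default port 8080.
    @SpringBootApplication
    @RestController
    public class HelloWorldApplication {

        @GetMapping("/tax")
        public String tax() {
            return "Hello World";
        }

        public static void main(String[] args) {
            SpringApplication.run(HelloWorldApplication.class, args);
        }
    }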
I went ahead and created an API Gateway GET method with Integration Type HTTP, pointing at the URL above. But when I test the GET method, I get:
"message": "Network error communicating with endpoint".
I tried various inbound rules without success. Only after creating an open inbound rule allowing All Traffic from anywhere did the API work.
Clearly I cannot keep this open inbound rule. What specific inbound rule should I create for my API to work? What IP should I use in the rule? Does API Gateway even have an IP?

From what I understand, your application needs to be publicly accessible to be used by API Gateway.
However, you can use client SSL certificates to restrict access to your HTTP backend so that only API Gateway can reach it. This documentation shows how to do that.
See this related discussion on the AWS forums.
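If your backend is the Spring Boot app from the question, a rough sketch of the server-side configuration could look like the following. This is only an illustration: it assumes Spring Boot 2.x with the embedded Tomcat, placeholder keystore names and passwords, and a truststore into which you have imported the client certificate generated in the API Gateway console.

    import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
    import org.springframework.boot.web.server.Ssl;
    import org.springframework.boot.web.server.WebServerFactoryCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // Sketch: require callers (i.e. API Gateway) to present a known client
    // certificate. Connections without it fail the TLS handshake, even though
    // the port itself is open to the world.
    @Configuration
    public class ClientCertConfig {

        @Bean
        public WebServerFactoryCustomizer<TomcatServletWebServerFactory> requireClientCert() {
            return factory -> {
                Ssl ssl = new Ssl();
                ssl.setEnabled(true);
                ssl.setKeyStore("classpath:keystore.p12");            // the server's own certificate (placeholder)
                ssl.setKeyStorePassword("changeit");                  // placeholder
                ssl.setTrustStore("classpath:apigw-truststore.p12");  // holds the API Gateway client certificate (placeholder)
                ssl.setTrustStorePassword("changeit");                // placeholder
                ssl.setClientAuth(Ssl.ClientAuth.NEED);               // reject callers without the client certificate
                factory.setSsl(ssl);
            };
        }
    }

The same settings can also be expressed in application.properties (server.ssl.client-auth=need and related keys); the point is simply that only callers holding the API Gateway client certificate can complete the TLS handshake with your backend.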


Adding a local service via customSD on a Sonos player does not show up in Music Services

I have the starting shell of a SMAPI service programmed in Node.js. The service is running off of a local IP address.
When I go to the customSD page for my Sonos Play:1 and add the service details, it returns 'Success!' upon clicking submit.
When I open the Sonos Controllers on both my desktop and mobile, the service does not show up in the list of music services you can add.
I have logging on the server turned up to max and there are no connection attempts (either SSL or HTTP) at all.
From what I've read, a running service is not needed for it to show up in the list. Once it's added via customSD it should show up in the Music Service settings.
TO NOTE: a self-signed cert is being used for HTTPS, for connection testing only. I have certificate request logging turned on and there is NO connection attempt from the Sonos Play:1 to the local server at all. From everything I've read this shouldn't matter: the service should still show up in the Music Services list, and the player only connects to the service when you try to add it.
The only thing I can think of is that the service needs to be registered with Sonos before it can be added via customSD; however, nothing I've read so far says that.
The other possibility is that the service needs to be running off a domain name for some reason. However, the documentation lists an IP address, so that would make it bad documentation.
I expect a service added via customSD to show up in the list, but it does not appear when clicking Add.
The issue is understood, and YvesGrantSonos has updated the documentation.
If you're developing locally, a non-HTTPS IP address should be used for both the secure and non-secure API URIs.
You should be able to enter a local IP address for the SMAPI service. For testing, this should be on the same local network as the Sonos player. Be sure to include the port number that the service is running on (e.g. http://192.168.1.2:8080/musicservice). You can use the same IP and port for both the secure and insecure connections.

Ant-Media-Server + SSL without Domain

Ant-Media-Server is running on a plain IP address without any domain. We set up this server just for streaming, so that it can be used from different domains that point to different servers.
Since all of our domains use SSL, we face the typical connection problem:
Mixed Content: The page at 'https://SOMEDOMAIN.com/QUERY' was loaded over HTTPS, but attempted to connect to the insecure WebSocket endpoint 'ws://1.2.3.4:56'. This request has been blocked; this endpoint must be available over WSS.
Ant Media already offers tutorials on how to install a Let's Encrypt SSL certificate, but sadly that is not available for bare IP addresses.
Apart from the Ant Media service, the server doesn't have NGINX, Node.js, Apache or any other HTTP server installed; the plan was just to use it for streaming by calling the IP address.
Do you have any ideas on how to solve that problem?
Unfortunately, this is not possible.
The goal of SSL is to ensure you are talking to the right domain name, besides encrypting the content between your users and your server.
Here are some alternatives:
Create an endpoint in your own app that proxies data to your streaming server (see the servlet sketch after this list).
Instead of playing the stream from the IP address directly, you can play:
/your-proxy-url?stream=http://yourIp.com:port/....
Note that using a proxy will make all the traffic pass through your web app.
As a reference, if you are using PHP on your website, you can get some ideas from here: https://gist.github.com/iovar/9091078
Create a reverse proxy in front of your web app that forwards the traffic to your IP address.
Neither solution changes your Ant Media Server; each just adds a new component between your users and your streaming server and terminates SSL on it.
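As a rough illustration of the first alternative, if your web app happened to run on a Java servlet container, a proxy endpoint might look like the sketch below. The servlet path and parameter name are made up to match the URL above, and a real version should also check that the requested URL points at your own streaming server so the endpoint cannot be abused as an open proxy.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical proxy endpoint: the HTTPS page requests
    // https://SOMEDOMAIN.com/your-proxy-url?stream=http://yourIp.com:port/...
    // and this servlet fetches the plain-HTTP stream from the Ant Media Server
    // and relays the bytes back over the already-secure connection.
    @WebServlet("/your-proxy-url")
    public class StreamProxyServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String upstream = req.getParameter("stream");
            if (upstream == null || !upstream.startsWith("http://")) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing or invalid stream parameter");
                return;
            }
            HttpURLConnection conn = (HttpURLConnection) new URL(upstream).openConnection();
            resp.setContentType(conn.getContentType());
            try (InputStream in = conn.getInputStream(); OutputStream out = resp.getOutputStream()) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
        }
    }

Note that this covers HTTP playback (e.g. HLS); proxying the WebSocket endpoint from the error message would instead need a WebSocket-capable reverse proxy, which is essentially the second alternative.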

OpenShift SSL edge termination risk

I have been reading the OpenShift documentation on secured (SSL) routes.
Since I use a free plan, I can only have an "Edge Termination" route, meaning SSL is terminated when external requests reach the router, and the content is then transmitted from the router to the internal service via plain HTTP.
Is this secure? I mean, part of the transmission is done over HTTP in the end.
The connection between where the secure connection is terminated and your application which accepts the proxied plain HTTP request is all internal to the OpenShift cluster. It doesn't travel through any public network in the clear. Further, the way the software defined networking in OpenShift works, it is not possible for any other normal user to see that traffic, nor can applications running in other projects see the traffic.
The only people who might be able to see the traffic are administrators of the OpenShift cluster, but the same people could access your application container as well. Any administrator of the system could access your application container even if you used a pass-through secure connection terminated in your application. So it is the same situation as with most managed hosting, where you rely on the administrators of the service to do the right thing.

Relationship between HTTPS Healthchecks and an HTTPS connection to a GCE Instance

I'm setting up HTTPS Load Balancing (LB) on Google Compute Engine (GCE). Key components are outlined in the Overview Diagram.
After successfully creating an HTTP Backend Service where 1 of 1 (GCE) instance is healthy, I decided to do the same for HTTPS. I'm using the Developer Console UI to do this.
The Healthcheck "wizard" provides a drop-down menu for the protocol, with the options HTTP and HTTPS.
The successful HTTP Healthcheck used the path :8080/admin/healthcheck.
Presumably the HTTPS Healthcheck will use the path :443/admin/healthcheck. The problem is that my HTTPS Healthchecks are failing. This was expected, since visiting https://[INSTANCE_IP]:443/admin/healthcheck in a browser could not connect either, so I didn't expect the Healthcheck to mark the instance as healthy.
How can I connect to https://[INSTANCE_IP]:443/admin/healthcheck over TLS? Do I merely need to upload a certificate and create a Certificate Resource in the Developer Console (I doubt it)?
I think it's a conceptual problem too.
The URL https://[INSTANCE_IP]:443/admin/healthcheck does exist; I think the Healthcheck fails because the instance doesn't implement TLS.
What is the relationship between uploading a certificate (i.e. creating a Certificate Resource) and a specific GCE instance accepting HTTPS requests, such that HTTPS Healthchecks pass?
After re-reading the documentation, I found that it states:
The client SSL session terminates at the load balancer. Sessions between the load balancer and the instance can either be HTTPS (recommended) or HTTP. If HTTPS, each instance must have a certificate.
It is the last sentence that I was trying to satisfy, because HTTPS Healthchecks use an HTTPS URL to check the 'health' of an individual instance:
https://[INSTANCE_IP]:443/admin/healthcheck
Since this was failing, I incorrectly assumed I needed to implement TLS on each instance for the Healthcheck to succeed. However, I do not require each instance to implement TLS (HTTPS), only the Load Balancer.
The final configuration I used involved creating a new HTTPS Target Proxy, which pointed to the same Backend Service used for the HTTP Target Proxy. In other words: two Target Proxies (HTTP and HTTPS), but only one Backend Service.
Since Healthchecks are employed by Backend Services, the only Healthcheck required was the (original) insecure one, i.e.
http://[INSTANCE_IP]:8080/admin/healthcheck
The next sentence is important too:
The Beta release of HTTPS load balancing only supports a single SSL certificate with a single load balancing service.
If the beta release only supports a single SSL certificate, I assume this certificate belongs to the LB, and therefore, on the beta at least, it's not actually possible to secure individual instances.

Foursquare Realtime User API

I have a problem with the app that I want to use as a sink for the push POST requests. I wrote it in Java as a straightforward servlet and verified that I can send POST requests to it, but the test push from my consumer's admin page says 404.
Is it possible that I can't run the push sink on a port other than 80? My secure Tomcat port is 8888. I don't see any calls from the Foursquare servers in my Tomcat access log.
Thanks!
As stated in Realtime API self signed certificate, 4sq currently seems unable to send POST requests to ports other than 443 (standard SSL).
I worked around this by using the mod_jk connector to let Apache2 forward requests for a special directory directly to Tomcat 7. This works for me.