Amazon Application Load Balancer wss to ws forwarding problem - aws-application-load-balancer

There is a target group in an AWS Fargate cluster that manages Node.js applications inside Docker containers. Every application serves WebSocket connections (plain WebSocket, not socket.io!).
Behind the Application Load Balancer the connection is non-encrypted (HTTP / ws). Outside, however, it is HTTPS / wss. Thus, when an HTTPS request reaches the Application Load Balancer, it decrypts the request and forwards an HTTP request to a selected container.
The question is: how (and where) is it possible to configure wss->ws forwarding for WebSocket requests (they arrive at a specific URL)?
An HTTPS->HTTP rule performs a wss->HTTP transformation, which is clearly wrong. How can a wss->ws transformation be implemented, and is it possible at all?
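To make the setup concrete, here is a minimal sketch of what one of the containers behind the target group might look like. It assumes the `ws` npm package (v8-style import); the `/health` and `/socket` paths and the port are placeholders, not part of the original setup.

```ts
// Container side only: a plain-HTTP listener that also accepts WebSocket (ws)
// upgrades. TLS/wss is expected to be handled in front of it by the ALB.
import http from "http";
import { WebSocketServer } from "ws";

const server = http.createServer((req, res) => {
  // Plain HTTP endpoint so the target group health check has something to hit.
  if (req.url === "/health") {
    res.writeHead(200);
    res.end("ok");
    return;
  }
  res.writeHead(404);
  res.end();
});

// Attach the WebSocket handler to the same HTTP listener; upgrade requests
// arriving on the decrypted connection are handled here.
const wss = new WebSocketServer({ server, path: "/socket" });
wss.on("connection", (socket) => {
  socket.on("message", (data) => socket.send(data)); // echo, for illustration
});

server.listen(80);
```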

Related

Convert an existing HTTP relay server currently deployed as a Windows service to handle HTTPS requests

I have a custom-coded relay server application in VB.NET that is currently deployed as a Windows service.
It accepts HTTP web requests from a client using a TcpListener, parses the requests, and forwards them to another remotely hosted service via socket communication. The result from the service is then sent back by my relay server to the original client as an HTTP response.
This functionality works perfectly as of now, but I would now like to upgrade my relay server to accept HTTPS requests instead of HTTP.
I am not sure how to move ahead with this scenario.
I researched and found the following two options, but I am not sure which is better or more feasible:
One, I explicitly upgrade my current code to handle the HTTPS handshake, certificate validation, etc. (if so, how?); or two, can my current application be hosted on IIS to handle this scenario (if so, how)?
Thanks in advance.
A possible solution is to use IIS with Application Request Routing (ARR) as a reverse proxy in front of your service (it's not possible to "host" your service in IIS as such).
You could set up IIS/ARR with a certificate and a suitable HTTPS binding, then configure ARR to proxy the HTTPS requests on to your service listening on HTTP. No changes are required to your service's code.
Take a look at the following example: https://learn.microsoft.com/en-us/iis/extensions/url-rewrite-module/reverse-proxy-with-url-rewrite-v2-and-application-request-routing
In your case, your service listening on port 80 is no different from a website running in IIS. The above example is more complicated than you need (as it reverse-proxies two websites based on a URL prefix), but it gives you a starting point.
A further possible step to force all traffic to use HTTPS would be to change the port your service uses (e.g. to 8080 instead of 80), set up IIS to handle port 80 and redirect it to 443, and then use the 443 binding with ARR to proxy your traffic to your service on 8080.
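Purely as a conceptual illustration of what the reverse proxy does (this is not IIS/ARR configuration; the certificate paths, ports, and backend address are placeholders), the pattern is: terminate TLS at the proxy and replay the request to the existing HTTP service unchanged. A rough Node.js sketch:

```ts
// Sketch of the TLS-termination pattern ARR performs: HTTPS in, plain HTTP out.
import fs from "fs";
import https from "https";
import http from "http";

const tls = {
  key: fs.readFileSync("server.key"),  // placeholder certificate/key files
  cert: fs.readFileSync("server.crt"),
};

https
  .createServer(tls, (clientReq, clientRes) => {
    // Forward the decrypted request to the existing HTTP relay service.
    const proxied = http.request(
      {
        host: "127.0.0.1",
        port: 8080, // the relay service moved off port 80, as suggested above
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers,
      },
      (backendRes) => {
        clientRes.writeHead(backendRes.statusCode ?? 502, backendRes.headers);
        backendRes.pipe(clientRes);
      }
    );
    clientReq.pipe(proxied);
  })
  .listen(443);
```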

Is there a way to force an application to post using HTTPS instead of HTTP

I have an application that sends a request to a web service. Unfortunately the application is compiled and the link to the web service is embedded in the application as HTTP. (Yes, I know how dumb that is; I didn't write it.)
Recently, the third-party company stopped allowing HTTP requests; everything must be HTTPS.
The application runs as a webapp on Tomcat. The server is a Windows server.
Is there a way to intercept the call to this web service and force it to use https?
As you can't change the application's source code (it is compiled), and you can't change the web service (it is third-party) either, the only way to solve this problem is to place a proxy between the application and the web service. To do that (assuming the proxy runs on localhost), you need to do the following; a minimal sketch follows these steps.
As the web service URL is embedded in the compiled application, the hosts mapping needs to be changed (e.g. /etc/hosts, or C:\Windows\System32\drivers\etc\hosts on Windows) to override DNS so that the application sends its HTTP requests to our proxy. For example, if the HTTP request in the application is GET http://example.com/api/sample, then example.com needs to be mapped to 127.0.0.1 in the hosts file.
Run a proxy web server on localhost, listening on the same port as the web service. This proxy is a very simple web server (any backend technology can do it) that is only responsible for forwarding requests. This way, when the application sends an HTTP request to example.com, the request actually goes to the proxy server.
After receiving the HTTP request from the application, the proxy server extracts the request URL/headers/body and sends an HTTPS request to example.com's real IP address. Please note: this HTTPS request should carry a Host header whose value is example.com, as the third-party web service may check it.
Once the real response comes back from example.com, the proxy returns it to the application.
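Here is a minimal sketch of such a forwarding proxy in Node.js, assuming it runs on the same machine as the application. The real IP address, hostname, and port below are placeholders; in particular, the proxy must connect to the service's real IP (resolved out of band), because the hosts file now points example.com at 127.0.0.1.

```ts
// HTTP-in, HTTPS-out forwarding proxy (sketch). Values below are assumptions.
import http from "http";
import https from "https";

const REAL_IP = "203.0.113.10";      // the web service's real IP, found out of band
const SERVICE_HOST = "example.com";  // the hostname embedded in the application

http
  .createServer((clientReq, clientRes) => {
    // Re-issue the request over HTTPS against the real IP, keeping the original
    // Host header so the third-party service accepts it.
    const upstream = https.request(
      {
        host: REAL_IP,
        port: 443,
        path: clientReq.url,
        method: clientReq.method,
        headers: { ...clientReq.headers, host: SERVICE_HOST },
        servername: SERVICE_HOST, // SNI, so the certificate check still matches
      },
      (upstreamRes) => {
        clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
      }
    );
    clientReq.pipe(upstream);
  })
  .listen(80); // the same port the application already targets
```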
Of course, you could also use reverse engineering (a Java decompiler) to recover the application's "source code", change the web service URL, and recompile the webapp. However, since the application may need to be updated or upgraded and may not be under your control, this reverse-engineering approach is not recommended.
You could use a proxy script. Write it in any server-side language you want, for example PHP, point the API URL at this script, and have the script make the HTTPS request for you and pass the results back to your app.
You could also use Apache itself as the proxy and use something like: Apache config: how to proxypass http requests to https

SSL Termination at F5 or Zuul/Eureka/Services?

We have a few services running in our environment with Spring Cloud Netflix, Eureka and Zuul. Also, we use Spring Boot for developing the services.
We also use F5 as the hardware load balancer, which receives the external requests and routes them to one of the Zuul instances based on the configured rule.
As of now, we use HTTP for communication between the services. We now want to secure all communications via HTTPS.
All the services, including Zuul and Eureka, are scaled to two instances on separate machines for failover.
My question is: should I set up and enable HTTPS for each of the services, including Eureka, Zuul, and the other downstream services, or is it possible to use HTTPS only at the F5 and leave the other instances on HTTP?
I have heard of a feature called SSL termination/offloading, which is provided by most load balancers. I am not sure whether F5 supports it. If it does, would it make sense to use it for HTTPS and leave the rest on HTTP?
I feel this could reduce the complexity of setting up SSL on each of the instances (whose number can change in the future based on load) and also avoid the slowness inherent in SSL decryption and encryption.
Should I secure every instance, including Eureka/Zuul and downstream services, or just do SSL termination at the F5 alone?
If the back-end endpoints are HTTPS, then the load balancer has to balance at the TCP layer, as it cannot inspect the content. If the load balancer endpoints are HTTPS themselves, then there is usually little point in encrypting the internal traffic, and the load balancer can inspect the traffic and make smart decisions about where to route it (e.g. sticky sessions). If the application endpoint needs to know that the original request was HTTPS (which is often the case), then an HTTP header is added on the internal leg to advertise this, the de facto convention being the X-Forwarded-Proto header.
If you choose to leave the LB-to-app leg in the clear, then you need to make sure that the network segment is trustworthy and that your app endpoints are not reachable directly, bypassing the LB.
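As a small illustration of the X-Forwarded-Proto convention mentioned above (a sketch only; the port and redirect behaviour are assumptions, and in Spring Boot you would normally rely on its built-in forwarded-header support rather than hand-written code):

```ts
// Backend listening on plain HTTP behind the load balancer, but still able to
// tell whether the original client request was HTTPS via X-Forwarded-Proto.
import http from "http";

http
  .createServer((req, res) => {
    const proto = req.headers["x-forwarded-proto"] ?? "http";
    if (proto !== "https") {
      // Example policy: push plain-HTTP clients (or anything that bypassed the
      // load balancer) back to HTTPS.
      res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
      res.end();
      return;
    }
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("original request arrived over HTTPS\n");
  })
  .listen(8080);
```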

Capture Amazon S3 requests from Tomcat using Fiddler

My web application, sitting in Tomcat, reads files from Amazon S3 buckets. Is there a way to capture those requests? I am not sure what protocol it uses (S3?). I would like to capture the requests using Fiddler.
Any idea?
As far as I know, S3 typically uses HTTP/HTTPS for communication (REST, SOAP). Are you using a library to make your S3 calls? The library may not use the default proxy.
The question "Configuring Tomcat to communicate through proxy in Localhost - Fiddler" has general details on how to configure Tomcat to use the Fiddler proxy.

WCF 4.0 + wsHTTPBinding using F5 Load Balancer

I have a WPF app which connects to a back-end system through a WCF 4.0 interface using wsHttpBinding. The WCF service is behind an F5 load balancer.
My app works in development (no F5 load balancer), but when I deploy to production, it doesn't work. My F5 load balancer currently has only one real web server behind it.
This kind of question is commonly asked, but my specific question is the following:
In my scenario, the connection between client and load balancer uses wsHttpBinding, but the connection between the load balancer and the web server uses basic binding. Could this be a cause of the load balancer problem?
I'm not sure what you mean by "basic binding". The F5 should simply forward the request to the web service without changing the content. The only case where the F5 might change the message is if you're using HTTPS offloading, where the client and the LB talk over SSL but the connection between the LB and the web service is plain HTTP or Kerberos.
I suspect you've got an F5 setup problem. The way to test this would be to create a simple HTML page and publish it with IIS on your web server, then try to access that page from a browser on the client side of the load balancer. If you see the page, you know the F5 is forwarding the request properly. If not, you have an LB setup issue.
After that, try typing the URL of the web service into a browser and see if you get the WSDL page. If you can see the HTML page in the browser but don't see the WSDL page, then you know you have a setup problem with your web service.
You can also set up Fiddler on your web server and check the messages coming in to see if there's a difference in the content when you connect to the web service locally versus connecting over F5.