How to use multiple outgoing interfaces with HAProxy (reverse proxy)

Is there a way to implement the following scenario (a kind of reverse proxy)?
I need one interface, say eth0, to listen for incoming HTTP/HTTPS traffic and then forward all of that traffic out through eth1 to eth8, using round-robin load balancing.
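One possible approach (a sketch, not from the question itself): HAProxy picks the outgoing interface indirectly, by binding each backend server's outgoing connection to a local source address with the `source` keyword. Assuming eth1 through eth8 carry addresses 192.168.1.1 through 192.168.1.8 and the upstream lives at 203.0.113.10 (all hypothetical), round-robin across the interfaces could look like:

frontend fe_in
    bind :80
    mode http
    default_backend be_out

backend be_out
    mode http
    balance roundrobin
    # each server line targets the same (hypothetical) upstream, but binds
    # its outgoing connection to a different local interface address
    server via_eth1 203.0.113.10:80 source 192.168.1.1
    server via_eth2 203.0.113.10:80 source 192.168.1.2
    # ... repeat for eth3 through eth7 ...
    server via_eth8 203.0.113.10:80 source 192.168.1.8

Note that the kernel's routing must also be set up so that traffic from each source address actually egresses the matching interface; HAProxy only chooses the source address.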

Related

HAProxy - load balance across different modes/protocols

I have a service that exposes an API over an in-house TCP protocol. I am currently moving that service to a REST API. The service listens for, and responds to, requests on both APIs simultaneously, as it may be some time before all the clients move over to the REST API.
Given that the APIs are different protocols, I believe I will need both tcp frontends/backends and http frontend/backends.
I would prefer to not have to deploy separate instances of my service for each protocol. Instead, I would like to have the same set of servers for each of the backends and have HAProxy load balance (leastconn) across them.
As an example:
frontend fe_custom
    bind :11111
    mode tcp
    use_backend be_custom

frontend fe_http
    bind :80
    mode http
    use_backend be_http

backend be_custom
    mode tcp
    balance leastconn
    server server1 192.168.10.100:11111
    server server2 192.168.10.101:11111

backend be_http
    mode http
    balance leastconn
    server server1 192.168.10.100:80
    server server2 192.168.10.101:80
So if a request is sent to my custom protocol on port 11111 and gets sent to be_custom:server1, I would like a subsequent request that comes in for my REST API on port 80 to get load balanced to be_http:server2.
Will this scenario just work if the same server is specified in different backends? If not, is this something that can be done in HAProxy?

Load Balance server HAProxy or alternative

I need a load-balancing server. The LB should listen on multiple ports and forward to backend servers on the same ports.
The logic should be: always send TCP requests to server A on the same port that was hit on the LB, and if server A is down, forward to server B.
Example:
LB port 10202 to Backend port 10202
LB port 10203 to Backend port 10203
Is it possible?
Unless I misunderstand, that sounds fairly simple. Just use a port range: any port in the range is accepted, and don't specify a port on the backend servers, i.e. keep the one the connection came in on. Marking RS002 as backup gives the failover behavior you describe, where it only receives traffic while RS001 is down:
listen L7_HTTP
    bind 10.0.0.20:10202-10203
    server RS001 127.0.127.1 check
    server RS002 127.0.127.2 check backup

How many total TCP connections are created for web socket call from browser to apache http server to web service

I would like to know how many TCP connections are created when a WebSocket call is made from a browser, through the Apache HTTP server, to a backend web service.
Does it create separate TCP connections from the browser to Apache and from Apache to the web service?
When Apache is proxying websockets, there is 1 TCP connection between the client and Apache and 1 TCP connection between Apache and the backend.
Apache watches both connections for activity and forwards reads from one onto the other.
This is the only way it can be in a layer 7 (Application Layer, HTTP) proxy. Something tunnelling at a much lower layer, like a NAT device or MAC forwarding IP sprayer could tunnel a single connection -- but not on the basis of anything higher up in the stack like headers.
The second connection is observable with netstat.
It is opened when mod_proxy_wstunnel calls ap_proxy_connect_to_backend(), which calls apr_socket_create(), which calls the portable socket() routine. In recent releases, where mod_proxy_http handles this tunneling automatically, the flow is similar, going through ap_proxy_acquire_connection().
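The two-connection model described above can be sketched with plain sockets. This is an illustrative stand-in, not Apache's actual code: `tunnel` accepts the client (connection 1), dials the backend (connection 2), and relays bytes in both directions, exactly the shape of a layer 7 proxy. `echo_backend` and the port numbers are hypothetical.

```python
import socket
import threading
import time

def pipe(src, dst):
    """Relay bytes from src to dst until src hits EOF."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate the half-close
    except OSError:
        pass

def tunnel(listen_port, backend_port):
    """Accept one client (TCP connection 1), dial the backend
    (TCP connection 2), and relay bytes both ways between them."""
    lsock = socket.socket()
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", listen_port))
    lsock.listen(1)
    client, _ = lsock.accept()
    backend = socket.create_connection(("127.0.0.1", backend_port))
    threads = [threading.Thread(target=pipe, args=(client, backend)),
               threading.Thread(target=pipe, args=(backend, client))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for s in (client, backend, lsock):
        s.close()

def echo_backend(port):
    """A stand-in backend service that echoes one message and exits."""
    s = socket.socket()
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(1)
    conn, _ = s.accept()
    conn.sendall(conn.recv(4096))
    conn.close()
    s.close()

if __name__ == "__main__":
    threading.Thread(target=echo_backend, args=(9302,), daemon=True).start()
    threading.Thread(target=tunnel, args=(9301, 9302), daemon=True).start()
    time.sleep(0.2)  # crude: give both listeners time to start
    c = socket.create_connection(("127.0.0.1", 9301))
    c.sendall(b"hello")
    print(c.recv(4096).decode())  # the reply traversed both TCP connections
    c.close()
```

Running `netstat -tn` while this is active shows the two separate connections: client to proxy port, and proxy to backend port.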

Gcloud load balancing to the same host for two TCP connections

I'm using GCP like in the following schema:
TCP balancer -> backend-service -> MIG(my app) with auto scaling.
"My app" accepts commands on one TCP port (A) and sends notifications to subscribers on another TCP port (B).
I'm running my tests against the TCP LB's IP: the tests connect to port B on startup (i.e. to one of the instances of "my app"), and also open a connection to port A for each test.
I've run into a case where the port A and port B connections end up terminated on different hosts.
I am not sure how to circumvent this.
I mitigated the issue using --session-affinity=CLIENT_IP in the backend-service configuration, i.e. all connections from one client IP are directed to the same target.
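Concretely, the affinity setting can be applied with gcloud (the backend-service name and region here are placeholders):

gcloud compute backend-services update my-backend-service \
    --region=us-central1 \
    --session-affinity=CLIENT_IP

Note that CLIENT_IP affinity keys on the source address, so it only guarantees co-location of the two connections when both originate from the same client IP.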

How is TLS termination implemented in AWS NLB?

AWS NLB supports TLS termination
https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
NLB being a Layer 4 load balancer, I would expect it to work in passthrough mode, directing incoming packets to one of the backends without much state maintenance (apart from flow tracking).
Are there any details available on how AWS implements the TLS termination in NLB ?
Is it possible to do it with open source tooling (like IPVS or haproxy) or AWS has some secret sauce here ?
The TLS termination itself is just what it says it is: TLS is a generic streaming protocol one level up from TCP, so the LB can unwrap it in a generic way. The magic is that they keep the client IPs intact, probably with some very fancy routing, but it seems unlikely AWS will tell you how they did it.
In my SO question here, I have an example of how to terminate a TCP session in HAProxy and pass the unencrypted traffic to a backend.
In short, you need ssl on the frontend bind line, and both the frontend and backend must use tcp mode. Here is an example of terminating on port 443 and forwarding to port 4567.
frontend tcp-proxy
    bind :443 ssl crt combined-cert-key.pem
    mode tcp
    default_backend bk_default

backend bk_default
    mode tcp
    server server1 1.2.3.4:4567
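To sanity-check a setup like this (the hostname is a placeholder; assumes the openssl CLI is installed), you can confirm that the proxy, not the backend, is presenting the certificate:

openssl s_client -connect lb.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject

If the subject printed matches the certificate in combined-cert-key.pem, termination is happening at HAProxy and the backend is receiving plaintext on port 4567.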