Delay issue with WebSocket over SSL on Amazon's ELB

I followed the instructions from this link:
How do you get Amazon's ELB with HTTPS/SSL to work with Web Sockets? to set up ELB to work with WebSocket (having ELB forward 443 to 8443 in TCP mode). Now I am seeing this issue with wss: the server sends message1 and the client does not receive it; after a few seconds the server sends message2, and the client receives both messages (each message is around 30 bytes). I can reproduce the issue fairly easily. If I set up port forwarding with iptables on the server and have the client connect directly to the server (port 443), I don't have the problem. Also, the issue seems to happen only with wss; ws works fine.
The server is running Jetty 8.
I checked EC2 forums and did not really find anything. I am wondering if anyone has seen the same issue.
Thanks

From what you describe, this is most likely a buffering issue with ELB; quick research suggests that this is indeed the case.
From the ELB docs:
When you use TCP for both front-end and back-end connections, your
load balancer will forward the request to the back-end instances
without modification to the headers. This configuration will also not
insert cookies for session stickiness or the X-Forwarded-* headers.
When you use HTTP (layer 7) for both front-end and back-end
connections, your load balancer parses the headers in the request and
terminates the connection before re-sending the request to the
registered instance(s). This is the default configuration provided by
Elastic Load Balancing.
From the AWS forums:
I believe this is HTTP/HTTPS specific but not configurable but can't
say I'm sure. You may want to try to use the ELB in just plain TCP
mode on port 80 which I believe will just pass the traffic to the
client and vice versa without buffering.
Can you try to make more measurements and see how this delay depends on the message size?
Now, I am not entirely sure what you already tried and what did or did not fail. From the docs and the forum post, however, the solution seems to be using the TCP/SSL (Layer 4) ELB type for both front-end and back-end connections.
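For reference, here is a minimal sketch of creating a classic ELB with a pure TCP listener via the AWS CLI; the load balancer name and availability zone are placeholders, and the ports match the 443-to-8443 forwarding described above:

    # Classic ELB with a Layer 4 (TCP) listener on both sides, so WebSocket
    # frames pass through without HTTP parsing or buffering.
    aws elb create-load-balancer \
        --load-balancer-name my-ws-lb \
        --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=8443" \
        --availability-zones us-east-1a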

This resonates with Nagle's algorithm: the TCP stack can be configured to bundle small writes before sending them over the wire to reduce traffic. That would explain the symptoms, so disabling it is worth a try.
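If you want to rule this out, here is a minimal sketch of disabling Nagle's algorithm on a plain TCP socket in Python (Jetty exposes the same option on the Java side via Socket.setTcpNoDelay); the host and port are placeholders:

    import socket

    # Create a TCP socket and disable Nagle's algorithm so small writes
    # are sent immediately instead of being coalesced into larger segments.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect(("example.com", 8443))  # placeholder host and port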

Related

Authentication failed when coturn is behind a UDP load balancer like nginx

This may be a really simple question, because I am a newbie with TURN servers. I would like to run a coturn server behind a load balancer such as nginx.
My case is:
I have an nginx load balancer on a server at 192.168.1.10, listening on port 3478 for requests. This server also has a public IP address (82.222..).
I have a TURN server (coturn) at 192.168.1.11, running on port 3478 (this server is on the same network as the load balancer).
I'm testing my turn server connectivity with this site: https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
My problem is:
If I do NAT port forwarding from my public IP address to the coturn server without using the load balancer, the connectivity test succeeds. However, if I use nginx's UDP load balancing method to redirect requests to my TURN server, the connectivity test returns an "Authentication Failed" error.
Does anyone have an idea about this issue? Any help is appreciated.
You have not included any specifics about your nginx configuration, example config files, how you tested, etc. This makes it difficult to point you toward a solution.
Note that the coturn TURN server has some documentation about load balancing; it can be found in the wiki on GitHub: https://github.com/coturn/coturn/wiki/TURN-Performance-and-Load-Balance
That being said, I must agree with the comment from Philipp and say that DNS-based load balancing for TURN servers works very well. This scenario is mentioned briefly in the documentation above.
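For illustration, DNS-based load balancing can be as simple as publishing several A records for the same TURN hostname, one per server; the names and addresses below are placeholders:

    ; round-robin A records for the TURN service (placeholder zone data)
    turn.example.com. 300 IN A 203.0.113.10
    turn.example.com. 300 IN A 203.0.113.11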
Hope this helps, and good luck :)

iOS ASIHTTPRequest switching between HTTP/HTTPS connections

I have an iOS app which is using ASIHTTPRequest to talk to a REST server. The server supports connections on port 80 (HTTP) and port 443 (HTTPS) - I'm using a GeoTrust/RapidSSL certificate on port 443. The user can configure the app to choose what protocol they want to use. I'm monitoring the traffic on the server using WireShark and what I'm finding is that occasionally if the user switches between HTTP and HTTPS, when they next submit a request then I can see traffic for both protocols, then every request after that is for the newly selected protocol only.
Also, when the app is shut down, a few packets are sent, which I guess is some kind of cleanup. The type of these final packets (HTTP/HTTPS) depends on what protocol the app has been using. If the app has used both HTTP and HTTPS during the same session, then both HTTP and HTTPS packets are sent when the app is shut down. These scenarios don't seem right to me and suggest that my ASIHTTPRequest is not being completely cleared down. I occasionally get an error when my request completes with the response 'HTTP/0.9 200 OK' but returns no data, and I think this is caused by trying to communicate with port 443 using HTTP.
Can anybody confirm my suspicions are true? Is there some command I should be using after an ASIHTTPRequest to clear it down so the next request can be sent on a different protocol?
What you are seeing sounds like what HTTP persistent connections are meant to do; see http://en.wikipedia.org/wiki/HTTP_persistent_connection and so on.
There's nothing you need to do; none of this is doing any harm. The few HTTP packets you see when switching protocols are just the old socket getting closed down, I believe. I presume you are just seeing packets to TCP port 80 and aren't seeing any packets with data / actual HTTP requests.
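For illustration, with HTTP/1.1 a connection like the one below stays open for reuse after the response unless either side sends Connection: close, which is why the socket teardown only shows up later (the host and path are placeholders):

    GET /resource HTTP/1.1
    Host: api.example.com
    Connection: keep-alive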

disable request buffering in nginx

It seems that nginx buffers requests before passing them to the upstream server. While this is OK for most cases, for me it is very bad :)
My case is like this:
I have nginx as a frontend server to proxy 3 different servers:
1. apache with a typical PHP app
2. shaveet (an open source comet server) built by me with Python and gevent
3. a file upload server, also built with gevent, that proxies uploads to Rackspace Cloud Files while accepting the upload from the client
#3 is the problem. Right now, nginx buffers the whole request and then sends it to the file upload server, which in turn sends it to Cloud Files, instead of forwarding each chunk as it arrives (that would make the upload faster, as I can push 6-7 MB/s to Cloud Files).
The reason I use nginx is to have 3 different domains on one IP; if I can't do that, I will have to move the file upload server to another machine.
As soon as this [1] feature is implemented, nginx will be able to act as a reverse proxy without buffering uploads (big client requests).
It should land in 1.7, which is the current mainline.
[1] http://trac.nginx.org/nginx/ticket/251
Update
This feature has been available since 1.7.11 via the directive
proxy_request_buffering on | off;
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
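For example, a minimal sketch of an unbuffered upload location (the upstream name and path are placeholders):

    location /upload {
        proxy_request_buffering off;        # stream the request body to the upstream as it arrives
        proxy_http_version 1.1;             # needed so chunked request bodies can also be streamed
        proxy_pass http://upload_backend;   # placeholder upstream
    }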
The Gunicorn docs suggest using nginx in front to buffer clients and prevent slowloris attacks, so this buffering is likely a good thing. However, I do see an option further down on that link where it talks about removing the proxy buffer; it's not clear whether this is within nginx or not, but it looks as though it is. Of course, this is under the assumption that you have Gunicorn running, which you do not. Perhaps it's still useful to you.
EDIT: I did some research, and that buffer-disable option in nginx is for outbound, long-polling data. nginx states on their wiki that inbound requests have to be buffered before being sent upstream.
"Note that when using the HTTP Proxy Module (or even when using FastCGI), the entire client request will be buffered in nginx before being passed on to the backend proxied servers. As a result, upload progress meters will not function correctly if they work by measuring the data received by the backend servers."
This is now available in nginx, since version 1.7.11.
See documentation
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
To disable buffering of the upload, specify:
proxy_request_buffering off;
I'd look into haproxy to fulfill this need.

Can HAProxy front both Web servers and SSL VPN on one IP and port?

I need a reverse proxy to front both the Lablz Web server and the Adito SSL VPN (an SSL Explorer fork) by sitting on one IP/port. I failed to achieve that with nginx, and I failed to use Adito as a generic reverse HTTP proxy.
Can HAProxy fall back to being a TCP proxy if it does not sense HTTP traffic?
In other words can it fall back to Layer 4 if its Layer 7 inspection determines this is not HTTP traffic?
Here is my setup
EC2 machine with one public IP (Elastic IP).
Only one port is open - 443.
Stunnel is sitting on 443 and is passing traffic to HAProxy (I do not like to use Stunnel but HAProxy does not have full support for SSL yet, unlike Nginx).
HAProxy must be configured to pass some HTTP traffic to one server (Apache server which fronts the SVN server) and the rest of the HTTP traffic to our Lablz Web/App server.
All non-HTTP traffic must be forwarded to Adito VPN.
This traffic is:
VNC, NX, SMB
... and all other protocols that Adito supports
I cannot rely on the source IP address or port to split traffic into HTTP and non-HTTP.
So, can such config be accomplished in HAProxy? Can any other reverse proxy be used for this? Let me know if I am not thinking right about HAProxy and an alternative approach is possible.
BTW, Adito SSL VPN is amazing and if this setup works we will be able to provide Lablz developers with a fantastic one-click single-login secure VNC-over-HTTPS access to their boxes in the cloud.
No solution exists for this except via Adito; please prove me wrong. But please do not say that VNC over SSH is better. Yes, VNC-over-SSH is faster and more secure, but it is also much harder (for our target user base) to set up, and it presumes that the user is behind a firewall that allows outbound traffic on port 22 (not always the case).
Besides, Adito is much more than a remote access gateway: it is a full-blown in-browser VPN, a software distribution platform, and more. I am not associated with the Adito guys - see my Adito post on our Lablz blog.
OK, first off, I'd use a simple firewall to divide HTTP from non-HTTP traffic. What you need is packet inspection to figure out what is coming in.
Neither haproxy nor nginx can do that. They are both made for web traffic, and I don't see how they could inspect traffic to guess what they are dealing with.
Update: I looked into this a bit, and with iptables you could probably use string matching to divide the traffic. However, that's all tricky, especially given the encrypted nature of the traffic. A friend of mine discovered l7-filter, and this looks like what you need. Let me know if this helps.
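For what it's worth, here is a hedged sketch of the iptables string-matching idea; it can only see plaintext, so it would have to run on traffic after Stunnel has decrypted it, and the port and mark value are placeholders:

    # Mark TCP payloads that start like an HTTP request so they can be
    # routed separately later; only meaningful on already-decrypted traffic.
    iptables -t mangle -A PREROUTING -p tcp --dport 8080 \
        -m string --string "GET /" --algo bm -j MARK --set-mark 1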

Determine SSL connection behind a load balancer

Looking for best practice here. We handle SSL at our load balancer level, and hence all connections from our load balancer to our web servers are HTTP. As a result, we have no way of telling what kind of connection the client made to our web server, since everything arrives over HTTP. We currently have 2 solutions: one is to have the load balancer append a port number to the URL string so that we can determine the kind of request (e.g., 80 for HTTP and 443 for HTTPS); the other is for the load balancer to append a special header when it gets an HTTPS request, so the web servers know the type of connection.
Do you see cons in either solution? Is there any best practice regarding SSL being applied at the load balancer level instead of the web server level?
I would prefer the header, I think. Adding something in the URL creates the possibility, however slim, that you'll collide with a query string parameter that an app wants to use. A custom header would be easier.
A third option could be to have SSL connections forwarded to a different back-end port, say 8080, so on the back end you know that port 80 connections were HTTP to begin with and port 8080 connections were 443 to begin with, even though they're both HTTP at that point.
I suggest using the header. A related concept is determining the IP address of the client (for logging purposes), since all requests to your web server appear to originate at the load balancer. The X-Forwarded-For header is customarily used there.
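For illustration, here is a minimal sketch of the header approach with an nginx TLS terminator in front of the web servers; the certificate paths and backend address are placeholders, and X-Forwarded-Proto is the header conventionally used for this:

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/tls/example.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/tls/example.key;

        location / {
            proxy_set_header X-Forwarded-Proto https;       # backend can tell the client used TLS
            proxy_set_header X-Forwarded-For $remote_addr;  # original client IP for logging
            proxy_pass http://127.0.0.1:8080;               # placeholder backend
        }
    }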