I have a question regarding web servers (such as nginx, Cherokee or Oracle iPlanet) and Java containers (such as GlassFish): can we control what happens to the connection if the user drops it before the request finishes?
When a browser opens an HTTP/HTTPS connection to a server, it hits the web server (nginx, Cherokee or Oracle iPlanet), which reverse-proxies the request to the Java container (GlassFish). The Java application then executes, doing quite a lot of work such as calculations, and finally needs to write to, say, 3 different databases. If it has finished writing to the 1st database - but not yet to the 2nd and 3rd - and the user closes the connection (by closing the browser window, losing the network connection, etc.), what happens to the process?
Specifically, I would like the process to CONTINUE until it finishes executing all the code. One way I know of is to spin the work off onto a new thread (sketched below), but that incurs extra computation cost. So, is there any setting/config I can use to make sure execution continues even though the user has broken the connection?
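For reference, the thread approach I mean would be roughly this (just a sketch; the class name and pool size are mine):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class DetachedWrites {
        // Shared pool so we don't create a new thread per request.
        private static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(4);

        // Called from the servlet after the first DB write; the remaining
        // writes run on the pool, so they finish even if the client has
        // already closed its connection.
        public static void finishWrites(Runnable remainingDbWrites) {
            EXECUTOR.submit(remainingDbWrites);
        }
    }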
With nginx, you can set proxy_ignore_client_abort on; and it will not close the connection to the backend if the client closes its connection.
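A minimal sketch of where that directive goes (the backend address is a placeholder for your GlassFish instance):

    location / {
        proxy_pass http://127.0.0.1:8080;   # reverse proxy to the Java container
        proxy_ignore_client_abort on;       # keep the upstream request running if the client aborts
    }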
Related
I have a simple webapp deployed in Tomcat 8, but some HTTP requests require slow database queries. Sometimes the HTTP client resets the connection, and I'd like to handle that in my webapp so I can cancel the slow query (whose result is no longer of interest).
The main question: how can I catch a connection reset from the client side while the server is still preparing the response? Is it possible? Having the thread interrupted would be the best mechanism, because I can easily handle an interrupt.
When the connection is broken on the client side, Tomcat does not interrupt the http-nio-X thread. Why not, and how can I make it happen?
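The workaround I'm considering, since Tomcat surfaces the abort only as an IOException (its ClientAbortException) when something is written to the dead socket, is roughly this sketch (all names are mine; the probe byte only works if the response format tolerates padding, and the IOException may arrive a probe or two late because of TCP buffering):

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.Statement;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    public class SlowQueryServlet extends HttpServlet {
        private DataSource dataSource; // assumed: injected or looked up via JNDI

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            try (Connection db = dataSource.getConnection();
                 Statement stmt = db.createStatement()) {
                // Run the slow query on a helper thread so the request thread
                // stays free to watch the client connection.
                Thread worker = new Thread(() -> {
                    try {
                        stmt.execute("SELECT SLEEP(60)"); // stand-in for the real slow query
                    } catch (Exception ignored) {
                        // cancelled or failed; nothing to do
                    }
                });
                worker.start();
                while (worker.isAlive()) {
                    try {
                        resp.getOutputStream().write(' '); // probe byte (padding)
                        resp.flushBuffer();                // pushes it onto the socket
                        Thread.sleep(1000);
                    } catch (IOException clientGone) {     // Tomcat's ClientAbortException
                        stmt.cancel();                     // JDBC: abort the in-flight query
                        return;
                    }
                }
                // worker finished normally: write the real result here
            } catch (Exception e) {
                // pool/SQL/interrupt errors; log these in real code
            }
        }
    }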
I have a question about how to make SSE work in a multi-server environment.
In the UI, there are two steps:
1. Subscribe to the stream:

   source = new EventSource('http://localhost:3000/stream');
   source.addEventListener('open', function(e) {
       $("#state").text("Connected");
   }, false);

2. The user posts to an API to update data.

After the user posts to the API, the server sends an event to the UI to update it.
In a single-server environment this works perfectly fine, no problem at all.
But in a multi-server environment it won't work. For example, I have two server instances and the UI subscribed to server 1, so server 1 holds the connection; but the data update arrives on server 2, and when the data changes there is no SSE connection on server 2. In this scenario, how can server 2 send the SSE event to the UI?
To make SSE work in a multi-server environment, do we need to adopt some storage solution to save connection information, so that any server instance can deliver SSE events to the UI accurately?
Let me clarify this more:
Yes, both server 1 and server 2 are behind a load balancer; they do not have to have the same URL. The UI is a pure front-end application, and could even be a mobile app. So if the UI sends an EventSource request to the load balancer of server 1, only that one instance can use the connection to send events back to the UI, right? And if we have multiple instances of server 1, any instance other than the current one can NOT send events back to the UI.
I believe this is a limitation of SSE, unless the connection can somehow be shared among all the instances. But how?
Thanks
If you have two servers, with different URLs, make one SSE connection (from each client) to each server.
Be aware of CORS restrictions, i.e. the same origin policy. (It works identically to xhr2 CORS, so fairly easy to google; my book also covers it in detail, chapter 9.)
If you have two servers behind a load balancer, which is presenting a single URL to the clients, then you just have to make sure the load balancer is configured correctly. I.e. to always pass through that socket to the correct server. If a back-end server dies, and needs replacing, the load balancer should close the SSE socket; the client will then auto-reconnect, and get a new back-end server.
The multiple servers behind the load balancer should either each hold their own data-push socket connection to a master data source, or should all poll the master data source.
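To make that concrete, here is a rough sketch (all names are mine, not a particular framework): each instance keeps its own list of connected SSE clients, and whichever instance holds a given client's socket is the one that writes the event when the polled master data source reports a change.

    import java.io.PrintWriter;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class SseBroadcaster {
        private final List<PrintWriter> clients = new CopyOnWriteArrayList<>();

        // Called by the /stream endpoint when a browser connects to THIS instance.
        public void register(PrintWriter out) {
            clients.add(out);
        }

        // Called whenever the poller sees a change in the master data source.
        public void broadcast(String json) {
            for (PrintWriter out : clients) {
                out.write("data: " + json + "\n\n"); // SSE wire format
                out.flush();
                if (out.checkError()) {              // client went away
                    clients.remove(out);
                }
            }
        }
    }

The point is that no connection state needs to be shared: each instance only ever writes to sockets it holds itself, and the shared master data source is what keeps the instances consistent.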
Imagine this situation (it is a real one):
There is a WCF client application on a laptop.
The laptop is connected to the internet via WiFi.
At work, the user does some work (request-reply operations) on the laptop, connected to the WCF service.
Then the user's laptop goes to sleep and the user goes home. At home, the user wakes the laptop, connects an HSDPA/3G modem (different interface & IP) and wants to continue working in the client application. Note that the application has not been closed.
The user (client application) should be authenticated and, if possible, the communication should be encrypted.
What are the best practices?
Create a new proxy for each operation? That seems very slow, because initializing a net.tcp connection with authentication is expensive.
Is the solution a basicHttp binding (+HTTPS) with InstanceContextMode.PerCall? Note that speed and the higher payload are a problem.
Or is the best solution something like a wrapper(Func<>) that loops until the operation finishes successfully (on failure, a new connection is created and the function is called again)? A rough sketch of that idea is below.
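The wrapper idea, roughly (I sketch it with Java generics here; in WCF it would be a Func<> over the proxy, and the retry limit is arbitrary):

    import java.util.function.Function;
    import java.util.function.Supplier;

    public class RetryingClient<C> {
        private final Supplier<C> channelFactory; // builds a fresh proxy/channel
        private C channel;

        public RetryingClient(Supplier<C> channelFactory) {
            this.channelFactory = channelFactory;
            this.channel = channelFactory.get();
        }

        // Run the operation; on a connection fault, rebuild the channel and retry.
        public <R> R call(Function<C, R> operation) {
            for (int attempt = 0; ; attempt++) {
                try {
                    return operation.apply(channel);
                } catch (RuntimeException connectionFault) {
                    if (attempt >= 2) throw connectionFault; // give up after 3 tries
                    channel = channelFactory.get();          // new connection, call again
                }
            }
        }
    }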
Thank you for any suggestions.
I've always kept the connection open only for as long as the unit of work requires. Basically, the connection is open only while the application is performing some processing that requires a WCF connection. Reconnecting adds overhead (and, depending on connection speed, latency), but it is also more reliable to have a freshly established connection to work with (lowest probability of failure), and I'm generally saving those resources for other purposes.
However, this all depends on what the application does. If the client is thin and the service is doing all the work, it may make sense to keep the connection open, since every function executes a method on the service. But with that comes failure checking and re-establishing the connection should it be unexpectedly severed.
Also, netTcp is going to be a lot faster than wsHttp, and I personally haven't seen much latency in establishing a netTcp connection (though I don't know what kind of authentication you're doing; mine has generally used Windows authentication).
I am trying to simulate a slow HTTP read attack against an Apache server running on my localhost.
But it seems the server does not complain and simply waits forever for the client to read.
This is what I do:
Request a huge file (say ~1 MB) from the HTTP server
Read the response from the server in a loop, waiting 100 seconds between successive reads
Since the file is huge and the client's receive buffer is small, the server has to send the file in multiple chunks. But on the client side, I wait 100 seconds between successive reads. As a result, the server keeps probing the client (TCP zero-window probes) and finds that the client's receive window size is zero, since the client has not yet read its receive buffer.
But it looks like the server never bothers to break the connection; it silently keeps probing the client, sends data whenever the client's window size is > 0, and then goes back to waiting for the client.
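The client loop looks roughly like this (a sketch; the file name, buffer sizes and port are placeholders):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class SlowReadClient {
        public static void main(String[] args) throws Exception {
            Socket sock = new Socket();
            sock.setReceiveBufferSize(256); // tiny buffer => tiny advertised TCP window
            sock.connect(new InetSocketAddress("localhost", 80));

            OutputStream out = sock.getOutputStream();
            out.write("GET /bigfile.bin HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            InputStream in = sock.getInputStream();
            byte[] buf = new byte[128];
            while (in.read(buf) != -1) { // read a little...
                Thread.sleep(100_000);   // ...then stall for 100 seconds
            }
            sock.close();
        }
    }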
I want to know whether there are any Apache config parameters I can set to break the connection from the server side after waiting some time for the client to read the data.
Perhaps this would be more useful to you (simpler, and it saves you time): http://ha.ckers.org/slowloris/ . It is a Perl script that sends partial HTTP requests; the Apache server leaves each connection open (now unavailable to new users). Executed in a Linux environment (Linux does not limit threads beyond hardware capability), you can effectively tie up all the open sockets and in turn prevent other users from accessing the server. It uses minimal bandwidth because it does not "flood" the server with requests; it simply, slowly takes the sockets hostage. You can download the script here: http://ha.ckers.org/slowloris/slowloris.pl
To prevent an attack like this (well, mitigate) see here: https://serverfault.com/questions/32361/how-to-best-defend-against-a-slowloris-dos-attack-against-an-apache-web-server
You could also use a load-balancer or round-robin setup.
Try slowhttptest to test the slow read attack you're describing. (It can also be used to test slow sending of headers.)
We have a fairly busy website (1 million page views/day) using Apache mod_proxy that keeps getting overloaded with connections (>1,000) in the TIME_WAIT state. The connections are to port 3306 (MySQL), but MySQL shows only a few connections (SHOW PROCESSLIST) and is performing fine.
We have tried changing a bunch of things (keep-alive on/off), but nothing seems to help. All other system resources are within a reasonable range.
My searching seems to point at changing tcp_time_wait_interval, but that seems a bit drastic. I've worked on busy websites before and never had this problem.
Any suggestions?
Each TIME_WAIT connection is a connection that has been closed.
You're probably connecting to MySQL, issuing a query, then disconnecting, and repeating that for each query on the page. Consider using a connection-pooling tool, or at the very least a global variable that holds on to your database connection. If you use a global, you'll have to close the connection at the end of the page; hopefully you have somewhere common you can put that, like a footer include.
As a bonus, you should get a faster page load. MySQL is quick to connect, but not having to re-connect is even faster.
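For what it's worth, in Java the pooled version looks roughly like this (the library choice, URL and pool size are assumptions; in PHP the analogue is persistent connections or a pool in front of MySQL):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class Db {
        private static final HikariDataSource POOL;
        static {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:mysql://db-host:3306/app"); // placeholder
            cfg.setMaximumPoolSize(20);                      // caps sockets to MySQL
            POOL = new HikariDataSource(cfg);
        }

        // close() on the returned Connection gives it back to the pool
        // instead of tearing down the TCP socket (so no new TIME_WAIT entry).
        public static Connection get() throws Exception {
            return POOL.getConnection();
        }
    }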
If your client applications are using JDBC, you might be hitting this bug:
http://bugs.mysql.com/bug.php?id=56979
I believe that PHP has the same problem.
Cheers,
Gilles.
We had a similar problem: our web servers all froze up because our PHP code was making connections to a MySQL server that was set up to do reverse host lookups on incoming connections.
When traffic was light it worked fine, but under load the response times shot through the roof and all the Apache servers got stuck in TIME_WAIT.
The way we figured the problem out was by using Xdebug to create profiling data for the scripts under high load and looking at that: the mysql_connect calls took up 80-90% of the execution time.
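For anyone hitting the same thing: the usual way to turn those reverse lookups off is in my.cnf (note that with this set, MySQL grants defined by hostname stop matching, so use IP-based grants):

    [mysqld]
    skip-name-resolve    # do not reverse-resolve client IPs on connect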