What are the implications of disabling the "Limit the Kernel request queue" setting?
If you remove the limit, your server will queue an unlimited number of requests until it runs out of memory, which is obviously never good!
Microsoft recommends that the limit never be removed on a production server, which I believe is good advice.
I'm confused. I've tried to understand this, I've googled for days, and every day I get more confused about concurrent connections.
What I know:
Concurrent connections are the number of open TCP connections; every visitor that sends a request to my website counts as one concurrent connection. Please bear with me, and if I'm wrong, correct me. Thanks.
What I want to know:
Does every server have its own concurrent-connection capacity, depending on the server's resources (CPU, RAM, drive, ...)?
I read that the nginx HTTP server can handle up to 1024 concurrent connections. Does that mean every server can only handle 1024 simultaneous connections? That doesn't make sense to me; today servers can handle C10K, and there are servers that can handle up to C100M connections. Does this 1024 have anything to do with file descriptor limits?
I read that nginx uses event-driven, non-blocking I/O.
I also read that nginx creates one thread that handles requests with an event loop (a thread loops over file descriptors looking for ready I/O; instead of blocking while waiting for I/O to become ready, it goes on to handle another request).
Given the 1024 concurrent connections, can nginx only create 1024 threads per process?
I read that a server's maximum is 65535 concurrent connections because ports are 16-bit?
Please bear with me, and if I'm wrong, correct me.
If you know, can you explain to me in detail how nginx handles a request, from creating processes, threads, and thread pools, and how nginx handles non-blocking I/O? Does nginx use select() with FD_SETSIZE = 1024?
And what is worker_connections, and why is it 1024? What is this 1024?
And please also tell me in detail how Apache works, process to thread, to handle every request.
Thanks for the help; this will really help me with my project to create a non-blocking HTTP server.
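The event-loop model the question describes can be sketched briefly. This is an illustrative Python sketch using the standard `selectors` module (which picks epoll/kqueue where available), not nginx's actual C implementation; note that `worker_connections 1024` is just a configurable default per nginx worker, and `select()`'s FD_SETSIZE (commonly 1024) is one historical reason event-driven servers moved to epoll/kqueue.

```python
# Illustrative sketch: one thread multiplexing several connections with
# readiness notification, instead of one thread (or process) per connection.
import selectors
import socket

def serve_ready(sel, remaining):
    """One pass of the event loop: handle whichever sockets are ready."""
    for key, _ in sel.select(timeout=1):
        sock = key.fileobj
        data = sock.recv(1024)        # socket reported ready: won't block
        sock.sendall(b"echo:" + data)
        sel.unregister(sock)
        remaining.discard(sock)

# Three "clients", all served concurrently by a single thread.
pairs = [socket.socketpair() for _ in range(3)]
sel = selectors.DefaultSelector()
remaining = set()
for server_side, _ in pairs:
    server_side.setblocking(False)    # never block on any one client
    sel.register(server_side, selectors.EVENT_READ)
    remaining.add(server_side)

for i, (_, client_side) in enumerate(pairs):
    client_side.sendall(b"msg%d" % i)

while remaining:                      # the event loop
    serve_ready(sel, remaining)

replies = [client_side.recv(1024) for _, client_side in pairs]
sel.close()
for server_side, client_side in pairs:
    server_side.close()
    client_side.close()
```

The same pattern scales to thousands of sockets because the loop never blocks on any individual connection; the practical per-process ceiling comes from file descriptor limits and configuration, not from a thread count.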
Why do we need to set a limit on HTTP request headers?
Everyone knows that the request header size is limited (Tomcat's 8k default, ...),
but I can't find any information on why it should be.
Is it related to buffer overflow attacks?
Thank you.
HTTP doesn't impose limits.
However, if a server doesn't impose limits, a client could (for example) send an HTTP request with a header that's many gigabytes in size.
If the server did not set a limit, it would have to wait until the client finished sending the header, collecting it in memory the whole time, perhaps even exceeding the server's total memory.
If this were possible, clients could construct HTTP requests that crash servers. To prevent this, servers set limits.
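The failure mode just described is exactly what a header cap prevents. Here is a minimal Python sketch of the idea (the 8 KB figure mirrors the Tomcat default mentioned in the question; a real server would answer 431 Request Header Fields Too Large rather than raise):

```python
import io

MAX_HEADER_BYTES = 8 * 1024   # cap in the spirit of Tomcat's 8 KB default

class HeaderTooLarge(Exception):
    pass

def read_headers(stream, limit=MAX_HEADER_BYTES):
    """Read the HTTP header block, refusing to buffer more than `limit` bytes.

    Without the cap, the server would keep accumulating bytes in memory for
    as long as the client keeps sending header data.
    """
    buf = b""
    while b"\r\n\r\n" not in buf:     # blank line terminates the header block
        chunk = stream.read(1024)
        if not chunk:
            raise ValueError("connection closed before headers ended")
        buf += chunk
        if len(buf) > limit:
            raise HeaderTooLarge("header block exceeds %d bytes" % limit)
    return buf.split(b"\r\n\r\n", 1)[0]

ok = read_headers(io.BytesIO(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n"))
```

A request whose headers never terminate within the cap is rejected early, so a hostile client cannot force the server to buffer gigabytes.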
I think I found the answer to my curiosity in the Apache security book.
Properly configured limits mitigate buffer overflow exploits and help prevent denial-of-service (DoS) attacks.
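For Apache httpd specifically, the limits in question are set with the `LimitRequest*` core directives. An illustrative fragment (the values shown are examples, not recommendations; check the documentation for your httpd version before relying on them):

```apache
# Cap how much of a request Apache will buffer before rejecting it.
LimitRequestLine      8190      # max bytes in the request line
LimitRequestFieldSize 8190      # max bytes in any single header field
LimitRequestFields    100       # max number of header fields
LimitRequestBody      1048576   # max request body in bytes (0 = unlimited)
```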
I have RabbitMQ Server 3.6.0 installed on Windows (I know it's time to upgrade; I've already done that on the other server node).
Heartbeats are enabled on both server and client side (heartbeat interval 60s).
I have had a resource alarm (RAM limit), and after that I observed a rise in the number of TCP connections to the RMQ server.
At the moment there are 18000 connections, while the normal amount is 6000.
Via the management plugin I can see a lot of connections with 0 channels, while our "normal" connections have at least 1 channel.
Even restarting the RMQ server won't help: all the connections re-establish.
1. Does that mean all of them are really alive?
A similar issue was described here: https://github.com/rabbitmq/rabbitmq-server/issues/384, but as far as I can see it was fixed in v3.6.0.
2. Do I understand correctly that before RMQ Server v3.6.0 the behavior after a resource alarm was as follows: several TCP connections could hang on the server side per one real client autorecovery connection?
Maybe important: we have HAProxy between the server and the clients.
3. Could HAProxy explain these extra connections? Maybe it prevents clients from receiving the signal that the connection was closed due to the resource alarm?
Are all of them alive?
Only you can answer this, but I would ask - how is it that you are ending up with many thousands of connections? Really, you should only create one connection per logical process. So if you really have 6,000 logical processes connecting to the server, that might be a reason for that many connections, but in my opinion, you're well beyond reasonable design limits even in that case.
To check, see how many connections decrease when you kill one of your logical processes.
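One way to quantify the zero-channel connections seen in the management UI is to pull the management plugin's HTTP API (`GET /api/connections` returns a JSON list whose objects include a `channels` count) and filter it. A sketch, assuming that field name matches your plugin version's schema:

```python
def suspicious_connections(conns):
    """Return connections with no channels -- likely leaked or stale.

    `conns` is assumed to be the parsed JSON list from the RabbitMQ
    management API endpoint /api/connections; each object is expected
    to carry a `channels` count (verify against your plugin version).
    """
    return [c for c in conns if c.get("channels", 0) == 0]

# Hypothetical sample in the shape the API returns:
sample = [
    {"name": "10.0.0.5:51122 -> 10.0.0.1:5672", "channels": 1},
    {"name": "10.0.0.6:40310 -> 10.0.0.1:5672", "channels": 0},
    {"name": "10.0.0.7:40311 -> 10.0.0.1:5672", "channels": 0},
]
stale = suspicious_connections(sample)
```

If the stale set shrinks when you kill a logical client process, those connections were owned by that process; if it doesn't, something between the client and broker (or a leaked connection in the client) is keeping them alive.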
Do I understand right that before RMQ Server v3.6.0 the behavior after resource alarm was like that: several TCP connections could hang on server side per 1 real client autorecovery connection?
As far as I can tell, yes. It looks like the developer in this case ran across a common problem in sockets: the detection of dropped connections. If I had a dollar for every time someone misunderstood how TCP works, I'd have more money than Bezos. What they found is that someone had made some bad assumptions, when actually a read or write is required to detect a dead socket, and the developer wrote code to (attempt to) handle it properly. It is important to note that this does not look like a very comprehensive fix, so if the same conceptual design problem exists in another part of the code, this bug might still be around in some form. Searching the bug reports, or asking on the support list, might give you a more detailed answer.
Could haProxy be an explanation for this extra connections?
That depends. In theory, HAProxy is just a pass-through. For a connection to be recognized by the broker, it has to go through a handshake, which is a deliberate process and cannot happen inadvertently. Closing a connection also requires a handshake, which is where HAProxy might be the culprit. If HAProxy thinks the connection is dead and drops it without that process, then it could be a contributing cause. But it is not in and of itself making these new connections.
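If HAProxy is in the path, its idle timeouts should comfortably exceed the AMQP heartbeat interval, or it may drop connections that the broker and client still consider alive. An illustrative fragment (directive names are standard HAProxy; the values and addresses are assumptions for this setup):

```haproxy
# Proxying AMQP: timeouts must exceed the 60 s RabbitMQ heartbeat,
# or the proxy can silently sever connections mid-stream.
listen rabbitmq
    bind *:5672
    mode tcp
    timeout connect 5s
    timeout client  3m     # > 60 s heartbeat, with headroom
    timeout server  3m
    option clitcpka        # TCP keepalives toward clients
    option srvtcpka        # ...and toward the broker
    server rmq1 10.0.0.1:5672 check
```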
The RabbitMQ team monitors this mailing list and only sometimes answers questions on StackOverflow.
I recommended that this user upgrade from Erlang 18, which has known TCP connection issues -
https://groups.google.com/d/msg/rabbitmq-users/R3700QdIVJs/taDYKI6bAgAJ
I've managed to reproduce the problem: in the end it was a bug in the way our client used RMQ connections.
It created one auto-recovery connection (nothing wrong with that), and sometimes it created a separate, plain connection for "temporary" purposes.
The steps to reproduce my problem were:
1. Reach the memory alarm in RabbitMQ (e.g. set an easily reached RAM limit and push a lot of big messages). Connections will be in state "blocking".
2. Start sending a message from our client over this new "temp" connection.
3. Ensure the connection is in state "blocked".
4. Without eliminating the resource alarm, restart the RabbitMQ node.
The "temp" connection was still there! Despite the fact that auto-recovery was not enabled for it. And it continued sending heartbeats, so the server didn't close it.
We will fix the client to always use one and only one connection.
And of course we will upgrade Erlang as well.
I am using Apache HTTP Server 1.3.29
I am currently working with an Apache server that is producing this error:
Internal Server Error 500
Exception: EWebBrokerException
Message: Maximum number of concurrent connections exceeded. Please try again later
This message appears when many users are using the system, but I don't know how many connections it takes to cause this.
I need help optimizing the server to support more connections/accesses.
Here is a link to the server's httpd.conf (only the important parts):
http://www.codesend.com/view/8fd87e7d6cc1c94eee30a8c45981e162/
Thanks!
It's not for lack of machine resources. The server has 16 GB of RAM and a good processor; the problem occurs when consumption isn't even at 30%. Maybe some adjustment in Apache is needed; that is the help I'm seeking here.
Occasionally, when a user tries to connect to the myPHP web interface on one of our web servers, the request will time out before they're prompted to log in.
Is the timeout configured on the server side or within their web browser?
Can you tell me how to increase the amount of time it waits before timing out when this happens?
Also, what logs can I look at to see why their request takes so long from time to time?
This happens on all browsers. They are connecting to myPHP in a LAMP configuration on CentOS 5.6.
Normally, when you hit an execution-time limit with LAMP, it's actually PHP's own execution timeout that needs to be adjusted, since both Apache's default and the browsers' defaults are much higher.
Edit: There are a couple more settings of interest to avoid certain other problems re: memory use and parsing time, they can be found at this link.
Typically speaking, if PHP is timing out on the defaults, you have larger problems than the timeout itself (problems connecting to the server itself, poor coding with long loops).
Joachim is right concerning the PHP timeouts, though: you'll need to edit php.ini to increase PHP's own timeout before troubleshooting anything else on the server. However, I would suggest trying to find out why people are hitting the timeout in the first place.
max_execution_time = 30
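For reference, the related php.ini directives look like this; the values below are illustrative, not recommendations, and raising the 30 s default is only sensible after confirming the slow requests are legitimate:

```ini
; Per-request limits in php.ini (names are standard PHP directives;
; values here are example settings, not defaults).
max_execution_time = 60       ; script run-time limit in seconds
max_input_time = 60           ; time allowed to parse request input
memory_limit = 128M           ; per-request memory ceiling
default_socket_timeout = 60   ; timeout for socket-based streams
```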