I have two Rails apps running on Linode; the OS is Ubuntu with an nginx server. The instance on the subdomain is giving trouble: it goes down after about a day. After restarting the server, it works fine again.
The error log says: "*1 upstream timed out (110: Connection timed out) while reading response header from upstream".
I googled the problem and found that increasing the proxy_read_timeout value should solve it, but I am unable to find the root cause.
Is this an issue of over-utilization of resources? I have 24 GB of storage and 512 MB of RAM, as shown in the Linode manager. I have 10 cron jobs in total (5 in each app), and they all start at the same time. Could that be the issue?
Please tell me the cause and how to fix it.
It definitely sounds like a resource issue, or perhaps something else is killing or hogging your app. Generally, the upstream request is a request from the web server to the app server, so if your app is doing something wonky, that would cause the timeout. I'm not sure what the default timeout is, but I'm guessing it's rather short. Increasing the timeout would at least buy you time to look at the system resources and the process stack to figure out what's going on.
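If you do raise the timeouts while you investigate, the relevant nginx directives go in the location (or server) block that proxies to the Rails app. A minimal sketch; the vhost path, upstream name, and the 300-second values are assumptions for illustration, not recommendations:

```nginx
# /etc/nginx/sites-available/your_app  -- hypothetical vhost
location / {
    proxy_pass http://rails_app;     # your upstream / app server
    proxy_connect_timeout 300s;      # time allowed to establish the upstream connection
    proxy_read_timeout    300s;      # max gap between two reads of the upstream response
    proxy_send_timeout    300s;      # max gap between two writes of the request upstream
}
```

Reload nginx after editing (`nginx -t && service nginx reload`). Keep in mind this only masks the symptom; the app is still taking too long to answer.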
Env:
Rails 5.0.5
Redis server v=4.0.1
There is nothing special on the server side. The problem is that a user receives many pings for the same message (duplicates?).
Redis.new(url: 'redis://:auth@ip:port/db_number').pubsub('channels', 'action_cable/*') doesn't show any extra connections.
Where could the problem be? Redis, or something wrong with the settings of the app?
I restarted the Redis server and the problem went away (it wasn't easy to try this earlier, because the problem only existed on the production server).
I have some issues starting and stopping a RabbitMQ server installed on CentOS 7. The server works well when connected to the internet, taking only 1 or 2 seconds to start. But when disconnected from the internet, it takes about 3-5 minutes to either start or stop. There seems to be nothing wrong in the access log.
I also checked the configuration file, but couldn't find any settings related to server startup. Can you give me some help?
I am using Apache HTTP Server 1.3.29
I am currently working with an Apache server that is experiencing this error:
Internal Server Error 500
Exception: EWebBrokerException
Message: Maximum number of concurrent connections exceeded. Please try again later
This message appears when many users are using the system, but I don't know how many connections it takes to trigger it.
I need help optimizing the server to support more connections/accesses.
Here is the link to the server httpd.conf view (only the important parts):
http://www.codesend.com/view/8fd87e7d6cc1c94eee30a8c45981e162/
Thanks!
It's not a lack of machine resources: the server has 16 GB of RAM and a fast processor, and the problem occurs when utilization isn't even at 30%. Maybe some adjustment in Apache is needed; that is the help I'm seeking here.
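Two things worth checking. First, EWebBrokerException is raised by the application layer (the WebBroker framework), so the connection cap may well be a MaxConnections-style setting in the application itself rather than anything Apache enforces; that is worth ruling out before touching httpd.conf. Second, if Apache's own process limits are the bottleneck, these are the Apache 1.3 directives that govern them; the values below are illustrative examples, not recommendations:

```apacheconf
# httpd.conf -- Apache 1.3 process limits (example values, tune for your load)
StartServers         10
MinSpareServers      10
MaxSpareServers      30
MaxClients          256    # hard cap on simultaneous connections Apache will serve
MaxRequestsPerChild 1000   # recycle children periodically to contain memory growth
```

MaxClients is the one that directly limits concurrent connections; raising it trades RAM per child for concurrency, so check memory use per httpd process before going higher.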
We are facing a problem with one of our Drupal sites hosted on Apache. Today, suddenly, there were 150 httpd2-prefork processes running, which took the site down. After every Apache restart it comes up for a short while; then the number of processes grows to the maximum again and the site goes down. Can anyone help here?
Look at your server logs and see who your visitors are (real people? bots?), where they're coming from (IP), and what pages they're visiting.
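As a starting point, a quick tally of requests per client IP usually makes the offenders obvious. A sketch using a here-doc with fake entries in place of your real access log; the log path and combined log format are assumptions, so point awk at your vhost's actual log:

```shell
# Count requests per client IP (first field of a combined-format access log),
# most active first. Replace the here-doc with e.g. /var/log/apache2/access_log.
awk '{print $1}' <<'LOG' | sort | uniq -c | sort -rn | head
203.0.113.9 - - [10/Oct/2017:13:55:36 +0000] "GET /node/1 HTTP/1.1" 200 512
203.0.113.9 - - [10/Oct/2017:13:55:37 +0000] "GET /node/2 HTTP/1.1" 200 1024
198.51.100.4 - - [10/Oct/2017:13:55:38 +0000] "GET / HTTP/1.1" 200 512
LOG
```

The same pipeline with `{print $7}` instead of `{print $1}` tallies the most-requested URLs, which helps tell a crawl loop apart from a genuine traffic spike.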
If it's just a few offenders, you can try blocking them. If it's a distributed attack, it's going to be more difficult.
Do you have any modules installed to help block bad people/things? I recommend: Honeypot and Bad Behavior.
Occasionally when a user tries to connect to the myPHP web interface on one of our web servers, the request will time out before they're prompted to login.
Is the timeout time configured on the server side or within their web browser?
Can you tell me how to increase the amount of time it waits before timing out when this happens?
Also, what logs can I look at to see why their request takes so long from time to time?
This happens on all browsers. They are connecting to myPHP in a LAMP configuration on CentOS 5.6.
Normally when you hit an execution-time limit with LAMP, it's actually PHP's own execution timeout that needs to be adjusted, since both Apache's default and the browsers' defaults are much higher.
Edit: There are a couple more settings of interest to avoid certain other problems re: memory use and parsing time, they can be found at this link.
Typically speaking, if PHP is timing out on the defaults, you have larger problems than the timeout itself (problems connecting to the server itself, poor coding with long loops).
Joachim is right about the PHP timeouts, though: you'll need to edit php.ini to increase PHP's own timeout before troubleshooting anything else on the server. However, I would suggest trying to find out why people are hitting the timeout in the first place.
max_execution_time = 30;
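If you need more headroom while you investigate, the limit can be raised in php.ini (restart Apache afterwards). The 60-second values below are arbitrary examples, not recommendations:

```ini
; php.ini -- PHP's per-request execution limit (the default is 30 seconds)
max_execution_time = 60
; related limit that sometimes bites at the same time (time spent parsing input)
max_input_time = 60
```

Run `php --ini` or check phpinfo() to confirm which php.ini file is actually loaded before editing.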