This question is a follow-up to my previous question about scaling on Heroku. What I've noticed is that when I use my app it doesn't feel quite smooth - plugins like YSlow consistently tell me that the majority of the time is being spent on the server side generating the HTML. New Relic seems to show that my app is spending a lot of time in Request Queuing, as shown here:
and here:
However I also have this bit of information showing me:
That seems like a really, really big discrepancy between the 10.7 ms processing time on the server and the 1.3 s response time the user is experiencing. What does this mean? What is the best way to reduce the latency for the user? (Again, I'm a complete newbie and all help is much appreciated.)
You should switch to Unicorn. This will only work on the Cedar stack! It essentially gives you more dynos without buying any more.
Quick synopsis of how:
In your Gemfile:
gem 'unicorn'
Create Rails.root/config/unicorn.rb
worker_processes 4 # amount of unicorn workers to spin up
timeout 30 # restarts workers that hang for 30 seconds
Create Rails.root/Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Commit and push it out to Heroku, and then you need to tune it (experiment with the worker_processes number). There is a memory limit per dyno; if you have too many workers and hit that limit, the dyno will slow to a crawl.
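If you go a step further and preload the app, each worker needs its own database connection. A fuller sketch of config/unicorn.rb, assuming Rails with ActiveRecord (the fork hooks are a common Heroku pattern, not something specific to this app):

# config/unicorn.rb - sketch assuming Rails with ActiveRecord
worker_processes 4   # number of Unicorn workers; tune to your dyno's memory
timeout 30           # restart workers that hang for 30 seconds
preload_app true     # load the app once, then fork, to share memory

before_fork do |server, worker|
  # The master's DB connection must not be shared with forked workers.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker opens its own DB connection.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end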
Reference this article here for more detail: http://michaelvanrooijen.com/articles/2011/06/01-more-concurrency-on-a-single-heroku-dyno-with-the-new-celadon-cedar-stack/
As part of the performance testing of a Drupal web application and its infrastructure, I installed sensors on all the hardware and middleware bricks, but I don't know how to collect PHP metrics from mod_php (response time between Apache and mod_php, ...).
Thank you for your help.
You've not said what you are trying to achieve with this monitoring/testing. mod_php (like all Apache modules) is a shared library, hence there is no measurable response time between Apache and mod_php.
Measuring request response time on the webserver is a good idea - it gives some insight into infrastructure problems but tells you very little about what performance the client is experiencing (at least it shouldn't). Hint: you should be logging and analysing %D. This is where RUM comes in - although this is hard to implement in a test environment.
It is possible to project data from PHP into Apache using apache_note() (e.g. for logging memory_get_peak_usage()).
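For example, a sketch (the note name peak_mem and the log format are illustrative):

<?php
// Push PHP's peak memory usage into an Apache note at the end of each
// request, so it can be logged next to the request duration (%D).
register_shutdown_function(function () {
    if (function_exists('apache_note')) {
        apache_note('peak_mem', (string) memory_get_peak_usage(true));
    }
});

// Matching httpd.conf lines:
//   LogFormat "%h \"%r\" %>s %D %{peak_mem}n" perf
//   CustomLog logs/perf_log perf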
Other things you should plumb in:
an opcode cache manager
SAR
mod_status (example below)
...and capture the data.
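For instance, mod_status can be enabled with something like this (the location and access rules are illustrative, using Apache 2.2 syntax):

# httpd.conf - illustrative mod_status setup
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    # Restrict access to the test machine itself
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>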
I have this error in the error.log of my Apache server:
[error] (12)Cannot allocate memory: fork: Unable to fork new process
I don't know where to start looking for the problem.
How can I find out how many forked processes have been started?
How can I tell what script is running in each forked process?
How can I measure the memory cost of each forked process?
Any other ideas for finding a solution?
This error occurs regularly. Restarting the server fixes it, but it comes back shortly after, so I need to find a better solution.
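A few commands that can help answer those questions (the process name apache2 and the syslog path are Debian/Ubuntu defaults; use httpd and /var/log/messages on RHEL/CentOS):

# How many Apache processes are currently running?
ps -C apache2 --no-headers | wc -l

# Per-process memory (RSS in KB), sorted, to spot heavy workers:
ps -ylC apache2 --sort=rss

# Overall memory and swap usage:
free -m

# Look for fork failures and the kernel OOM killer:
grep -iE 'fork|oom' /var/log/syslog | tail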
Error : "Unable to fork: Cannot allocate memory" while loggin to VPS,
You usually get that error when your VPS runs out of resources, especially RAM.
As an immediate workaround, you can restart the VPS to get the RAM usage down so you can temporarily log in.
I had the same problem. To fix it there are a few options:
1. Move from micro instances to small ones - this was the change that solved the problem for me (micro instances on Amazon tend to have high CPU steal time).
2. Tune the MySQL database server configuration and the Apache configuration to use a lot less memory.
3. Some suggest it is caused by insufficient swap space. Without swap, the system has to refuse fork operations even if it has sufficient free RAM (see the swap file sketch below).
A tuning guide for a low-memory situation such as this one: http://www.narga.net/optimizing-apachephpmysql-low-memory-server/ (but don't follow its suggestion to use MyISAM tables - horrible...)
These options will make the problem happen much, much less often. I am still looking for a better solution to close the processes that are done and kill the ones that hang around.
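If you want to try option 3, here is a minimal sketch for adding a 1 GB swap file (the size and path are examples; run as root):

# Create and enable a 1 GB swap file
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab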
In my case, my Apache logs were too big and not enough space was free on disk...
Have to think about archiving logs!
I am running a simple server app to receive uploads from a fine-uploader web client. It is based on the fine-uploader Java example and is running in Tomcat6 with Apache sitting in front of it and using ProxyPass to route the requests. I am running into an occasional problem where the upload gets to 100% but ultimately fails. In the server logs, as well as on the client, I can see that Apache is timing out on the proxy with a 502 error.
After trying and seeing this myself, I realized the problem occurs with really large files. The Java server app was taking longer than 30 seconds to reassemble the chunks into a single file, so Apache would kill the connection and stop waiting. I have increased the Apache Timeout to 300 seconds, which should largely correct the problem, but the potential remains.
Any ideas on other ways to handle this so that the connection between Apache and Tomcat is not killed while the app is assembling the chunks on the server? I am currently using 2 MB chunks and was thinking maybe I should use a larger chunk size. Perhaps with fewer chunks to assemble, the server code could do it faster. I could test that, but unless the speedup is dramatic it seems the potential for problems remains and will just be waiting for a large enough upload to come along to trigger them.
It seems like you have two options:
Remove the timeout in Apache.
Delegate the chunk-combination effort to a separate thread, and return a response to the request as soon as possible.
With the latter approach, you will not be able to let Fine Uploader know if the chunk combination operation failed, but perhaps you can perform a few quick sanity checks before responding, such as determining if all chunks are accessible.
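A minimal sketch of the second approach (the chunk-file naming scheme and the class itself are assumptions for illustration, not Fine Uploader's API):

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChunkCombiner {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    // Called after the final chunk arrives; assumes chunks were saved as
    // files named "0" .. "totalChunks - 1" in chunkDir.
    public void combineAsync(Path chunkDir, Path target, int totalChunks)
            throws IOException {
        // Quick sanity check before responding: are all chunks present?
        for (int i = 0; i < totalChunks; i++) {
            if (!Files.exists(chunkDir.resolve(String.valueOf(i)))) {
                throw new FileNotFoundException("missing chunk " + i);
            }
        }
        // Combine on a background thread so the HTTP response returns
        // immediately, staying well under Apache's proxy timeout.
        pool.submit(() -> {
            try (OutputStream out = Files.newOutputStream(target)) {
                for (int i = 0; i < totalChunks; i++) {
                    Files.copy(chunkDir.resolve(String.valueOf(i)), out);
                }
            } catch (IOException e) {
                e.printStackTrace(); // log it; the client can no longer be told
            }
        });
    }
}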
There's nothing Fine Uploader can do here; the issue is server-side. After Fine Uploader sends the request, its job is done until your server responds.
As you mentioned, it may be reasonable to increase the chunk size or make other changes to speed up the chunk combination operation to lessen the chance of a timeout (if #1 or #2 above are not desirable).
I have a small app on Heroku's cedar stack that uses two processes. One runs the Sinatra server and the other collects tweets and inserts them into a database. This project is still in development and while Heroku offers one process for free, the second process costs money.
I'd like to keep the Sinatra server running but suspend the tweet collector process from time to time. If you run heroku stop tweet_collector.1, it will temporarily stop the process, but then it appears the Procfile restarts it. I haven't found a way to comment out processes in the Procfile, so I've simply deleted the process from the file and pushed it.
Can you override the Procfile from the command line and stop a process? If not, how can you comment out a process in the Procfile so it's not read?
I believe you can scale any of your Procfile entries to zero using heroku scale:
heroku scale web=0
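For this app, that would look something like this (assuming the Procfile entry is named tweet_collector):

# Suspend the collector while leaving the web process running
heroku scale tweet_collector=0

# Bring it back later
heroku scale tweet_collector=1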
More information here: http://devcenter.heroku.com/articles/procfile
When using XAMPP (1.7.5 Beta) under Windows 7 (Ultimate, version 6.1, build 7600), it takes several seconds before pages actually show up. During these seconds, the browser shows "Waiting for site.localhost.com..." and Apache (httpd.exe, version 2.2.17) has 99% CPU load.
I have already tried to speed things up in several ways:
Uncommented "Win32DisableAcceptEx" in xampp\apache\conf\extra\httpd-mpm.conf
Uncommented "EnableMMAP Off" and "EnableSendfile Off" in xampp\apache\conf\httpd.conf
Disabled all firewall and antivirus software (Windows Defender/Windows Firewall, Norton AntiVirus).
In the hosts file, commented out "::1 localhost" and uncommented "127.0.0.1 localhost".
Executed (via cmd): netsh; interface; portproxy; add v6tov4 listenport=80 connectport=80.
Even disabled IPv6 completely, by following these instructions.
The only place where "HostnameLookups" is set, is in xampp\apache\conf\httpd-default.conf, to: Off.
Tried PHP in CGI mode by commenting out (in httpd-xampp.conf): LoadFile "C:/xampp/php/php5ts.dll" and LoadModule php5_module modules/php5apache2_2.dll.
None of these possible solutions had any noticeable effect on the speed. Does Apache have difficulty trying to find the destination host ('gethostbyname')? What else could I try to speed things up?
Read over Magento's Optimization White Paper; although it mentions Enterprise, the same methodologies will and should be applied. Magento is by no means simplistic and can be very resource-intensive. Like some others mentioned, I normally run within a virtual machine on a LAMP stack and have all my optimizations (both at the server application level and on the Magento level) preset on a base install of Magento. Running an opcode cache like eAccelerator or APC can help improve load times. Keeping Magento's caching layers enabled can help as well, but can cripple development if you forget it's enabled; however, there are lots of tools available that can clear the cache for you, from a single command line to a tool like Alan Storm's Commerce Bug.
EDIT
Optimization Whitepaper link:
https://info2.magento.com/Optimizing_Magento_for_Peak_Performance.html
Also, with PHP 7 now including OPcache, enabling it with default settings (with timestamp checks on) along with AOE_ClassPathCache can help disk I/O performance.
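For example, a starting point in php.ini (the values are illustrative and should be tuned to your code base):

; Illustrative OPcache settings for a Magento code base
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=16229
; Keep timestamp checks on so code changes are picked up in development
opcache.validate_timestamps=1
opcache.revalidate_freq=2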
If you are using an IDE with class lookups, keeping a local copy of the code base you are working on can greatly speed up indexing in IDEs like PhpStorm/NetBeans/etc. Atwix has a good article on Docker with Magento:
https://www.atwix.com/magento/docker-development-environment/
Some good tools for local Magento 1.x development:
https://github.com/magespecialist/mage-chrome-toolbar
https://github.com/EcomDev/EcomDev_LayoutCompiler.git
https://github.com/SchumacherFM/Magento-OpCache.git
https://github.com/netz98/n98-magerun
Use a connection profiler like Chrome's to see whether this is actually a lookup issue, or whether you are waiting for the site to return content. Since you tagged this question Magento, which is known for slowness before you optimize it, I'm guessing the latter.
Apache runs some very major sites on the internet, and they don't have several-second delays, so the answer to your question about Apache is most likely no. Furthermore, DNS lookup happens between your browser and a DNS server, not the target host. Once the request is sent to the target host, you wait for a rendered response from it.
Take a look at the several questions about optimizing Magento sites on SO and you should get some ideas on how to speed your site up.