Nginx vs Cherokee as a replacement for Apache

How do Nginx and Cherokee compare in terms of memory usage and performance? My VPS serves Drupal 6.16, Magento 1.4.1 and CS-Cart 2.0.15 with Apache 2's prefork MPM. Apache eats my memory even though my sites have a pretty low-traffic profile (htop shows that each Apache process uses 18% of memory). If I switch from Apache to Nginx or Cherokee, will I face any compatibility issues with Magento, CS-Cart or Drupal? Which one is the most compatible? I'd really appreciate any production-system experience. Thanks.

You can greatly reduce the memory consumption of your VPS by installing a PHP accelerator such as eAccelerator. In most cases the Apache web server will perform just fine. You might need to tweak it to optimize for your specific setup, though, and you'll need to do some reading on that, since there is no silver bullet when it comes to tuning.
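If you go the eAccelerator route, a minimal php.ini sketch looks roughly like the following (assuming the extension is installed as a shared module; the path and sizes are illustrative, not recommendations):

    ; php.ini (or a conf.d snippet): illustrative eAccelerator settings
    extension="eaccelerator.so"
    eaccelerator.shm_size="16"                        ; MB of shared memory for compiled scripts
    eaccelerator.cache_dir="/var/cache/eaccelerator"  ; must exist and be writable by the web server
    eaccelerator.enable="1"
    eaccelerator.optimizer="1"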

Try Hiawatha: http://www.hiawatha-webserver.org/. Its UrlToolkit is far more advanced than Apache's mod_rewrite. Yes, the frameworks you mentioned will run fine with Hiawatha. Tested it myself.

Take a look at the post below for some memory-related measurements for Apache, Cherokee and Nginx. You can also google around for similar results. However, I would recommend running such tests against your own typical pages to see how each server fits your use case.
Benchmark of django deployment techniques

Related

Varnish: how many req per second peak to (reasonably) expect?

We're experiencing a strange problem with our current Varnish configuration.
4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM)
3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM)
The 3 Varnish servers have a pretty much standard, vanilla configuration: the only things we changed were vcl_recv and vcl_fetch, in order to handle the session cookies. They are currently configured to use an in-memory cache, but we already tried switching to an HDD cache on a high-performance RAID drive, with exactly the same results.
We had put this in place almost two years ago without problems on our old web farm, and everything worked like a charm. Now, using the machines described above and after a clean reinstall, our customers are experiencing a lot of connection problems (pending requests on clients, 404 errors, missing files, etc.) when our websites are under heavy traffic. From the console log we can clearly see that these issues start happening when each Varnish reaches roughly 700 requests per second: it just seems like they can't handle anything more. We can easily reproduce the critical scenario at any time by shutting down one or two Varnish servers and seeing how the others react: they always start to skip beats every time the requests-per-second count reaches 700. Considering what we've experienced in the past, and looking at the Varnish specs, this doesn't seem normal at all.
We're trying to improve our Varnish servers' performance and/or understand where the problem actually is. To do that, we could really use some kind of benchmark from other companies who are using Varnish in a similar fashion, to help us understand how far we are from the expected performance (I assume we are).
EDIT (added CFG files):
This is our default.vcl file.
This is the output of the varnishadm param.show console command.
I'll also try to post a small part of our varnishlog file.
Thanks in advance,
To answer the question in the headline: A single Varnish server with the specifications you describe should easily serve 20k+ requests/sec with no other tuning than increasing the number of threads.
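For reference, on Ubuntu the thread count is usually raised through the startup parameters, along these lines (only the -p thread_* lines are the point; the other values just mirror a stock install and are illustrative, not a recommendation for your traffic):

    # /etc/default/varnish (Ubuntu)
    DAEMON_OPTS="-a :6081 \
                 -T localhost:6082 \
                 -f /etc/varnish/default.vcl \
                 -S /etc/varnish/secret \
                 -s malloc,2G \
                 -p thread_pools=2 \
                 -p thread_pool_min=200 \
                 -p thread_pool_max=4000"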
You don't give enough information (vcl, varnishlog) to answer your remaining questions.
My guess would be that you somehow end up serialising the backend requests. Check out your hit_for_pass objects and make sure they have a valid TTL set. (120s is fine)
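For what it's worth, in Varnish 3 a vcl_fetch along these lines gives uncacheable responses a hit_for_pass object with a real TTL, so requests for the same URL are not serialised behind a single backend fetch; the condition below is just a placeholder for whatever your VCL actually checks:

    # Varnish 3 sketch: hit_for_pass objects with an explicit TTL
    sub vcl_fetch {
        if (beresp.http.Set-Cookie) {
            set beresp.ttl = 120s;       # lifetime of the hit_for_pass object
            return (hit_for_pass);
        }
    }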

Bottle WSGI server vs Apache

I don't actually have a problem; I'm just a bit curious about a few things.
I made a Python web framework based on Bottle (http://bottlepy.org/). Today I tried a quick comparison of the Bottle WSGI server's performance versus Apache's. I'm working on Lubuntu 12.04 with Apache 2, Python 2.7 and the Bottle development version (0.12), and got this surprising result:
As stated in the Bottle documentation, the included WSGI server is only intended for development purposes. The question is, why is the development server faster than the deployment one (Apache)?
As far as I know, a development server is usually slower, since it provides some "debugging" features.
Also, I never got a response in less than 100 ms when developing PHP applications. But look, it is just 13 ms with Bottle.
Can anybody please explain this? It just doesn't make sense to me. A deployment server should be faster than the development one.
Development servers are not necessarily faster than production-grade servers, so such an answer is a bit misleading.
The real reason in this case is likely going to be due to lazy loading of your web application on the first request that hits a process. Especially if you don't configure Apache correctly, you could hit this lazy loading quite a bit if your site doesn't get much traffic.
I would suggest you go watch my PyCon talk which deals with some of these issues.
http://lanyrd.com/2013/pycon/scdyzk/
Especially make sure you aren't using prefork MPM. Use mod_wsgi daemon mode in preference.
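As a rough sketch, daemon mode boils down to a few directives in the virtual host; "myapp" and the paths here are placeholders for your own application:

    # Apache sketch for mod_wsgi daemon mode
    WSGIDaemonProcess myapp processes=2 threads=15
    WSGIProcessGroup myapp
    WSGIScriptAlias / /srv/myapp/app.wsgi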
A deployment server should be faster than the development one.
True. And it generally is faster... in a "typical" web server environment. To test this, try spinning up 20 concurrent clients and have them make continuous requests to each version of your server. You see, you've only tested one request at a time, which is certainly not a typical web environment. I suspect you'll see different results (we're thinking of both latency AND throughput here) with tens or hundreds of concurrent requests per second.
To put it another way: At 10, 20, 100 requests per second, you might still see ~200ms latency from Apache, but you'd see much worse latency from Bottle's server.
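A quick way to run that kind of test is ApacheBench; something like the following, assuming your two deployments are reachable at these example addresses:

    # 1000 requests, 50 at a time, against each deployment (URLs are placeholders)
    ab -n 1000 -c 50 http://127.0.0.1:8080/   # Bottle's built-in wsgiref server
    ab -n 1000 -c 50 http://127.0.0.1/        # Apache + mod_wsgi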
Incidentally, the Bottle docs do refer to concurrency:
The built-in default server is based on wsgiref WSGIServer. This non-threading HTTP server is perfectly fine for development and early production, but may become a performance bottleneck when server load increases.
It's also worth noting that Apache is doing a lot more than the Bottle reference server is (checking .htaccess files, dispatching to child process/thread, robust logging, etc.) and all those features necessarily add to request latency.
Finally, I'd ask whether you tuned the Apache installation. It's possible that you could configure it to be faster than it is now, e.g. by tuning the MPM, simplifying logging, disabling .htaccess checks.
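As a rough sketch of that kind of tuning (Apache 2.2 with the prefork MPM; the numbers are illustrative for a small box, don't copy them blindly):

    <IfModule mpm_prefork_module>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       4
        MaxClients           20
        MaxRequestsPerChild 500
    </IfModule>

    # Skip per-request .htaccess lookups
    <Directory /var/www>
        AllowOverride None
    </Directory>

    # Keep logging lightweight
    LogLevel warn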
Hope this helps. And if you do run a concurrent benchmark, please do share the results with us.

Server Setup: Based on Apache and Tomcat needs

I'm trying to set up a server based on our needs for a new website. Basically, I need to build a website based on SocialEngine, and according to the platform's requirements (found here: http://www.socialengine.net/support/documentation/article?q=152&question=SocialEngine-Requirements) it requires the web server to be Apache-based.
Now my issue comes with the addition of a web application that needs to be included in the site. The web application requires the server to be capable of Asynchronous Request Processing, and is currently only supported by Tomcat or GlassFish.
I found a couple of tutorials, such as this one http://www.serverwatch.com/tutorials/article.php/2203891/Integrating-Tomcat-with-Apache.htm, that explain how to "integrate" Tomcat into Apache. Would a server running Tomcat alone be able to handle the applet's needs as well as serve the (HTTP) needs of the SocialEngine platform that Apache would normally cover? Are there any hosting providers any of you would recommend?
Although I've done a lot of front-end stuff before, this is the first time I have to deal with any of the back-end details, so my knowledge of server-side functionality is really poor. Please let me know if I'm not asking the right questions.
Thanks
You wouldn't really be able to use Tomcat for both apps, since the other one needs PHP. It's pretty common to have both Tomcat and Apache running on the same server. You might want to look up more recent documentation on mixing them, even this, but definitely have a look at mod_proxy_ajp.
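With mod_proxy_ajp, the split looks roughly like this on the Apache side; "/webapp" and port 8009 are the usual Tomcat AJP defaults, so adjust them to your setup:

    # Forward only the Java app's context path to Tomcat's AJP connector;
    # everything else stays with Apache + PHP.
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

    ProxyPass /webapp ajp://localhost:8009/webapp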
What's the other application? It's a little tricky to set up Asynchronous Request Processing if you are new to server apps, but there is also a lot of documentation, so if you're game, you can probably figure it out OK. You might also want to see if that app would work with node.js (hosting info here)
If you want to set it all up yourself, you could get a virtual private server from Rackspace Cloud or a similar host. Alternatively, you could get a shared host that has the required apps already set up, which would limit your ability to customize the environment and may require two hosting plans, but would be easier to set up. It also depends somewhat on whether both apps need to be on the same machine and/or on the same domain for any reason.
A regular LAMP stack will run SE4 just fine; however, you will need to do some tuning to get the page loads under 3 seconds. You will want to remove any Apache modules that you aren't using with a2dismod. For instance, if you're not using any Ruby on the site, run a2dismod ruby. This will help get memory usage under control. APC is a must.
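On Debian/Ubuntu that boils down to something like this (the module names are just examples; check what's actually loaded first):

    # check which modules are loaded, then disable the ones you don't need
    apache2ctl -M
    sudo a2dismod ruby
    sudo a2dismod autoindex

    # install the APC opcode cache (Debian/Ubuntu package name at the time)
    sudo apt-get install php-apc
    sudo service apache2 restart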
For a much more in-depth read on tuning PHP/Apache, please read this: Performance tuning on Apache, PHP, MySQL, WordPress v1.1 – Updated

Can I Replace Apache with Node.js?

I have a website running on CentOS using the usual suspects (Apache, MySQL, and PHP). Since the time this website was originally launched, it has evolved quite a bit and now I'd like to do fancier things with it, namely real-time notifications. From what I've read, Apache handles this poorly. I'm wondering if I can replace just Apache with Node.js (so instead of "LAMP" it would be "LNMP").
I've tried searching online for a solution, but haven't found one. If I'm correctly interpreting the things that I've read, it seems that most people are saying that Node.js can replace both Apache and PHP together. I have a lot of existing PHP code, though, so I'd prefer to keep it.
In case it's not already obvious, I'm pretty confused and could use some enlightenment. Thanks very much!
If you're prepared to re-write your PHP in JavaScript, then yes, Node.js can replace your Apache.
If you place an Apache or NGINX instance running in reverse-proxy mode between your servers and your clients, you could handle some requests in JavaScript on Node.js and some requests in your Apache-hosted PHP, until you can completely replace all your PHP with JavaScript code. This might be the happy medium: do your WebSockets work in Node.js, more mundane work in Apache + PHP.
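As an illustration of that happy medium, an Nginx front end could split the traffic roughly like this (the ports, the /socket.io/ path and the idea that Apache has been moved to 8080 are all assumptions for this sketch):

    # nginx front end: real-time/socket.io traffic to Node.js, the rest to Apache+PHP
    server {
        listen 80;

        location /socket.io/ {
            proxy_pass http://127.0.0.1:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;   # allow the WebSocket upgrade
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }

        location / {
            proxy_pass http://127.0.0.1:8080;         # Apache serving the existing PHP app
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }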
Node.js may be faster than Apache thanks to its evented/non-blocking architecture, but you may have problems finding modules/libraries which substitute for some of Apache's functionality.
Node.js itself is a lightweight low-level framework which enables you to relatively quickly build server-side stuff and real-time parts of your web applications, but Apache offers much broader configuration options and "classical" web server oriented features.
I would say that unless you want to replace PHP with a Node.js-based web application framework like Express.js, you should stay with Apache (or think about migrating to Nginx if you have performance problems).
I believe Node.js is the future in web serving, but if you have a lot of existing PHP code, Apache/MySQL are your best bet. Apache can be configured to proxy requests to Node.js, or Node.js can proxy requests to Apache, but I believe some performance is lost in both cases, especially in the first one. Not a big deal if you aren't running a very high traffic website though.
I just registered on Stack Overflow and can't comment on the accepted answer yet, but today I created a simple Node.js script that actually uses sendfile() to serve files over HTTP. (The existing example that the accepted answer links to only uses the bare TCP protocol to send the file, and I could not find an example for HTTP, so I wrote it myself.)
So I thought someone might find this useful. Serving files through the sendfile() OS call is not necessarily faster than when data is copied through "user land", but it uses less CPU and RAM, and is thus able to handle a larger number of connections than the classic way.
The link: https://gist.github.com/1350901
Previous SO post describing exactly what I'm saying (PHP + socket.io + Node)
I think you could put up a Node server on somehost:8000 with socket.io, slap the socket.io client code into <script> tags, and with minimal work get your existing app rocking with socket.io (real-time, baby) without a ton of effort.
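A minimal sketch of the server side, assuming Socket.IO 1.x installed via npm; the port and event names are made up for illustration, and on the client your PHP pages would load /socket.io/socket.io.js from somehost:8000 and call io('http://somehost:8000'):

    // server.js: standalone Socket.IO server on port 8000
    var io = require('socket.io')(8000);

    io.on('connection', function (socket) {
        socket.emit('news', { hello: 'world' });   // push something to the browser
        socket.on('reply', function (data) {       // and receive something back
            console.log(data);
        });
    });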
While Node can be your only backend server, remember that Node likes to live up to its name and become a node. I checked out a talk a while back that Ryan Dahl gave to a PHP users' group, and he mentioned that the name relates to a vision of several node processes doing work and talking with each other.
It's LAMP versus MEAN nowadays. For a direct comparison see http://tamas.io/what-is-the-mean-stack.
Of course M, E and A are somewhat variable. For example, the more recent Koa may replace (E)xpress.
However, just replacing Apache with Node.js is probably not the right way to modernize your web stack.

Which to use on low spec CPU?

I have a dedicated server where I'm the only user.
Processor : AMD Sempron 3100+
Memory : 1GB DDR I
I'm using PHP for the website. It's mostly used for downloading and uploading stuff and so on.
I'm currently using Apache, and it eats too much processor.
So I came across a few servers said to be better than Apache. I need to know which of these is good for downloading/uploading: Nginx, Lighttpd or LiteSpeed?
Thanks
It's hard to beat Apache in my opinion; perhaps disabling mod_deflate etc. might speed things up for you.
Take a look at the benchmarks for lighttpd vs apache
I have used PHP on machines as “low end” as an AMD Geode LX800 (500 MHz, 256 MiB of RAM), using a stock Debian install and the Apache 2, PHP5 and PostgreSQL packages provided by Debian. In general, most things work well, but you want to take care with lengthy operations (e.g. avoid resizing big images with the GD extension) and always be aware of the implied cost of operations which usually seem “easy”. My particular application was serving about 25 simultaneous clients without performance problems, and in my tests it maintained a decent time-per-request up to a hundred simultaneous clients.
You may find that installing APC will help a lot. Without it, or another byte-code cache, Apache has to re-compile the PHP files on every invocation. While each compile doesn't take much effort, it does add up surprisingly quickly. You'll be surprised how much giving 64MB to APC (which, out of 1024MB, is not too much) will help your system, depending on how much code you are actually running (you may only need half or a quarter of that for APC).
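For example, a typical APC ini snippet would look something like this (the path and the exact size syntax depend on your distro and APC version):

    ; e.g. /etc/php5/conf.d/apc.ini (illustrative values)
    extension=apc.so
    apc.enabled=1
    apc.shm_size=64M    ; older APC versions expect a bare number of MB, i.e. "64"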
If it's a busy site, then optimising it with YSlow will also help, as will taking the static content (like images) away from Apache and serving it elsewhere. It's here that Nginx can make a small but real improvement to page times and memory use. I've used just that technique of a separate image server myself, to excellent effect.
You might want to try Nginx reverse-proxying requests to a php-cgi instance. Doesn't get any more spartan than that.
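A bare-bones version of that setup looks roughly like this (the docroot and the FastCGI port 9000 are assumptions, and you need php-cgi or php-fpm actually listening there):

    # nginx sketch: pass PHP to a FastCGI backend on port 9000
    server {
        listen 80;
        root /var/www/html;
        index index.php;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
        }
    }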
But I agree with Paul, Apache is hard to beat as far as maintainability / configurability goes.
My guess is that your performance problems are related to the PHP code and not to Apache, so look at whether you can optimize your PHP code instead.
Zeus is a high-performance web server aimed at the *Ahem* 'Static Content' industry. It will serve biblical volumes of files with minimal resources. I believe it uses asynchronous I/O, and is very quick on modest hardware.
I would recommend Apache but only 2.2.x
Here's a small benchmark that was done, and as you can see, when serving PHP, Apache 2.2.2 is better than lighty.
Definitely, I suggest Lighttpd. I'm using it on various heavily loaded servers and it has helped a lot!