Which web server to use on a low-spec CPU? - apache

I have a dedicated server where I'm the only user.
Processor : AMD Sempron 3100+
Memory : 1 GB DDR1
I'm using PHP for the website. It's mostly used for downloading and uploading files.
I'm currently using Apache, and it eats too much CPU.
So I came across a few servers said to be lighter than Apache. I need to know which of these is good for downloading/uploading: nginx, lighttpd or LiteSpeed?
Thanks

It's hard to beat Apache in my opinion; perhaps look at disabling modules such as mod_deflate, which might speed things up for you.
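If you do keep Apache, one low-effort tweak is to stop compressing content that is already compressed. A minimal sketch, assuming mod_deflate and mod_setenvif are enabled and the extensions below match your downloads:

    # Skip gzip for already-compressed download formats so the
    # CPU isn't wasted recompressing them
    SetEnvIfNoCase Request_URI "\.(zip|rar|7z|gz|mp3|avi|mkv)$" no-gzip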

Take a look at the benchmarks for lighttpd vs apache

I have used PHP on machines as “low end” as an AMD Geode LX800 (500 MHz, 256 MiB of RAM), using a stock Debian install and the Apache 2, PHP 5 and PostgreSQL packages provided by Debian. In general, most things work well, but you want to take care with lengthy operations (e.g. avoid resizing big images with the GD extension) and always be aware of the implied cost of operations that usually seem “easy”. My particular application was serving about 25 simultaneous clients without performance problems, and in my tests it maintained a decent time-per-request up to a hundred simultaneous clients.

You may find that installing APC will help a lot. Without it, or another byte-code cache, Apache has to re-compile the PHP files on every invocation. While a single compile doesn't take much effort, it adds up surprisingly quickly. You'll be surprised how much giving 64 MB to APC (not much out of 1024 MB) will help your system; depending on how much code you are actually running, you may only need a quarter or half of that.
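A minimal sketch of the relevant php.ini settings, assuming APC is installed as a PECL extension (adjust the cache size to your code base):

    ; enable the APC opcode cache with a modest shared-memory segment
    extension=apc.so
    apc.enabled=1
    apc.shm_size=64M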
If it's a busy site, then optimising it with YSlow will also help, as will taking the static content (like images) away from Apache and having something else serve it. It's here that Nginx can make a small, fast improvement to page times and memory use. I've used just that technique of a separate image server myself, to excellent effect.
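A minimal sketch of such a static-only Nginx vhost (the hostname and root below are hypothetical):

    # serve images from a separate, lightweight vhost
    server {
        listen 80;
        server_name static.example.com;
        root /var/www/static;
        expires 30d;   # static assets can be cached aggressively
    }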

You might want to try Nginx reverse-proxying requests to a php-cgi instance. Doesn't get any more spartan than that.
But I agree with Paul, Apache is hard to beat as far as maintainability / configurability goes.
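A minimal sketch of that setup, assuming php-cgi is already listening as a FastCGI daemon on 127.0.0.1:9000 (paths are illustrative):

    # nginx: hand .php requests to the php-cgi FastCGI listener
    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME /var/www$fastcgi_script_name;
        fastcgi_pass   127.0.0.1:9000;
    }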

My guess is that your performance problems are related to the PHP code and not Apache, so look at whether you can optimize your PHP code instead.

Zeus is a high-performance web server aimed at the *Ahem* 'Static Content' industry. It will serve biblical volumes of files with minimal resources. I believe it uses asynchronous I/O, and is very quick on modest hardware.

I would recommend Apache, but only 2.2.x.
Here's a small benchmark that was done, and as you can see, when serving PHP, Apache 2.2.2 is better than lighty.

Definitely, I suggest lighttpd. I'm using it on several heavy-load servers and it has helped a lot!

Related

Ultra small http server with support for Lua scripts

I have a Renesas R5F571M processor that has 4 MB of flash and 512 KB of RAM.
I need to run FreeRTOS and also have a web server that can run Lua scripts in order to interface to the hardware with custom C code.
Can anyone suggest a very compact HTTP+Lua server that I could use?
The Barracuda Application Server looks ideal, but at around $20K it is way out of my reach.
I would love to be able to use Nginx and PHP, but the resource constraints preclude that option.
Once upon a time, I worked with the Lighttpd web server. Under certain conditions you could compile it to a binary as small as ~400 KB (400 KB << 4 MB). On the backend you could interface it to the FastCGI C library, and then write the backend itself in C.
In my opinion you can skip the Lua scripts. Or, if you still want to use them, you could use the Lighttpd mod_magnet module, which works directly with Lua, so you can skip the FastCGI library. It also has a smaller memory footprint than Nginx, although I am not sure whether it is small enough to fit in the 512 KB of RAM.
p.s. Lighttpd is free.
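For reference, a minimal sketch of wiring mod_magnet to a Lua hook (the file name is hypothetical; the lighty table is mod_magnet's Lua API):

    # lighttpd.conf: load mod_magnet and attract requests to a Lua script
    server.modules += ( "mod_magnet" )
    magnet.attract-physical-path-to = ( "/etc/lighttpd/hook.lua" )

    -- hook.lua: answer /status directly from Lua, pass everything else on
    if lighty.env["uri.path"] == "/status" then
        lighty.content = { "OK\n" }
        return 200
    end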
On the compact side:
bozohttpd (http://www.eterna.com.au/bozohttpd/): an HTTP/1.x web server that can run Lua scripts, but forks with every request, so it's stateless in that sense.
lhttpd (https://github.com/danristea/lhttpd): an HTTP/1.x and HTTP/2 web server that was inspired by bozohttpd but integrates Lua coroutines with an event system (kqueue/epoll) to achieve non-blocking, stateful execution.
(Disclaimer: I am the author of lhttpd.)

Varnish: how many req per second peak to (reasonably) expect?

We're experiencing a strange problem with our current Varnish configuration.
4x web servers (IIS 6.5 on Windows Server 2003, each installed on an Intel(R) Xeon(R) E5450 @ 3.00GHz quad core, 4 GB RAM)
3x Varnish servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) E5450 @ 3.00GHz quad core, 4 GB RAM)
The 3 Varnish servers have a pretty much standard, vanilla cfg: the only thing we changed was vcl_recv and vcl_fetch in order to handle the session cookies. They are currently configured to use an in-memory cache, but we already tried switching to HDD cache using a high-performance RAID drive, with the same exact results.
We had put this in place almost two years ago without problems on our old web farm, and everything worked like a blast. Now, using the machines described above and after a clean reinstall, our customers are experiencing a lot of connection problems (pending requests on clients, 404 errors, missing files, etc.) when our websites are under heavy traffic. From the console log we can clearly see that these issues start happening when each Varnish reaches roughly 700 requests per second: it just seems like they can't handle anything more. We can easily reproduce the critical scenario at any time by shutting down one or two Varnish servers and seeing how the others react: they always start to skip beats every time the requests-per-second count reaches 700. Considering what we've experienced in the past, and looking at the Varnish specs, this doesn't seem normal at all.
We're trying to improve our Varnish servers' performance and/or understand where the problem actually is. In order to do that, we could really use some kind of "benchmark" from other companies who are using it in a similar fashion, to help us understand how far we are from the expected performance (I assume we are).
EDIT (added cfg files):
This is our default.vcl file.
This is the output of the varnishadm param.show console command.
I'll also try to post a small part of our varnishlog output.
Thanks in advance,
To answer the question in the headline: a single Varnish server with the specifications you describe should easily serve 20k+ requests/sec with no other tuning than increasing the number of threads.
You don't give enough information (VCL, varnishlog) to answer your remaining questions.
My guess would be that you somehow end up serialising the backend requests. Check your hit_for_pass objects and make sure they have a valid TTL set (120s is fine).
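A minimal sketch of both suggestions, in Varnish 3.x syntax (the cookie condition and values are illustrative):

    # default.vcl: give hit_for_pass objects a real TTL so concurrent
    # misses for uncacheable content aren't serialised behind one fetch
    sub vcl_fetch {
        if (beresp.http.Set-Cookie) {
            set beresp.ttl = 120s;
            return (hit_for_pass);
        }
    }

Thread counts can be raised at runtime, e.g. varnishadm param.set thread_pool_min 200 and param.set thread_pool_max 4000, and then made permanent in the startup parameters.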

Bottle WSGI server vs Apache

I don't actually have a problem, I'm just a bit curious about things.
I made a Python web framework based on Bottle (http://bottlepy.org/). Today I tried a quick comparison of the Bottle WSGI server's and Apache's performance. I work on Lubuntu 12.04, using Apache 2, Python 2.7 and the Bottle development version (0.12), and got this surprising result:
As stated in the Bottle documentation, the included WSGI server is only intended for development purposes. The question is: why is the development server faster than the deployment one (Apache)?
As far as I know, a development server is usually slower, since it provides some "debugging" features.
Also, I never get a response in less than 100 ms when developing PHP applications. But look, it's just 13 ms in Bottle.
Can anybody please explain this? It just doesn't make sense to me. A deployment server should be faster than the development one.
Development servers are not necessarily faster than production-grade servers, so such an answer is a bit misleading.
The real reason in this case is likely to be lazy loading of your web application on the first request that hits a process. Especially if you don't configure Apache correctly, you could hit this lazy loading quite a bit if your site doesn't get much traffic.
I would suggest you go watch my PyCon talk which deals with some of these issues.
http://lanyrd.com/2013/pycon/scdyzk/
Especially make sure you aren't using prefork MPM. Use mod_wsgi daemon mode in preference.
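A minimal sketch of daemon mode, assuming mod_wsgi is installed (the process-group name and paths below are hypothetical):

    # run the app in a dedicated mod_wsgi daemon process group
    WSGIDaemonProcess myapp processes=2 threads=15
    WSGIProcessGroup  myapp
    WSGIScriptAlias   / /var/www/myapp/app.wsgi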
A deployment server should be faster than the development one.
True. And it generally is faster... in a "typical" web server environment. To test this, try spinning up 20 concurrent clients and have them make continuous requests to each version of your server. You see, you've only tested one request at a time, which is certainly not a typical web environment. I suspect you'll see different results (we're thinking of both latency AND throughput here) with tens or hundreds of concurrent requests per second.
To put it another way: at 10, 20, 100 requests per second, you might still see ~200 ms latency from Apache, but you'd see much worse latency from Bottle's server.
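A quick way to run that experiment is ApacheBench; a sketch, assuming the two deployments listen on the hypothetical addresses below:

    # 1000 requests at a concurrency of 20 against each server
    ab -n 1000 -c 20 http://127.0.0.1:8080/   # Bottle's built-in server
    ab -n 1000 -c 20 http://127.0.0.1/        # Apache deployment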
Incidentally, the Bottle docs do refer to concurrency:
The built-in default server is based on wsgiref WSGIServer. This non-threading HTTP server is perfectly fine for development and early production, but may become a performance bottleneck when server load increases.
It's also worth noting that Apache is doing a lot more than the Bottle reference server is (checking .htaccess files, dispatching to child process/thread, robust logging, etc.) and all those features necessarily add to request latency.
Finally, I'd ask whether you tuned the Apache installation. It's possible that you could configure it to be faster than it is now, e.g. by tuning the MPM, simplifying logging, disabling .htaccess checks.
Hope this helps. And if you do run a concurrent benchmark, please do share the results with us.

Apache Tomcat 6.0.35 is taking 100% CPU in production

I have been using apache-tomcat-6.0.35 in a production environment. Our application is hosted on Amazon EC2 using a Small Instance. The problem we are facing is that Tomcat is using 100% CPU. We have verified it by running htop, and it shows multiple Tomcat threads running.
Our application has been developed in Grails 2.0.1.
We are puzzled as to why this is happening. Can anybody suggest any solutions?
Thanks
Probable Cause
Most likely this has been caused by the recent leap second and its impact on quite a few unaware/unprepared IT systems, including parts of Linux, MySQL, Java and indeed Tomcat - see the Wired article ‘Leap Second’ Bug Wreaks Havoc Across Web for the whole story:
[...], saying it experienced the leap bug problem with the Java-happy Tomcat web servers it uses to serve up its site. “Our web servers running tomcat came close to zero response (we were able to handle some requests),” read an e-mail from a site spokesman. “We were able to connect to servers in order to reset them. Only rebooting the servers cleared up the issue.” [emphasis mine]
Workaround / Fix
Accordingly, the solution usually boils down to turning it off and on again, i.e. restarting the server in question, though you might be able to avoid this by simply setting the date, as suggested e.g. in the context of:
Linux/Tomcat, see July 1 2012 Linux problems? High CPU/Load? Probably caused by the Leap Second!:
Apparently, simply forcing a reset of the date is enough to fix the problem:
    date -s "`date`"
MySQL, see MySQL and the Leap Second, High CPU and the Fix (also linked from the comments on wwwhizz' answer to MySQL high CPU usage, where you'll find two specific variations of how to do this depending on your OS):
The fix is quite simple – simply set the date. Alternatively, you can restart the machine, which also works. Restarting MySQL (or Java, or whatever) does NOT fix the problem.
Background / Proposed Solutions
Please note that while the underlying issue is utterly tricky, it is anything but unknown in principle; accordingly, there have been prominent posts/users warning about and explaining this, and offering suggestions on how to deal with it, in particular:
An humble attempt to work around the leap second by Marco Marongiu
Time, technology and leaping seconds by Christopher Pascoe
We can't say anything for sure with the information provided. For performance issues, I would recommend a profiler, especially JProfiler, to investigate the cause of this problem. That way you will be able to locate where the problem is.
This program has a trial license; I think that's enough for a quick look.
UPDATE: after carefully reading your question, I see that you have many Tomcat instances running for one website. It may mean that the previous Tomcat instances failed to stop; they still run and hog all the resources. This is possible. You must kill all the old Tomcat processes before trying to start a new one.
You can kill the processes by hand with "kill -9 <pid>" if you are on Linux, before trying to start the server again.
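A sketch of doing that from a shell (the bracketed grep pattern keeps grep from matching itself; substitute the PIDs your system actually reports):

    # list stray Tomcat JVMs, then kill each stale one by PID
    ps aux | grep '[t]omcat'
    kill -9 <pid>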

Nginx vs Cherokee replacement for Apache

How do you compare Nginx and Cherokee in terms of memory usage and performance? My VPS serves Drupal 6.16, Magento 1.4.1 and CS-Cart 2.0.15 with apache2 prefork MPM. Apache2 eats my memory even though my sites have a pretty low-traffic profile (htop shows that each Apache process eats 18% of memory). If I change Apache to Nginx or Cherokee, will I face any compatibility issues with Magento, CS-Cart and Drupal? Which one is the most compatible? I really appreciate any production-system experience. Thanks.
You can greatly reduce the memory consumption of your VPS by installing a PHP accelerator such as eAccelerator. In most cases the Apache web server will perform just fine. You might need to tweak it to optimize for your specific setup. You need to do some reading up on that, though, since there is no silver bullet when it comes to tuning.
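As one example of such tweaking, a sketch of reining in prefork memory use (the numbers are illustrative; size MaxClients to your available RAM divided by the per-process footprint htop reports):

    # apache2.conf: keep the prefork pool small on a memory-tight VPS
    <IfModule mpm_prefork_module>
        StartServers        2
        MinSpareServers     2
        MaxSpareServers     4
        MaxClients          20
        MaxRequestsPerChild 500
    </IfModule>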
Try Hiawatha: http://www.hiawatha-webserver.org/. Its UrlToolkit is far more advanced than Apache's mod_rewrite. Yes, the frameworks you mentioned will run fine with Hiawatha. Tested it myself.
Take a look at the post below for some memory-related measurements for Apache, Cherokee and Nginx. You might google around for similar results. However, I would recommend running such tests with typical cases in mind, to see how they fit your use case.
Benchmark of django deployment techniques