If my site is used by many users, it becomes slow. How do I make my site use two or more PCs?
Usually httpd is not aware of any load balancing in front of it. Check out things like LVS (Linux Virtual Server), HAProxy, or even nginx for examples of things you can put in front of httpd to scale it horizontally.
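As a rough sketch of the nginx option (the backend addresses below are placeholders for your own httpd machines), the load-balancing part of an nginx config can be as small as this:

    upstream httpd_backends {
        # the two (or more) machines running Apache httpd
        server 10.0.0.11:80;
        server 10.0.0.12:80;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # each request goes to one of the backends (round robin by default)
            proxy_pass http://httpd_backends;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

HAProxy expresses the same idea with frontend/backend sections, and LVS does it at the IP layer; the principle is the same in all three.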
Related
I want to ask something about a dedicated server.
I have a dedicated server with a cPanel website under heavy load. When I check the server load, none of the parameters go above 60% usage, but the Apache workload is high.
So I wonder if I can do this:
I buy a dedicated server (DS) and install 2 cPanel instances on the same DS. I know that cPanel needs an IP to bind the license, so I add 1 additional IP to my DS.
What I am trying to achieve here is to split the workload of the same website, and to split the traffic I use a load balancer from CF.
So I have abc.com with 2 different IPs and use the load balancer to split the load.
Here is why I need to do this:
Server load relatively low (under 80%)
Apache load relatively high (3-10 req/s)
There is a problem in your problem definition:
What do you mean by "Apache work"?
If you want more threads and processes of Apache httpd on the same server, you don't need to install two cPanel instances; you can tune your Apache httpd worker configuration for better performance and resource utilization.
You can even use LiteSpeed or nginx as the web server on cPanel.
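To expand on the tuning suggestion: with the event MPM on Apache 2.4, the relevant knobs look roughly like this (cPanel manages httpd configuration through its own tooling, and the numbers below are purely illustrative; they have to be sized against your own RAM and traffic):

    <IfModule mpm_event_module>
        StartServers              4
        ServerLimit              16
        ThreadsPerChild          25
        MaxRequestWorkers       400
        MaxConnectionsPerChild 10000
    </IfModule>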
Title pretty much explains it. I've been told before by a buddy that this is a bad idea; are they correct? It's one dedicated server with multiple domain names forwarding to different sites on said server. Is this something I should avoid doing, or are there going to be major security concerns?
It's not inherently a bad idea, but there are things to take into consideration.
How powerful is your server?
How much traffic do you expect to serve?
Will your site be resource intensive?
If you have a minimal server hosting sites where you expect a large amount of traffic, then you may not want to host multiple sites on a single server; but if you have a decently powered server and expect a moderate amount of traffic, hosting multiple sites should be fine.
If your site is going to be running resource-intensive processes, you should certainly consider the power of your server.
Apache is actually designed to host multiple sites using virtual hosts. Here are some guides on setting up virtual hosts on Apache (a minimal example follows the links):
http://httpd.apache.org/docs/2.2/vhosts/examples.html
http://www.rackspace.com/knowledge_center/article/how-to-serve-multiple-domains-using-virtual-hosts
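As a rough sketch (the domains and paths are placeholders), a name-based setup is just a pair of VirtualHost blocks; on Apache 2.2 you also need a matching NameVirtualHost *:80 line:

    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/example.com/public_html
    </VirtualHost>

    <VirtualHost *:80>
        ServerName example.org
        DocumentRoot /var/www/example.org/public_html
    </VirtualHost>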
I've run single-server configurations that host approximately 20 sites without issue. If you're concerned with scalability, a better option is two identical servers behind a load balancer, so you can simply add additional servers to handle increases in traffic.
I have the two simple scenarios below to choose from, and I'm having a hard time deciding:
Scenario (1)
Load balancer: nginx (on one separate machine)
Web servers: Apache (on multiple separate machines)
Scenario (2)
Load balancer: Apache (on one separate machine)
Web servers: Apache (on multiple separate machines)
** Let's say I only have nginx or Apache as the two choices (no Varnish, etc.).
** So my questions are:
Which one is better for load balancing the multiple Apache web servers?
For the load balancer, is nginx still better than Apache (even when the web servers behind it are Apache)?
Please kindly advise.
Thank you.
This is kind of a "which is better" question, which is frowned upon over here; check the don't-ask page.
I'll tell you one simple thing, which is obviously only my personal opinion: nginx is basically a proxy server, so it's built to do this kind of stuff, and it's lighter, so I would recommend nginx as a load balancer instead of Apache.
I was wondering if it would be okay to run Tomcat as both the web server and the servlet container? On the other hand, it seems that the right way to go about scaling your webapp is to use Apache HTTP Server listening on port 80 and connecting that to Tomcat listening on another port?
Are both ways acceptable? What is being used nowadays? What's the prime difference? How do most major websites go about this?
Thanks.
Placing an Apache (or any other webserver) in front of your application server(s) (Tomcat) is a good thing for a number of reasons.
The first consideration is static resources and caching.
Tomcat will probably also serve a lot of static content, and even on dynamic content it will send some caching directives to browsers. However, each browser that hits your Tomcat for the first time will cause Tomcat to send the static file. Since processing a request is a bit more expensive in Tomcat than in Apache (Apache is super-optimized and exploits very low-level facilities not always available in Tomcat, Tomcat extracts much more information from the request than Apache needs, etc.), it may be better for the static files to be served by Apache.
However, since configuring Apache to serve part of the content and Tomcat the rest of the URL space is a daunting task, it is usually easier to have Tomcat serve everything with the right cache headers, and Apache in front of it capturing the content, serving it to the requesting browser, and caching it, so that other browsers hitting the same file are served directly from Apache without even disturbing Tomcat.
Besides static files, a lot of dynamic content does not need to be updated every millisecond either. For example, a JSON document loaded by the homepage that tells the user how much stuff is in your database is an expensive query performed thousands of times, but it can safely be refreshed every hour or so without making your users angry. So Tomcat can serve the JSON with a proper one-hour caching directive, Apache will cache the JSON fragment, and it will serve it to any browser requesting it for one hour. There are obviously a ton of other ways to implement this (a caching filter, a JPA cache that caches the query, etc.), but sending proper cache headers and using Apache as a caching reverse proxy is quite easy, REST-compliant, and scales well.
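A minimal sketch of that setup, assuming Tomcat listens on localhost:8080 and Apache 2.4 has mod_proxy, mod_proxy_http, mod_cache, and mod_cache_disk loaded (the cache path is an assumption):

    # everything is proxied to Tomcat...
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

    # ...but responses carrying cache headers are kept on disk and
    # answered by Apache until they expire
    CacheEnable disk /
    CacheRoot   /var/cache/apache2/mod_cache_disk

With that in place, the expensive JSON only reaches Tomcat when the cached copy expires; every other request is answered by Apache.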
Another consideration is load balancing. Apache comes with a nice load-balancing module that can help you scale your application across a number of Tomcat instances, provided that your application can scale horizontally or run on a cluster.
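That module is mod_proxy_balancer; a sketch of fronting two Tomcat instances (the hostnames are placeholders, and mod_proxy, mod_proxy_http, mod_proxy_balancer, and mod_lbmethod_byrequests are assumed to be loaded) looks like this:

    <Proxy balancer://tomcatcluster>
        BalancerMember http://tomcat1.internal:8080
        BalancerMember http://tomcat2.internal:8080
        ProxySet lbmethod=byrequests
    </Proxy>

    ProxyPass        /app balancer://tomcatcluster/app
    ProxyPassReverse /app balancer://tomcatcluster/app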
A third consideration is URLs, headers, etc. From time to time you may need to change some URLs, or remove or override some headers. For example, before a major update you may want to disable caching on browsers for some hours to avoid browsers keeping stale data (much like lowering the DNS TTL before switching servers), or move the old application to another URL space, or rewrite old URLs to new ones where possible. While reconfiguring the servlets inside your web.xml files is possible, and filters can do wonders, if you are using a framework that interprets the URLs you may need to do a lot of work on your sitemap files or similar.
Having Apache or another web server in front of Tomcat can help a lot here, since such changes often come down to editing only the Apache configuration files, with modules like mod_rewrite.
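For instance, both of the examples above (moving an old URL space and a temporary no-cache window before an update) fit in a few lines of Apache configuration, assuming mod_rewrite and mod_headers are loaded; the paths are made up:

    # send the old URL space to the new one
    RewriteEngine On
    RewriteRule ^/oldapp/(.*)$ /newapp/$1 [R=301,L]

    # during the migration window, tell browsers to revalidate everything
    Header set Cache-Control "no-cache, must-revalidate"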
So, I always recommend having Apache httpd in front of Tomcat. The small overhead in connection handling is usually recovered thanks to caching of resources, and the additional configuration work is repaid the first time you need to move URLs or handle some headers.
It depends on your network and how you wish to have security set up.
If you have a two-firewall DMZ, with applications deployed inside the second firewall, it makes sense to have an Apache or IIS instance in between the two firewalls to handle security and proxy calls into the app server. If it's acceptable to put the Tomcat instance in the DMZ you're free to do so. The only downside that I see is that you'll have to open a port in the second firewall to access a database inside. That might put the database at risk.
Another consideration is traffic. You don't say anything about traffic, sizing servers, and possible load balancing and clustering. A load balancer in front of a cluster of app servers is more likely to be kept inside the second firewall. The Tomcat instance is capable of handling traffic on its own, but there are always volume limitations depending on the hardware it's deployed on and what the application is doing with each request. It's almost impossible to give a yes or no answer without more detailed, application-specific information.
Search the site for "tomcat without apache" - it's been asked before. I voted to close before finding duplicates.
We use Apache with nginx (as a reverse proxy) for a higher concurrency level, because of the way nginx handles static content and uses fewer connections, something that Apache lacks.
The question now is: is there any difference between the above scenario and using a separate server running nginx to serve the static content (CSS, JS, images, etc.), with Apache installed on your primary server?
In my project there are millions of users with avatars, banners, and of course photo galleries. The project is nearly ready, and I want to make sure I'm headed in the right direction. Which scenario is best?
EDIT:
What would happen if slow clients caused Apache to keep threads busy for longer than needed on the primary server?
One of the main purposes of putting nginx in front of Apache is to handle slow clients, so that Apache doesn't have to keep its threads busy waiting on them.
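A minimal sketch of that arrangement, assuming Apache has been moved to port 8080 on the same machine and the static files live under /var/www/static (both the port and the paths are assumptions):

    server {
        listen 80;
        server_name example.com;

        # nginx answers static requests itself
        location ~* \.(css|js|png|jpe?g|gif|ico)$ {
            root /var/www/static;
            expires 30d;
        }

        # everything else goes to Apache; nginx absorbs the slow clients
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }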
By the way, I think this is relevant to the topic: http://www.aosabook.org/en/nginx.html