Google Load Balancer vs Apache httpd load balancer

We use the Apache httpd load balancer for our project.
We have been looking at Google Load Balancer and may shift to it.
But I can't find any comparison of the two, i.e. the pros/cons of one over the other, that would let us decide what suits us best.
Can we get a list of pros and cons?

If scalability and performance are critical to you, definitely choose the GCE load balancer. The traditional model of load balancing is basically "proxy + backends", in which the proxy quickly becomes the bottleneck. This is not the case for the GCE load balancer, which has no proxy at all; the load balancing is implemented by the underlying infrastructure.
But the GCE load balancer is not free; see the pricing here: https://cloud.google.com/compute/pricing#lb
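For reference, the "proxy + backends" model looks roughly like this in Apache httpd with mod_proxy_balancer (a minimal sketch; the backend addresses are placeholders):

    # httpd.conf - minimal "proxy + backends" setup
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

    <Proxy "balancer://mycluster">
        # hypothetical backends
        BalancerMember "http://10.0.0.1:8080"
        BalancerMember "http://10.0.0.2:8080"
    </Proxy>

    ProxyPass        "/" "balancer://mycluster/"
    ProxyPassReverse "/" "balancer://mycluster/"

Every request funnels through this single Apache process, which is exactly why the proxy box becomes the bottleneck that the GCE load balancer avoids.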

Related

Does EC2 Elastic Load Balancer remove the need for apache/nginx?

I am striving for a very simple cloud based architecture on Amazon AWS. I would like to have an app layer of several "elastic" EC2 instances where my application (and application servers) run, but I'm wondering what the load balancing will look like.
If I choose to use ELB, does it remove the need for Apache or Nginx?
No. All the load balancer does is just that: distribute load across instances. Whatever stack is running on each instance will still need nginx, Apache, or whatever service you want, to respond to the requests routed through the load balancer.
I'm assuming you're running a web stack that needs some type of server, like nginx or Apache, or Java needing Tomcat or something similar.
However, if you want AWS to take care of nginx and/or Apache for you, look into running it as an Elastic Beanstalk application: https://aws.amazon.com/elasticbeanstalk/
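To make that concrete, each instance behind the ELB would still run something like this (a minimal nginx sketch; the local application port 8000 is a placeholder):

    # nginx on each EC2 instance behind the ELB
    server {
        listen 80;
        server_name _;

        location / {
            # hand the request to the application server running on this box
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }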

Difference Between Load Balancing and Load Balancer

I need to know the difference between a load balancer and load balancing.
Load balancing is the functionality provided by a Load balancer :).
In software architecture, a load balancer proxies client requests to a pool of application servers, using some algorithm, with the objective of balancing the load of client requests evenly across the pool.
Load balancing refers to efficiently distributing incoming network traffic across a group of backend servers, also known as a server farm or server pool.
A load balancer acts as the “traffic cop” sitting in front of your servers and routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts to send requests to it.
Reference: https://www.nginx.com/resources/glossary/load-balancing/
Load balancing helps spread incoming request traffic across a cluster of servers. If a server is not available to take a request, the load balancer passes the request to another server.
Load balancers, in turn, are what achieve the above. They can sit between:
User - web server
Web server - internal application servers
Internal servers - database servers
Application servers - cache servers
Different types of Load Balancers:
Smart Client - a client that takes a pool of service hosts and balances load across them, detecting downed hosts and avoiding sending requests their way.
Hardware Load Balancer - buy your own dedicated high-performance appliance, e.g. Citrix NetScaler.
Software Load Balancer - use a software load balancer to avoid the pain of building your own smart client, or if you are not ready to spend on a dedicated appliance; this is more cost-effective than the two options above, e.g. VMware, HAProxy, etc. (a config sketch follows this list).
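To illustrate the software option, here is a minimal haproxy.cfg sketch (the backend addresses and the /health endpoint are placeholders); note that the health checks give you the same downed-host detection a smart client would have to implement itself:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend app_pool

    backend app_pool
        balance roundrobin
        # mark a server as down if the hypothetical /health endpoint stops answering
        option httpchk GET /health
        server app1 10.0.0.1:8080 check
        server app2 10.0.0.2:8080 check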
To my knowledge both amount to the same thing, but you could say that the load balancer is the device used to balance traffic according to server availability, while load balancing is the concept describing how that is achieved.
Please correct me if I'm wrong!

WebLogic vs Apache load balancer

In our typical production environment, the Apache web server works as a proxy to an application server such as WebLogic. I have a question about load balancing: both Apache and WebLogic provide their own load-balancing functionality. If Apache can balance the load, what is the use of the WebLogic load balancer?
As mentioned in the Oracle doc on Load Balancing, there are many ways of doing load balancing for WebLogic. If you already have an Apache web server, it is better to use that rather than have WebLogic do the load balancing. The load balancer should typically live off the JVM: should traffic spike, WebLogic needs its resources held in reserve for serving the application. Apache does load balancing very easily, whereas WebLogic requires more effort since there it is an add-on feature. It's basically like a boat in water versus a car that can also float (the car being WebLogic).
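For context, fronting a WebLogic cluster with Apache is typically done with Oracle's proxy plug-in (mod_wl). A minimal sketch, with hostnames, ports, and the /app path as placeholders:

    # httpd.conf - Apache proxying to a WebLogic cluster
    LoadModule weblogic_module modules/mod_wl_24.so

    <IfModule mod_weblogic.c>
        # the plug-in round-robins across the cluster's managed servers
        WebLogicCluster wls1.example.com:7001,wls2.example.com:7001
    </IfModule>

    <Location /app>
        SetHandler weblogic-handler
    </Location>

Apache then does the balancing and failover, and WebLogic is left to serve the application.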

NGINX as a Web Server + Load Balancer with Caching Enabled

We currently run a SaaS application on Apache which serves e-commerce websites (it's a store builder). We currently host over 1,000 clients on that application and are now running into scalability issues (CPU going over 90% even on a fairly large server with 20 cores, 80 GB of RAM, and all-SSD disks).
We're looking for help from an nginx expert who can:
1. Explain the difference between running nginx as a web server vs. using it as a reverse proxy. What are the benefits?
2. We also want to use nginx as a load balancer (and already have that set up in testing), but we haven't enabled caching on the load balancer. So while it helps route requests, it isn't really serving any traffic directly; it simply passes everything through to one of the two Apache servers.
The question is: we have a lot of user-generated content coming from the Apache servers, so how do we invalidate the cache for only certain pages that nginx has cached? If we set up a cron job to clear the whole cache every minute or so, it wouldn't be very useful, as the cache would then be virtually nonexistent.
--
Also need an overall word on what is the best architecture to build for given the above scenarios.
Is it:
NGINX Load Balancer + Caching ==> Nginx Web Server
NGINX Load Balancer ==> Nginx Web Server + Caching
NGINX Load Balancer + Caching ==> Apache Web Server
NGINX Load Balancer ==> Apache Web Server (unlikely)
Please help!
Scaling horizontally to support more clients is a good option, but it's recommended to first evaluate what is causing the bottleneck: memory pressure within the application, long-running requests, etc.
Nginx vs. other web servers: nginx is an HTTP server, not a servlet engine. Given that, you can check whether it fits your needs.
It is a fast web server. You need to evaluate the benefits of using it as a single standalone web server against other web servers; its speed and low memory footprint could help.
Nginx as a load balancer:
You can have multiple web server instances behind nginx.
It supports load-balancing algorithms like round robin, weighted round robin, etc., so the load can be distributed based on resource availability.
It lets you terminate SSL at nginx, filter requests, modify headers, compress responses, upgrade the application without downtime, serve cached content, and so on. This frees up resources on the servers running the application, and also gives you separation of concerns.
This setup is a reverse proxy, with the benefits that come with it; a sketch follows.
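A minimal sketch of that reverse-proxy setup, with SSL terminated at nginx and weighted round robin across two Apache backends (addresses and certificate paths are placeholders):

    # inside the http block of nginx.conf
    upstream apache_backends {
        # weighted round robin: the first box gets twice the traffic
        server 10.0.0.1:8080 weight=2;
        server 10.0.0.2:8080;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/example.crt;   # SSL ends here
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            proxy_pass http://apache_backends;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }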
You can handle cache expiry with nginx; the nginx documentation has good details: http://nginx.com/resources/admin-guide/caching/
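For invalidating only certain pages, one common approach is proxy_cache_bypass: the application sends a header (here a hypothetical X-Refresh-Cache) when a page changes, which makes nginx refetch that one page and overwrite its cached copy. Purging arbitrary keys outright needs NGINX Plus or the third-party ngx_cache_purge module. A sketch:

    # inside the http block of nginx.conf
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pages:10m inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache pages;
            proxy_cache_key $scheme$host$request_uri;
            proxy_cache_valid 200 10m;

            # a non-empty, non-zero header value skips the cache lookup,
            # and the fresh response replaces the stale cached entry
            proxy_cache_bypass $http_x_refresh_cache;

            # the upstream defined in the previous sketch
            proxy_pass http://apache_backends;
        }
    }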

What's the most scalable and high performing Amazon Web Service (AWS) configuration for a RESTful web service?

I'm building an asynchronous RESTful web service and I'm trying to figure out what the most scalable and high performing solution is. Originally, I planned to use the FriendFeed configuration, using one machine running nginx to host static content, act as a load balancer, and act as a reverse proxy to four machines running the Tornado web server for dynamic content. It's recommended to run nginx on a quad-core machine and each Tornado server on a single core machine. Amazon Web Services (AWS) seems to be the most economical and flexible hosting provider, so here are my questions:
1a.) On AWS, I can only find c1.medium (dual core CPU and 1.7 GB memory) instance types. So does this mean I should have one nginx instance running on c1.medium and two Tornado servers on m1.small (single core CPU and 1.7 GB memory) instances?
1b.) If I needed to scale up, how would I chain these three instances to another three instances in the same configuration?
2a.) It makes more sense to host static content in an S3 bucket. Would nginx still be hosting these files?
2b.) If not, would performance suffer from not having nginx host them?
2c.) If nginx won't be hosting the static content, it's really only acting as a load balancer. There's a great paper here that compares the performance of different cloud configurations, and says this about load balancers: "Both HaProxy and Nginx forward traffic at layer 7, so they are less scalable because of SSL termination and SSL renegotiation. In comparison, Rock forwards traffic at layer 4 without the SSL processing overhead." Would you recommend replacing nginx as a load balancer by one that operates on layer 4, or is Amazon's Elastic Load Balancer sufficiently high performing?
1a) nginx is an asynchronous (event-based) server; even a single worker can handle lots of simultaneous connections (max_clients = worker_processes * worker_connections / 4, per the reference) and still perform well. I have myself tested around 20K simultaneous connections on a c1.medium-class box (not on AWS). Here you would set workers to two (one per CPU) and run four backends (you can even test with more to see where it breaks). Only if this setup runs into trouble should you add another similar setup and chain the two via an Elastic Load Balancer.
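In nginx config terms, that advice is roughly the following (a sketch; the Tornado addresses are placeholders for your four backend processes):

    # nginx.conf on the c1.medium front box
    worker_processes 2;             # one worker per CPU core

    events {
        worker_connections 10240;   # each worker multiplexes many connections
    }

    http {
        upstream tornado_backends {
            server 10.0.0.1:8001;
            server 10.0.0.1:8002;
            server 10.0.0.2:8001;
            server 10.0.0.2:8002;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://tornado_backends;
            }
        }
    }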
1b) As said in (1a), use an Elastic Load Balancer. Somebody tested ELB at 20K reqs/sec, and that is not the limit; he gave up only because they lost interest.
2a) Host static content on CloudFront; it is a CDN meant for exactly this (cheaper and faster than S3, and it can pull content from an S3 bucket or your own server). It is highly scalable.
2b) Obviously, with nginx serving the static files, it would have to serve more requests for the same number of users. Taking that load away reduces the work of accepting connections and sending files across (less bandwidth usage).
2c) Avoiding nginx altogether looks like a good solution (one less middleman). The Elastic Load Balancer will handle SSL termination and reduce the SSL load on your backend servers (which improves backend performance). The experiments above showed around 20K reqs/sec, and since it is elastic it should stretch further than a software LB (see this nice document on how it works).