Does EC2 Elastic Load Balancer remove the need for Apache/Nginx?

I am striving for a very simple cloud-based architecture on Amazon AWS. I would like to have an app layer of several "elastic" EC2 instances where my application (and application servers) run, but I'm wondering what the load balancing will look like.
If I choose to use ELB, does it remove the need for Apache or Nginx?

No. The load balancer does just that: it distributes load across instances. Whatever stack is running on each instance will still need Nginx, Apache, or whatever service you want to respond to the requests routed through the load balancer.
I'm assuming you're running a web stack that needs some kind of server, such as Nginx or Apache, or a Java stack that needs something like Tomcat.
However, if you want AWS to take care of Nginx and/or Apache for you, look into running it as an Elastic Beanstalk application: https://aws.amazon.com/elasticbeanstalk/
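To make the division of labor concrete, here is a minimal Python sketch of the kind of process each instance behind the ELB must still run. It is purely illustrative; the port 8080 listener and the /health check path are assumptions, not ELB requirements:

    # Minimal per-instance server sketch (assumption: the ELB forwards
    # traffic to instance port 8080 and health-checks GET /health).
    # The ELB only routes requests here; it never serves the app itself.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Answer 200 both to the ELB health check and to real requests.
            body = b"ok" if self.path == "/health" else b"hello from this instance"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()

If no such process is listening, the ELB's health checks fail and the instance is taken out of rotation.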

Related

Is fronting Tomcat with the Apache HTTP server meaningful when we are using an AWS Application Load Balancer?

I know fronting Tomcat with Apache has some security benefits when we are using the AWS Classic Load Balancer. However, I was wondering if it is meaningful to front Tomcat with the Apache HTTP server when we are using an AWS Application Load Balancer, because an Application Load Balancer operates at Layer 7 of the OSI model and deals with application-level content.
It depends on your intention. Fronting Tomcat is not an issue when you go with AWS, and ALBs are a good way to handle and route your traffic, since they already operate at Layer 7. Therefore, stick with an ALB rather than a Classic ELB plus an Apache layer.
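For instance, the Layer 7 routing Apache would otherwise provide (path-based dispatch to a backend) can be expressed directly as ALB listener rules. A hedged boto3 sketch, where both ARNs are hypothetical placeholders you would substitute with your own:

    # Route /api/* to a Tomcat target group at the ALB itself, no Apache needed.
    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/tomcat/..."}],
    )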

WebLogic load balancing and request re-routing to another server

I'm totally new to clustering and load balancing.
What I'm trying to do is: "Deploy an application on a cluster which contains 2 managed servers. Now, if one of the managed servers goes down, requests should be redirected to another server which is up."
For Example:
I've 2 managed servers (M1:7021 and M2:7022)
And I've a Cluster C1 having M1 and M2.
And I've an Application App1 deployed on C1 and a Data Source deployed on C1.
Application App1 is working fine.
The way I'm accessing the application is:
http://10.184.111.11:7021/App1/
AND
http://10.184.111.11:7022/App1/
Now, suppose M1 (7021) goes down, and a request comes in like
:7021/App1/
Then, it should be redirected to :7022/App1/
Any help is highly appreciated. Thanks!
I believe you will need a load balancer (or a software equivalent) to sit above the WebLogic servers and direct traffic down to them.
The idea is that you access your application at http://loadBalancer.com/App, and the load balancer forwards your request on to one of the WebLogic servers. Meanwhile, in the background, the load balancer continually performs health checks on the two WebLogic servers to see if they are running.
In the event that one of the WebLogic servers goes down, the load balancer will mark it as inactive and forward all traffic to the WebLogic server that is still running. Once the failed WebLogic server has come back online, the load balancer will begin routing traffic through it again.
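As a rough illustration of that health-check-plus-failover loop, here is a minimal Python sketch (not production code; the backend addresses mirror the question, and using /App1/ as the health-check URL is an assumption):

    # Round-robin backend selection that skips unhealthy servers.
    import itertools
    import urllib.request

    BACKENDS = ["http://10.184.111.11:7021", "http://10.184.111.11:7022"]
    _cycle = itertools.cycle(BACKENDS)

    def is_healthy(base_url):
        # Health check: the managed server is "up" if the app URL answers.
        try:
            with urllib.request.urlopen(base_url + "/App1/", timeout=2) as resp:
                return resp.status < 500
        except OSError:
            return False

    def pick_backend():
        # Try each backend once per call; route around any that are down.
        for _ in range(len(BACKENDS)):
            candidate = next(_cycle)
            if is_healthy(candidate):
                return candidate
        raise RuntimeError("no healthy backend available")

A real load balancer runs the health checks on a timer rather than per request, but the routing decision is essentially this.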
@Garreth Well, in fact, WebLogic DOES provide an internal load balancer. You are supposed to use OHS or Apache for load balancing in production environments, but for development, HttpClusterServlet works great.

SSHing into the AWS load balancer and configuring it for subdomain routing?

We want to use the Amazon Elastic Beanstalk service for deployment on EC2 boxes.
We want to deploy our Ruby on Rails application in such a way that we can do subdomain-based routing to different Rails apps.
And we want to use a single SSL certificate for our load balancer and configure the load balancer in such a way that subdomain-based routing takes place.
HAProxy does this job well, but when we try to use the Amazon Elastic Beanstalk service for our deployment, AWS creates a load balancer without associating it with any key pair.
So we are not able to SSH into the load balancer and add our configuration for subdomain-based routing.
Can someone please point me to a solution?
Thanks,
Ankit.
You don't SSH into AWS load balancers; they are basically a black box for which you have only a limited set of configuration options. You probably need to look at the Route 53 service for DNS routing.
Your configuration would route traffic, based on DNS, to different load balancers, one for each separate service you need. You can't have a single ELB route traffic to different EC2 instances based on the domain or URI fragments.
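The DNS side of that setup can be scripted. Here is a hedged boto3 sketch that points one subdomain at one ELB via a Route 53 alias record; the hosted zone IDs, domain, and ELB DNS name are all placeholders:

    # Point app1.example.com at the ELB serving that app (placeholders throughout).
    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId="Z_MY_ZONE_ID",  # hosted zone for example.com
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app1.example.com",
                "Type": "A",
                "AliasTarget": {
                    # Zone ID of the ELB itself, not of your hosted zone.
                    "HostedZoneId": "Z_ELB_ZONE_ID",
                    "DNSName": "my-app1-elb-123456.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]},
    )

You would repeat this with a second record and a second load balancer for each additional subdomain.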

GlassFish 3.1 load-balancing setup

I am very new to server setup. I have a cluster with 2 instances in GF.
instance1:28081
instance2:28082
I am running my GF in an Amazon Linux EC2 instance. What are the options to create a load-balancer setup that directs traffic to these instances when I access my EC2 instance on HTTP port 80?
1) Do I need a web server to direct traffic to these instances?
2) Are there any options in GlassFish that can handle load balancing without a web server on these instances? I couldn't find a load-balancing configuration in my admin console.
3) Is there a way to use Amazon load balancing to distribute traffic to these cluster instances, which reside on a single EC2 instance?
If someone can provide step-by-step instructions or a link reference, that would be helpful.
I did a nice write-up on load balancing/failover and proxy options for GlassFish.
Have a look: http://blog.eisele.net/2012/01/throwing-light-on-glassfish-webserver.html
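Regarding question 3: one possible approach (an assumption on my part, not something the linked write-up covers) is an Application Load Balancer target group, which can register the same EC2 instance twice under different ports. A hedged boto3 sketch, with the ARN and instance ID as placeholders:

    # Register one instance on both GlassFish ports in a single target group.
    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.register_targets(
        TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/gf/...",
        Targets=[
            {"Id": "i-0123456789abcdef0", "Port": 28081},  # instance1
            {"Id": "i-0123456789abcdef0", "Port": 28082},  # instance2
        ],
    )

An ALB listener on port 80 forwarding to this target group would then balance between the two GlassFish instances.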

What's the most scalable and high performing Amazon Web Service (AWS) configuration for a RESTful web service?

I'm building an asynchronous RESTful web service and I'm trying to figure out what the most scalable and high performing solution is. Originally, I planned to use the FriendFeed configuration, using one machine running nginx to host static content, act as a load balancer, and act as a reverse proxy to four machines running the Tornado web server for dynamic content. It's recommended to run nginx on a quad-core machine and each Tornado server on a single core machine. Amazon Web Services (AWS) seems to be the most economical and flexible hosting provider, so here are my questions:
1a.) On AWS, I can only find c1.medium (dual core CPU and 1.7 GB memory) instance types. So does this mean I should have one nginx instance running on c1.medium and two Tornado servers on m1.small (single core CPU and 1.7 GB memory) instances?
1b.) If I needed to scale up, how would I chain these three instances to another three instances in the same configuration?
2a.) It makes more sense to host static content in an S3 bucket. Would nginx still be hosting these files?
2b.) If not, would performance suffer from not having nginx host them?
2c.) If nginx won't be hosting the static content, it's really only acting as a load balancer. There's a great paper here that compares the performance of different cloud configurations, and says this about load balancers: "Both HaProxy and Nginx forward traffic at layer 7, so they are less scalable because of SSL termination and SSL renegotiation. In comparison, Rock forwards traffic at layer 4 without the SSL processing overhead." Would you recommend replacing nginx as a load balancer by one that operates on layer 4, or is Amazon's Elastic Load Balancer sufficiently high performing?
1a) Nginx is an asynchronous (event-based) server; even with a single worker it can handle lots of simultaneous connections (max_clients = worker_processes * worker_connections, divided by 4 when reverse proxying, per the nginx docs) and still perform well. I myself tested around 20K simultaneous connections on a c1.medium-class box (not in AWS). Here you would set workers to two (one for each CPU) and run 4 backends (you can even test with more to see where it breaks). Only if this gives you problems should you go for one more similar setup and chain them via an Elastic Load Balancer.
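Since the question's dynamic tier is Tornado, here is a minimal sketch of one such backend; running four copies on, say, ports 8001-8004 (the ports are an assumption) gives nginx or the ELB something to balance across:

    # Minimal Tornado backend; start one copy per port, e.g.:
    #   python backend.py 8001
    import sys
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("hello from the backend on port %s" % sys.argv[1])

    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(int(sys.argv[1]))
    tornado.ioloop.IOLoop.current().start()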
1b) As said in (1a), use an Elastic Load Balancer. Somebody tested an ELB at 20K reqs/sec, and that is not the limit; he gave up because they lost interest, not because the ELB did.
2a) Host static content on CloudFront; it is a CDN and is meant for exactly this (cheaper and faster than S3, and it can pull content from an S3 bucket or from your own server). It's highly scalable.
2b) Obviously, with nginx serving the static files, it would have to serve more requests for the same number of users. Taking that load away reduces the work of accepting connections and sending files across (and lowers bandwidth usage).
2c) Avoiding nginx altogether looks like a good solution (one less middleman). The Elastic Load Balancer will handle SSL termination and reduce the SSL load on your backend servers (this improves backend performance). The experiment above showed around 20K reqs/sec, and since it's elastic it should stretch further than a software LB. (See this nice document on how it works.)
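To illustrate the SSL-termination point in 2c, here is a hedged boto3 sketch that adds an HTTPS listener to a classic ELB and forwards plain HTTP to the backends; the load balancer name, certificate ARN, and backend port are placeholders:

    # Terminate SSL at the ELB; backends receive plain HTTP on port 8080.
    import boto3

    elb = boto3.client("elb")  # classic ELB API
    elb.create_load_balancer_listeners(
        LoadBalancerName="my-elb",
        Listeners=[{
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",
            "InstancePort": 8080,
            "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
        }],
    )

With this in place, the Tornado backends never touch SSL, which is exactly the load reduction the answer describes.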