I am working on a large social-networking project in the Yii framework, and load balancing is becoming an important issue.
What I need is:
I want to keep the three layers (models, views, controllers) on different Amazon EC2 servers so that load balancing can be done efficiently.
What can I do for that in Yii?
Any help?
For load balancing you should not separate the application onto 3 different instances.
You should have the same app (with all the models, views and controllers) on several servers; then, depending on each server's CPU and RAM usage, the load balancer will redirect each end user to the appropriate server.
I don't even know if separating the app like that is doable, and if it is, the user will have to wait much longer:
The front controller will call some models => one or several network round trips to the model server = some time
The front controller has to send the data to the view => more time
In the end the user will have waited longer than on a single loaded server!
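To make the "redirect to the least busy clone" idea above concrete, here is a minimal sketch of that selection logic. report_load is a hypothetical hook into whatever metric source you use; in practice a real balancer (ELB, HAProxy, nginx) does this for you:

```python
def pick_server(servers, report_load):
    """Return the clone with the lowest reported load right now.

    report_load is a hypothetical callable mapping a server to a load
    figure (e.g. CPU utilisation); it is not part of Yii or AWS.
    """
    return min(servers, key=report_load)
```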
I'd highly recommend Amazon's Elastic Beanstalk service, as I'm using it for a project I'm developing which is also based on the Yii Framework.
The solution I use is to deploy my application on 3 servers and keep them in sync from a deployment server with rsync. My static content comes from a 4th server, but that setup puts all your code on 3 servers, as 3 exact clones. IMO this is the best approach, because with Amazon you can just spawn more clones if you need to scale.
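For illustration, that sync step could be scripted along these lines; the hostnames and paths are made up, and this is only a sketch of the rsync-based deployment described above:

```python
#!/usr/bin/env python3
"""Push the same code tree to every web node (hypothetical hosts/paths)."""
import subprocess

WEB_NODES = ["web1.example.com", "web2.example.com", "web3.example.com"]
APP_DIR = "/var/www/app/"   # trailing slash: sync the contents, not the dir

for host in WEB_NODES:
    # -a preserves permissions/times, -z compresses, --delete removes stale files
    subprocess.run(["rsync", "-az", "--delete", APP_DIR, f"{host}:{APP_DIR}"],
                   check=True)
```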
Load balancing means that you serve a portion of the users on one server, another portion on a second server, and so on.
If you split up your models/controllers/views, you have misunderstood what load balancing is about.
Let's say there are 2 web services. The goal is for the app gateway to route requests to both of them. If one of them is down, it should cache all the requests. Once it is up again, which can happen hours later, all the requests cached in the meantime should be sent to it in the correct sequence. This is to keep both services in the same state. Is something like this possible with an application gateway? Or with any other web server/tool?
Thanks!
You can do that, but you need some configuration; see HTTP Load Balancing. For the buffer-and-replay part, see the sketch after the overview below.
Load Balancer Overview
The capacity of a single server is limited. Once a website gains more and more traction, the instance serving the site reaches a point where it cannot handle any more users. The website starts to slow down or even becomes unavailable as the server goes down under the traffic.
This is the point where a load balancer enters the game. It allows the "load" that all those visitors and their requests create to be "balanced" across a series of different instances.
When load on a setup increases, capacity can easily be added by attaching more instances to the load balancer's backend. This lets you scale your infrastructure without any downtime or delays waiting for DNS zones to be updated.
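As far as I know, no stock gateway or load balancer (Application Gateway included) buffers and replays requests for an offline backend; health probes simply take it out of rotation. You would need a queue of your own in front of the services. A rough Python sketch of the idea using the requests library; the endpoints, payload shape, and helper names are assumptions, not a gateway feature:

```python
"""Fan each request out to two backends; if one is down, buffer its
requests and replay them in the original order once it is back."""
import collections
import requests

BACKENDS = ["http://service-a.local/api", "http://service-b.local/api"]
pending = {url: collections.deque() for url in BACKENDS}  # per-backend replay queues

def send(url, payload):
    try:
        # Replay anything buffered for this backend first, oldest first,
        # so it sees requests in their original sequence.
        while pending[url]:
            requests.post(url, json=pending[url][0], timeout=2).raise_for_status()
            pending[url].popleft()
        requests.post(url, json=payload, timeout=2).raise_for_status()
    except requests.RequestException:
        pending[url].append(payload)  # buffer until the backend recovers

def handle(payload):
    for url in BACKENDS:
        send(url, payload)
```

In production you would persist that queue (e.g. in a message broker) rather than keep it in memory, since the outage can last hours.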
Consider a three-tier app (web server, app server and database):
[Apache web server -> Tomcat app server -> database]
How do you build an app stack (leaving out the database part) that has no single point of failure and is fault tolerant?
IMHO, this is quite an open-ended question. How specific is a single point of failure - a single app server, a single physical server, a single data centre, the network?
A starting point would be to run your Tomcat and Apache servers in clusters. Alternatively, you can run separate instances, fronted with a load balancer such as HAProxy - except, to avoid a single point of failure, you will need redundancy on the load balancer as well. I recently worked on a project where we had two instances of a load balancer, fronted with a virtual IP (VIP). The load balancers communicated with two different app server instances, using a round-robin approach. Clients connected to the VIP in order to use the application, and they were completely oblivious to the fact that there were multiple servers behind it.
As an additional comment, you may also want to look at space-based architecture - https://en.wikipedia.org/wiki/Space-based_architecture.
Hi, I just downloaded Elastic Load Balancer 2.1.0 from WSO2. It is running in a terminal on Ubuntu Linux, but it is not showing the management console URL. If it does not show the URL, where can I get the UI of the Elastic Load Balancer?
I have multiple ESB servers with the same configuration. If my a1 server goes down, the load will shift to my a2 server. Is this what the Elastic Load Balancer is for? Will you explain to me what exactly it is used for?
No, there is no UI component for the ELB. Everything has to be done by configuring physical files.
Elastic Load Balancer 2.1.0 is based on Hazelcast-dependent clustering. It has two parts: one is load balancing and the other is elasticity. Load balancing is simply distributing the workload among a number of endpoints configured in a static or dynamic manner. Elasticity is simply scaling, i.e., monitoring the load on worker nodes and starting or terminating nodes based on need in an IaaS environment.
It not only steps in when a node goes down; depending on the load it can also spawn new nodes to handle traffic, and if the load is low it can kill unneeded instances in an IaaS environment.
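The elasticity half boils down to a monitor-and-react loop. A toy sketch of that loop follows; this is not WSO2 code, and get_average_load, spawn_node and terminate_node are hypothetical hooks into your monitoring and IaaS APIs:

```python
"""Toy autoscaler illustrating the elasticity idea."""
import time

HIGH, LOW = 0.75, 0.25          # load thresholds (fraction of capacity)
MIN_NODES, MAX_NODES = 2, 10    # hard limits on cluster size

def autoscale(nodes, get_average_load, spawn_node, terminate_node):
    while True:
        load = get_average_load(nodes)
        if load > HIGH and len(nodes) < MAX_NODES:
            nodes.append(spawn_node())        # scale out under heavy load
        elif load < LOW and len(nodes) > MIN_NODES:
            terminate_node(nodes.pop())       # scale in when mostly idle
        time.sleep(30)                        # monitoring interval
```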
I have an Apache web server in front of 2 Tomcats, which are connected to the same MySQL backend database.
I need to load-balance the incoming requests between the two Tomcats based on a URL parameter named "projectid". For example, all even project ids could be served by Tomcat 1 and all odd ones by Tomcat 2.
This is required because a user may start jobs in a project on Tomcat 1 that Tomcat 2 won't be aware of, and these jobs are currently not stored in the database.
Is there a way to achieve this using mod-proxy-load-balancing?
I'm not aware of such a load-balancing algorithm being available out of the box. However, keep in mind that the most common load-balancing outcome (especially when you have server-side state, as you obviously do) is a sticky session: you only balance the initial request. After that, all requests from the same client are typically directed to the same server.
I typically recommend against distributing the session data, as it adds a commonly unnecessary performance hit to each request, negating the improved performance you can get from clustering. This is subject to change in actual installations, though, and is just a first rule of thumb.
You might be able to create your own load-balancing algorithm with mod-proxy-load-balancing (you'll need to configure the algorithm in the config file), but I believe your time is better spent fixing your implementation, or implementing business-specific logic that checks all cluster machines for running jobs.
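For what it's worth, the routing rule itself is trivial; the hard part is making Apache apply it. Expressed as plain Python (the backend URLs are assumptions), the decision the question asks for is just:

```python
from urllib.parse import parse_qs, urlparse

BACKENDS = {0: "http://tomcat1:8080", 1: "http://tomcat2:8080"}  # assumed hosts

def pick_backend(url: str) -> str:
    """Even projectid -> Tomcat 1, odd -> Tomcat 2 (parameter assumed present)."""
    project_id = int(parse_qs(urlparse(url).query)["projectid"][0])
    return BACKENDS[project_id % 2]

assert pick_backend("/app/run?projectid=42") == "http://tomcat1:8080"
assert pick_backend("/app/run?projectid=7") == "http://tomcat2:8080"
```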
I've written a simple server application which will run distributed on several machines.
My question is: how does a network load balancer work, in general?
I've heard of round-robin and other algorithms, but what I haven't found an answer to is how the process really goes, in socket terms.
Does the client connect to one of the load balancer machines, ask for a "free-to-connect-to" server, and simply connect to it?
That's the simplest way I can think of.
...or does it use the load balancer as a proxy (which implies that the LBs must always stay connected to the application servers, and that data is transferred through them)?
It's more of a general question. How would you do this?
Thank you all!
There are several different ways to load balance an application. Some are physical devices that sit between your router and the servers; some are software-based, with a bit of code that runs on each of the load-balanced devices.
Microsoft has load balancing built into Windows which is all software based. It's pretty good and easy to set up.
However, I'll cover the physical route.
There are several algorithms here, but the main one is round robin, with an option for "sticky" sessions. Sticky in this case means that the load balancer will try to keep a history of clients and forward requests from the same client to the same machine. This means the load balancer needs to keep a list of clients and where it directed those clients. Depending on cache size, clients may fall off the list, and on future requests they may be forwarded to a different server.
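In miniature, that client-to-server memory is just a bounded map. A sketch of the idea; the eviction policy and sizes are assumptions:

```python
from collections import OrderedDict

class StickyTable:
    """Remember which backend served each client; evict the oldest entry
    when the table is full, so that client may later land elsewhere."""

    def __init__(self, backends, max_entries=10_000):
        self.backends = backends
        self.assigned = OrderedDict()       # client -> backend, oldest first
        self.max_entries = max_entries
        self.rr = 0                         # round-robin counter for new clients

    def backend_for(self, client_ip):
        if client_ip not in self.assigned:
            if len(self.assigned) >= self.max_entries:
                self.assigned.popitem(last=False)   # client "falls off the list"
            self.assigned[client_ip] = self.backends[self.rr % len(self.backends)]
            self.rr += 1
        return self.assigned[client_ip]
```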
Round robin is a pretty simple idea: for each request that comes in, send it to the next server in the list. More complicated algorithms might take into account how many requests are going to a particular server and how long those requests are taking, then try to rebalance new requests to favor the faster servers. That part is complicated, though.
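As for the "in socket terms" part of the question: the usual mode is the proxy one. The client connects to the balancer itself, which opens its own connection to a backend chosen in rotation and pipes bytes both ways. A minimal sketch, with made-up backend addresses:

```python
import itertools
import socket
import threading

BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # assumed app servers
rotation = itertools.cycle(BACKENDS)                    # round robin

def pipe(src, dst):
    # Copy bytes one way until the peer closes, then close the other side.
    while (data := src.recv(4096)):
        dst.sendall(data)
    dst.close()

def handle(client):
    backend = socket.create_connection(next(rotation))  # balancer's own connection
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

listener = socket.socket()
listener.bind(("0.0.0.0", 9000))   # clients connect to the balancer, not a server
listener.listen()
while True:
    conn, _ = listener.accept()
    handle(conn)
```

A sticky variant would pick the backend from a table keyed by client address (like the one above) instead of plain rotation. The "ask for a free server, then connect directly" model also exists (essentially DNS-based or redirect-based balancing), but then the balancer is out of the data path entirely.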