WebLogic load balancing and request re-routing to another server

I'm totally new to clustering and load balancing.
What I'm trying to do is deploy an application on a cluster that contains 2 managed servers; if one of the managed servers goes down, requests should be redirected to the other server, which is still up.
For example:
I have 2 managed servers (M1:7021 and M2:7022).
I have a cluster C1 containing M1 and M2.
I have an application App1 and a data source, both deployed on C1.
Application App1 is working fine.
The way I'm accessing the application is:
http://10.184.111.11:7021/App1/
and
http://10.184.111.11:7022/App1/
Now, suppose M1 (7021) goes down and a request comes in to
http://10.184.111.11:7021/App1/
Then it should be redirected to http://10.184.111.11:7022/App1/
Any help is highly appreciated. Thanks!

I believe you will need a load balancer (or a software equivalent) to sit in front of the WebLogic servers and direct traffic down to them.
The idea is that you access your application at http://loadBalancer.com/App and the load balancer forwards your request to one of the WebLogic servers. Meanwhile, in the background, the load balancer continually performs health checks on the two WebLogic servers to see whether they are running.
In the event that one of the WebLogic servers goes down, the load balancer will mark it as inactive and forward all traffic to the WebLogic server that is still running. Once the failed WebLogic server has come back online, the load balancer will begin routing traffic through it again.
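For instance, if you fronted the cluster with Apache HTTP Server and Oracle's WebLogic proxy plug-in, the httpd.conf fragment might look like the sketch below. The host and ports are taken from the question; the module file name varies by Apache version and platform, so treat this as an illustration rather than a drop-in config:

    # Load the WebLogic proxy plug-in (file name varies by platform/version)
    LoadModule weblogic_module modules/mod_wl_24.so

    <Location /App1>
        SetHandler weblogic-handler
        # Comma-separated cluster members; the plug-in fails over
        # automatically when one of them stops responding.
        WebLogicCluster 10.184.111.11:7021,10.184.111.11:7022
    </Location>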

@Garreth Well, in fact WebLogic DOES provide a built-in software load balancer. You are supposed to use OHS or Apache for load balancing in production environments, but for development, HttpClusterServlet works great.
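For reference, a minimal sketch of the web.xml for a proxy web application built around HttpClusterServlet might look like this (the ports are taken from the question; the proxy app would be deployed to a separate front-end server instance, not to the cluster itself):

    <!-- web.xml fragment of a small proxy web app on a front-end instance -->
    <servlet>
        <servlet-name>HttpClusterServlet</servlet-name>
        <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
        <init-param>
            <!-- Pipe-separated list of cluster members to balance across -->
            <param-name>WebLogicCluster</param-name>
            <param-value>10.184.111.11:7021|10.184.111.11:7022</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>HttpClusterServlet</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>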

Related

SSL Configuration in Clustered environment

We have an Oracle application (Agile PLM) deployed in a clustered environment. We have one admin node and two managed nodes supporting our application, where the admin node and one managed node are on the same server. We also have a load balancer that manages traffic across the cluster.
We want to configure SSL in our application so that the application URL is accessible over HTTPS only. We have already configured SSL at the load balancer level (by installing security certificates on the WebLogic admin server), but we want to know whether we have to configure SSL on the managed servers as well, or whether terminating HTTPS at the load balancer is sufficient.
All users access the application through the load balancer URL only, but since I am on the development team, I am aware that we can also connect to the application through the managed server URLs, which still run on HTTP. Is it mandatory to bring the managed servers onto HTTPS as well, or is that just a good practice but not necessary?
It's not necessary, though probably a good practice.
I believe I have read in Oracle's installation guide that the recommended setup is HTTP on the managed servers, with SSL terminated at the load balancer. This may have changed.
For what it's worth, I leave HTTP on the managed servers.
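If you did decide to enable SSL on a managed server as well, the change (normally made through the Administration Console rather than by editing files by hand) ends up as a fragment like the following in the domain's config.xml. The server name and SSL port here are hypothetical:

    <!-- DOMAIN_HOME/config/config.xml (fragment) -->
    <server>
        <name>ManagedServer1</name>
        <ssl>
            <enabled>true</enabled>
            <listen-port>7002</listen-port>
        </ssl>
        <!-- Optionally disable the plain-HTTP listen port so the server
             is reachable over HTTPS only -->
        <listen-port-enabled>false</listen-port-enabled>
    </server>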

Google Compute Engine load balancing not routing properly

I am new to Google Compute Engine, and I am trying to set up network load balancing with 2 VMs serving web pages.
For example, I have 2 VMs, app1 and app2, both running an Apache server that serves a simple web page.
Both VMs run Red Hat Enterprise Linux Server release 7.0 (Maipo).
I am able to access both web pages through their IPs in a browser.
I created a network load balancing setup, and both apps show as green in the target pool, which means the load balancer can connect to both VMs.
But when I hit the IP of the load balancer, it renders the page from only one server. If I manually stop the server on that VM, the load balancer's IP redirects to the other app. So I believe the load balancer can identify the health of both VMs and redirect accordingly.
But it is not balancing the traffic. Can anyone help me solve this issue?
I think the network load balancer doesn't forward traffic on a round-robin basis; I was able to confirm this with the load balancer setup that I have. As per the documentation:
By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port.
Because the hash is computed from the connection's endpoints, repeated requests from the same client (especially over a kept-alive connection) tend to land on the same instance, which is what you are seeing. HTTP(S) load balancing, by contrast, proxies requests in a round-robin fashion: https://cloud.google.com/compute/docs/load-balancing/http/

Difference Between Load Balancing and Load Balancer

I need to know the difference between a load balancer and load balancing.
Load balancing is the functionality provided by a load balancer :)
In software architecture, a load balancer proxies client requests to a pool of application servers, using some algorithm, with the objective of balancing the load of client requests evenly across the pool.
Load balancing refers to efficiently distributing incoming network traffic across a group of backend servers, also known as a server farm or server pool.
A load balancer acts as the “traffic cop” sitting in front of your servers and routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts to send requests to it.
Reference: https://www.nginx.com/resources/glossary/load-balancing/
Load balancing helps spread incoming request traffic across a cluster of servers. If a server is not available to take a request, the load balancer passes the request to another server.
Load balancers, in turn, are what achieve the above. They can sit between:
User and web server
Web server and internal application servers
Internal application servers and database servers
Application servers and cache servers
Different types of load balancers:
Smart client - a client that takes a pool of service hosts and balances load across them, detects downed hosts, and avoids sending requests their way.
Hardware load balancer - buy your own dedicated high-performance server, e.g. Citrix NetScaler.
Software load balancer - buy a software load balancer to avoid the pain of building your own smart client, or if you are not ready to spend on a dedicated server. This is more cost-effective than the two options above, e.g. VMware, HAProxy, etc.
To my knowledge both refer to the same thing, but you could say that a load balancer is the device used to balance traffic according to server availability, while load balancing is the concept describing how this is achieved.
Please correct me if I'm wrong!

Coherence*Web and load balancing

We had 2 managed servers sitting behind a Citrix NetScaler load balancer with sticky sessions enabled, so each request was forwarded to the same managed server.
Now we have configured a Coherence*Web cluster with the 2 managed servers and the Citrix NetScaler load balancer sitting in front. How do we call the Coherence cluster from the load balancer without calling the managed servers directly? Is there an IP address for the Coherence cluster that we need to target from the NetScaler, or how do we reach the cluster without addressing individual servers?
Thanks a lot.
You keep the same configuration that you had before for load balancing across the web or application servers.
Coherence*Web makes sure that session data is shared between those servers (even if you add and remove servers dynamically!) and is not lost if one server dies.
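In other words, the NetScaler still targets the managed servers' ordinary HTTP listen addresses; there is no separate cluster IP to call. Coherence*Web is enabled inside the web application itself. As a minimal sketch, assuming the Coherence*Web libraries are already installed in the domain, the weblogic.xml session descriptor that turns it on looks like this:

    <!-- weblogic.xml fragment: store HTTP sessions in the Coherence*Web cache -->
    <session-descriptor>
        <persistent-store-type>coherence-web</persistent-store-type>
    </session-descriptor>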

Load balancing server

I would like to know about load-balanced servers.
I have an application that runs on load-balanced servers.
When I make changes to the application's data, how do they take effect?
Also, when we restart the application, what exactly happens on a load-balanced server?
Well, the load balancer is separate from the application code; it basically just routes requests to one of a number of configured servers (a.k.a. downstream servers, for instance web application servers, Apache/nginx+PHP, etc.) that handle the actual request. So to update the application (i.e. a Java servlet, JSP, PHP page, static HTML page, image, etc.), all the downstream servers have to be updated. Data (i.e. articles, the user database, etc.) will usually be stored in a database that all the downstream servers connect to.
As for restarting the application: while it restarts on a given downstream server, that server is temporarily unable to service requests. The load balancer will get an "unable to connect" error when it tries to send requests to that server, and will then try the next server in its list of downstream servers. Depending on how the load balancer is set up, it will automatically retry the restarted server, and once that downstream server is up again it will resume servicing requests. So to update the application you basically update one downstream server at a time; since the other servers take over the load while each one restarts, there is no downtime, and the clients are none the wiser.
Is this a hardware appliance or a server running HAProxy/nginx/something else?