WebLogic load balancing

I'm currently developing a project that runs on a clustered WebLogic environment. I've successfully set up the cluster, but now I want a load-balancing solution (currently, for testing purposes only, I'm using WebLogic's HttpClusterServlet with round-robin load balancing).
Is there any documentation that gives a clear comparison (with pros and cons) of the various ways of providing load-balancing for WebLogic?
These are the main topics I want to cover:
Performance (normal and on failover);
What failures can be detected and how fast is the failover recovery;
Transparency to failure (e.g., ability to automatically retry an idempotent request);
How well is each load-balancing solution adapted to various topologies (N-tier, clustering).
Thanks in advance for your help.

Is there any documentation that gives a clear comparison (with pros and cons) of the various ways of providing load-balancing for WebLogic?
It's not clear what kind of application you are building and what kind of technologies are involved. But...
You will find useful information in Failover and Replication in a Cluster and Load Balancing in a Cluster (also look at Cluster Implementation Procedures), but no real comparison between the different options, at least not to my knowledge. That said, the choice isn't that complex: 1. hardware load balancers will perform better than software load balancers, and 2. if you go for a software load balancer, the WebLogic plug-in for Apache is the choice recommended by BEA for production. Actually, for web apps it's pretty common to serve the static files from a web server and thus to use the Apache mod_wl plug-in. See the Installing and Configuring the Apache HTTP Server Plug-In chapter.
These are the main topics I want to cover:
Performance (normal and on failover): If this question is about session persistence, WebLogic uses in-memory replication by default, and this works pretty well with relatively low overhead.
What failures can be detected and how fast is the failover recovery: It is unclear which protocols you're using. But see Connection Errors and Clustering Failover.
Transparency to failure (e.g., ability to automatically retry an idempotent request): Clarifying the protocols you are using would make answering easier. If this question is about HTTP requests, then see Figure 3-1 Connection Failover.
How well is each load-balancing solution adapted to various topologies (N-tier, clustering): The question is unclear and too vague (for me). But maybe have a look at Cluster Architectures.
Oh, by the way, another nice chapter that you should read: Clustering Best Practices.

Related

Failover with Spring AMQP and RabbitMQ HA

There are multiple articles suggesting that load-balancer should be used in front of RabbitMQ cluster.
However, there are also multiple references that Spring AMQP is using some
failover implementation like connection reset when broker comes back to life.
I have several questions regarding this topic (given that those articles are more or less old and it's 2018 today)
When using Spring AMQP, is load balancing still required?
If load balancing is still suggested, how would I handle the affinity of a primary queue to its node? There would be a lot of inter-node traffic within the cluster, because a round-robin load balancer would hit the correct cluster node only 1/n of the time (a 1-(1/n) miss rate).
Does Spring AMQP support some kind of topology awareness, which would allow it to consume from correct node?
There were some articles suggesting that clients should publish to and consume from nodes respecting the locality of queues. Does this still apply? How does this all fit together given load balancing, Spring AMQP failover and CachingConnectionFactory?
Can anybody please answer these questions and also point to relevant references that provide additional information for verification?
Thanks a lot
For each of your bullets:
A load balancer makes little sense with the default configuration of Spring AMQP, since it opens a single, long-lived connection that is shared across all consumers. In 2.0, you can configure the RabbitTemplate to use a separate connection; using a different connection for publishers and consumers is the recommended configuration, and it will be the default in 2.1.
It might make sense to use a load balancer if you configure the connection factory to cache connections (instead of just channels), since then each component gets its own connection (see the sketch after this list).
See next bullet.
See Queue Affinity and the LocalizedQueueConnectionFactory. It uses the management plugin to determine which node currently hosts the queue and connects to that. It will not work with a load balancer since it needs to connect to the actual node.
It is my understanding from several discussions that queue affinity is only needed in the most extreme environments and that, in most environments, the difference is immeasurable. However, environments and networks differ so much that YMMV, so you may want to test. My general rule of thumb is to avoid premature optimization, since the added complexity of the configuration may simply not be worth the benefit (and you may not have a problem in the first place).
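As a minimal sketch of the first two bullets (the broker host, exchange and routing key below are made up for illustration, not taken from the question), caching whole connections and giving the template its own publisher connection in Spring AMQP 2.0 might look like this:

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class AmqpConfigSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical address of a load balancer in front of the RabbitMQ cluster.
        CachingConnectionFactory cf = new CachingConnectionFactory("rabbit-lb.example.com");

        // Cache whole connections instead of just channels, so each component
        // gets its own connection -- the case where a load balancer can help.
        cf.setCacheMode(CachingConnectionFactory.CacheMode.CONNECTION);

        RabbitTemplate template = new RabbitTemplate(cf);
        // Spring AMQP 2.0+: publish on a connection separate from the consumers'.
        template.setUsePublisherConnection(true);

        // Hypothetical exchange and routing key, purely for illustration.
        template.convertAndSend("someExchange", "someRoutingKey", "hello");

        cf.destroy();
    }
}
```

With channel caching (the default), all of this traffic would share one long-lived connection, so a load balancer in front of the cluster would see a single TCP connection no matter how many components you run.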

Embedded BrokerService vs installed ActiveMQ broker

I would like to know whether they are the same or different feature-wise. Could you also mention the pros and cons of each? Also, please mention real-world use cases for an embedded BrokerService vs. an installed ActiveMQ broker. Thanks in advance!
ActiveMQ is just a Java application, and the embedded version offers essentially the same features as the stand-alone version. In fact, you can configure an embedded broker to take its configuration from an XML file, in which case it will look very similar to the stand-alone broker.
Embedding a broker is a reasonable thing to do if you need the benefit of programmatic configuration; that is, you want to configure things according to rules which are hard to implement in an XML file. It also makes sense if you want close-coupled operation between the broker and the application components, with message data being passed in memory. This might be the situation if you're using JMS as an inter-module communication mechanism within the application.
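To make the embedded option concrete, here is a minimal sketch of a broker configured programmatically (the broker name and options are illustrative, not a recommended production setup):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerSketch {

    public static void main(String[] args) throws Exception {
        // Programmatic configuration instead of an activemq.xml file.
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");
        broker.setPersistent(false);   // in-memory only, for this sketch
        broker.setUseJmx(false);
        broker.start();

        // In-JVM clients talk to the embedded broker over the VM transport,
        // so message data never leaves the process.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("vm://embedded?create=false");
        // ... create JMS connections/sessions from cf as usual ...

        broker.stop();
    }
}
```

An embedded broker can also be created from an XML file (e.g. via ActiveMQ's BrokerFactory with an xbean: URI), which is the configuration-file variant mentioned above.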
Embedding a broker has the disadvantage -- and it can be a profound one -- of making it difficult to disentangle problems in the broker from problems in your application. Figuring out the cause of, say, runaway memory consumption could be very difficult. You can get commercial support for ActiveMQ, should you need it, but it will be hard for any commercial organization to support a hybrid broker+application installation.

Designing scaling services with high available load-balancing

When running self-healing, scalable, stateless services in frameworks such as Marathon, the "affirmed" pattern is to have a tool for service discovery (e.g., Bamboo) that feeds a load balancer (e.g., HAProxy), preferably with some automatic configuration, so that users can be proxied to services when hitting the load balancer.
I don't seem to find much material about how to make the load-balancer itself highly available.
If the host that runs the load-balancer dies, I would like the services to still be accessed on the same URIs without downtimes.
What I desire can be achieved with Pacemaker/Corosync, but the fact that this specific point is often omitted in the various tutorials and blog posts, makes me think that maybe there is a simpler pattern or that I am overlooking the problem.
Do you have any suggestions?
One common approach is to run multiple machines with Bamboo/HAProxy on them pulling container locations from the Marathon/Mesos masters. In an AWS/Cloud-du-jour environment, these proxy machines can go behind an ELB (or even in an Autoscaling Group if your automation is good enough) to give you true HA functionality.

Real world example of Apache Helix, Zookeeper, Mesos and Erlang?

I am new to:
Apache ZooKeeper : ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
Apache Mesos : Apache Mesos is a cluster manager that simplifies the complexity of running applications on a shared pool of servers.
Apache Helix : Apache Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes.
Erlang Language : Erlang is a programming language used to build massively scalable soft real-time systems with requirements on high availability.
It sounds to me like Helix and Mesos are both useful as cluster-management systems. How are they related to ZooKeeper? It'd be better if someone could give me a real-world example of their usage.
I am curious to know how BOINC distributes tasks to its clients. Is it using any of the above technologies? (Forget about Erlang.)
I just need a brief view on it :)
Erlang was built by Ericsson, designed for use in phone systems. By design, it runs hundreds, thousands, or even tens of thousands of small processes that handle tasks by sending information between them instead of sharing memory or state. This enables all sorts of interesting features that are great for highly available distributed systems, such as:
Hot code reloading. Each process is paused, its relevant module code is swapped out, and it is resumed where it left off, so deploys can happen without restarting or causing significant interruption.
Easy distributed messaging and clustering. Sending a message to a local process or a remote one is fairly seamless in most instances.
Process-local GC. Garbage collection happens in each process independently, instead of a global stop-the-world event as in Java, aiding low-latency results.
Supervision trees and complex process hierarchy and monitoring/managing.
A few concrete real-world examples that make great use of Erlang would be:
MongooseIM: a highly performant and incredibly scalable distributed XMPP/chat server.
Riak: a distributed key/value store.
Mesos, on the other hand, you can think of as a way of turning a datacenter of servers into a platform for teams and developers. If I, as a company, own a datacenter with 10,000 physical servers and have 1,000 engineers developing hundreds of services, Mesos is a good way to let the engineers deploy and manage services across that hardware without needing to worry about the servers directly. It's an abstraction layer on top of the physical servers that allows you to share and intelligently allocate resources.
As a user of Mesos, I might say that I have Service X. It's an executable bundle that lives in location Y. Each instance of Service X needs 4 GB of RAM and 2 cores, and I need 8 instances, which will be attached to a load balancer. You can specify this in configuration and deploy based on that config. Mesos will find hardware that has enough RAM and CPU capacity available to handle each instance of that service and start it running in each of those locations.
It can handle a lot of other more complex topics about the orchestration of them as well, but that's probably a bit in-depth for this :)
ZooKeeper's most common use cases are service discovery and configuration management. You can think of it, fundamentally, as a bit like a nested key-value store, where services can look at pre-defined paths to see where other services currently live.
A simple example is that I have a web service using a shared database cluster. I know a simple name for that database cluster and where its configuration lives in ZooKeeper. I can look up (or repeatedly poll) that path in ZooKeeper to check what the addresses of the active database hosts are. And on the other side, if I take a database node out of rotation and replace it with a new one, the config in ZooKeeper gets updated with the new address, and anything continually watching it will detect the change and reconnect accordingly.
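A rough sketch of that lookup with the plain ZooKeeper Java client (the ensemble address and znode path are invented for this example):

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class DbDiscoverySketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical ZooKeeper ensemble for this sketch.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 3000,
                (WatchedEvent e) -> { /* connection state events */ });

        // Hypothetical pre-defined path where the database config lives.
        String path = "/services/db-cluster/primary";

        // Read the current address and leave a watch so we are notified
        // the next time the node is replaced and the config is updated.
        Watcher onChange = event -> System.out.println("db config changed: " + event);
        byte[] data = zk.getData(path, onChange, null);
        System.out.println("current db address: " + new String(data));

        zk.close();
    }
}
```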
A more complex use case for ZooKeeper is how Kafka uses it (or did at the time I last used Kafka). Kafka has streams, and streams have many shards. Each consumer of each stream uses ZooKeeper to save a checkpoint in each shard after it has read and processed up to a certain point in the stream. That way, if the consumer crashes or is restarted, it knows where to pick up in the stream.
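The same client API covers that checkpointing pattern. A sketch of a consumer persisting its position per shard (the znode layout here is invented for illustration and is not Kafka's actual layout):

```java
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class OffsetCheckpointSketch {

    // Hypothetical znode layout: /consumers/<group>/offsets/<stream>/<shard>
    // Parent znodes are assumed to already exist in this sketch.
    static void saveOffset(ZooKeeper zk, String group, String stream,
                           int shard, long offset) throws Exception {
        String path = "/consumers/" + group + "/offsets/" + stream + "/" + shard;
        byte[] data = Long.toString(offset).getBytes(StandardCharsets.UTF_8);
        if (zk.exists(path, false) == null) {
            zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            zk.setData(path, data, -1);  // -1: ignore version, just overwrite
        }
    }

    static long readOffset(ZooKeeper zk, String group, String stream,
                           int shard) throws Exception {
        String path = "/consumers/" + group + "/offsets/" + stream + "/" + shard;
        byte[] data = zk.getData(path, false, null);
        return Long.parseLong(new String(data, StandardCharsets.UTF_8));
    }
}
```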
I don't know about Mesos and the Erlang language, but this article might help you with Helix and ZooKeeper.
This article tells us:
ZooKeeper is responsible for gluing all the parts together, whereas Helix is the cluster-management component that registers all cluster details (the cluster itself, nodes, resources).
The article is about clustering in jBPM using Helix and ZooKeeper, but it will give you a basic idea of what Helix and ZooKeeper are used for.
And from most of the articles I read online, it seems that ZooKeeper and Helix are used together.
Apache Zookeeper can be installed on a single machine or on a cluster.
It can be used to keep track of logs. It can provide various services on a distributed platform.
Storm and Kafka rely on Zookeeper.
Storm uses Zookeeper to store all state so that it can recover from an outage in any of its (distributed) component services.
Kafka queue consumers can use Zookeeper to store information on what has been consumed from the queue.

Embedded or an External ActiveMQ Broker with Glassfish

This would be my first time using ActiveMQ (instead of the out-of-the-box OpenMQ in GF), and I am trying to determine which approach is better in terms of scaling and maintaining an ActiveMQ environment. We do have experience in setting up and maintaining Glassfish clusters and deploying applications to them. But we are contemplating which approach is better, as we don't want to go down a rabbit hole we can't get out of because we built environments around it, only to find towards the end that the infrastructure we had set up wouldn't scale.
Has anybody tried using both approaches? Even if anybody implemented one of the approaches with Glassfish, telling us their experience (gains and pains) would be very helpful and appreciated.
For 99% of cases, it's usually better to deploy a standalone broker - this way you're treating your messaging as just another layer of the infrastructure, much like a database. When a broker is standalone, you can set it up as highly available, upgrade it at will without modifying your applications (a broker can be upgraded without upgrading the client libraries), and can scale it out as appropriate later on if you need to (most projects don't).
I have seen people deploy brokers as embedded, with a convoluted network of brokers to get all the boxes in a cluster talking to each other. This usually ends in tears and a reversion to a separate master-slave pair of brokers, which is all they needed all along.
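As a small sketch of why the standalone approach stays out of the application's way, a client can point ActiveMQ's failover transport at the master-slave pair (broker hostnames here are hypothetical) and never needs to know which broker is currently active:

```java
import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClientSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical master/slave pair; the failover transport reconnects
        // to whichever broker is currently alive, without application changes.
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "failover:(tcp://broker-a:61616,tcp://broker-b:61616)?randomize=false");

        Connection connection = cf.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual ...
        connection.close();
    }
}
```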