How can I manage session replication in WebLogic clustering?

I have developed an application and want to deploy it in a cluster environment, but I am not sure how sessions get replicated when one server goes down.
What do I need to do to enable session replication?
Any suggestion would be greatly appreciated!

Try this approach: it is generally better to handle session replication outside of WebLogic, on the web-tier end, since WebLogic is the deployment target.
You can also look into session persistence and how it can be achieved with Hazelcast.
I'm sure a proper cluster with a caching product such as Coherence can maintain sessions across multiple machines and provide high availability in WebLogic. From an infrastructure standpoint, I would go with Coherence or "Coherence-like" products to achieve session replication and persistence.

WebLogic has built-in clustering, so you don't need a separate product to do this. Also, WebLogic Suite includes Coherence, and with WebLogic Suite you can turn on Coherence session clustering just by ticking a check-box in the console.
Please read this
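Beyond configuring the cluster itself, two application-side details matter: replication is normally switched on through the session-descriptor in WEB-INF/weblogic.xml (for example, persistent-store-type set to replicated_if_clustered), and anything you store in the HTTP session must be serializable or it cannot be copied to a secondary server. A minimal sketch of the latter (the servlet and CartItem class are made-up examples):

```java
import java.io.IOException;
import java.io.Serializable;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class AddToCartServlet extends HttpServlet {

    // Session attributes must implement Serializable; otherwise the cluster
    // cannot copy them to a secondary server and they are lost on failover.
    public static class CartItem implements Serializable {
        private static final long serialVersionUID = 1L;
        final String sku;
        final int quantity;

        CartItem(String sku, int quantity) {
            this.sku = sku;
            this.quantity = quantity;
        }
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        HttpSession session = req.getSession(true);
        // Replicated to the secondary server when session replication is enabled.
        session.setAttribute("lastItem", new CartItem("ABC-123", 1));
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}
```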

Related

How can I set up Redis in cluster mode or master-slave mode in PCF?

This is regarding a use case where we are trying to use Redis in PCF (Pivotal Cloud Foundry). In our use case, we refresh the Redis cache once or twice daily with the required data, and then the API queries Redis and provides the response.
One particular concern is that we want API queries to be served from Redis only, which means Redis has to be available at all times. But while we are refreshing the Redis DB, Redis cannot serve the APIs since it is refreshing the keys. To avoid that, we want to set up Redis in cluster mode or master-slave mode, so that while one instance is being written to, the other can be read from.
How can we set up a Redis cluster or master-slave mode in PCF to fulfil this requirement?
Please provide any other suggestions as well that you may have.
At the time I write this, the Redis for Pivotal Platform product does not support clustering. See Availability, in the docs here -> https://docs.pivotal.io/redis/2-3/erc.html#offerings.
All Redis for Pivotal Platform services are single VMs without clustering capabilities. This means that planned maintenance jobs (e.g., upgrades) can result in 2–10 minutes of downtime, depending on the nature of the upgrade. Unplanned downtime (e.g., VM failure) also affects the Redis service.
Redis for Pivotal Platform has been used successfully in enterprise-ready apps that can tolerate downtime. Pre-existing data is not lost during downtime with the default persistence configuration. Successful apps include those where the downtime is passively handled or where the app handles failover logic.
If you require clustered Redis, you'd need to look at a different offering. Redis Labs has some offerings that integrate with PCF, you could use a Cloud Provider's Redis offering, or you could host your own.
If the solution you use isn't integrated into PCF, you can create a user-provided service with cf cups and provide the Redis credentials to your application that way. It will function just like a Redis service instance created through the marketplace.
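Once the credentials are available to the app (for example, exported into hypothetical REDIS_HOST / REDIS_PORT / REDIS_PASSWORD environment variables, or parsed out of VCAP_SERVICES), connecting is the same as for any Redis instance. A rough sketch using the Jedis client:

```java
import redis.clients.jedis.Jedis;

public class RedisSmokeTest {
    public static void main(String[] args) {
        // Hypothetical environment variables; in a real PCF app you would
        // typically read these values out of VCAP_SERVICES instead.
        String host = System.getenv("REDIS_HOST");
        int port = Integer.parseInt(System.getenv("REDIS_PORT"));
        String password = System.getenv("REDIS_PASSWORD");

        try (Jedis jedis = new Jedis(host, port)) {
            if (password != null && !password.isEmpty()) {
                jedis.auth(password);
            }
            // Simple round trip to verify the binding works.
            jedis.set("healthcheck", "ok");
            System.out.println("healthcheck = " + jedis.get("healthcheck"));
        }
    }
}
```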

Prometheus target management

We have recently started using Prometheus in our production environment. Before, we only had 30-40 nodes per service and those servers did not change very often, so we just listed them in prometheus.yml. Now the list has become too long to keep in one file and it changes much more frequently than before. My question is: should I use file_sd_config to move those server lists out of the yml file and manage those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster. If your servers are in a cloud provider such as AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you go with Consul, the answer is yes: the agent must run on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
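Either route only takes a few lines of scrape configuration. A minimal sketch of both options in prometheus.yml (job names, file paths and the Consul address are placeholders, not values from your setup):

```yaml
scrape_configs:
  # Option 1: file-based discovery. Prometheus re-reads the listed files when
  # they change, so the target lists live outside prometheus.yml and can be
  # regenerated by scripts or configuration management.
  - job_name: 'node-file-sd'
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets/*.json'   # JSON or YAML target lists
        refresh_interval: 5m

  # Option 2: Consul-based discovery. Targets come from the Consul catalog,
  # so registering or deregistering a service updates Prometheus automatically.
  - job_name: 'node-consul-sd'
    consul_sd_configs:
      - server: 'consul.example.internal:8500'
        services: ['node-exporter']
```

A target file such as /etc/prometheus/targets/billing.json would then contain entries like `[{"targets": ["10.0.0.11:9100", "10.0.0.12:9100"], "labels": {"service": "billing"}}]`.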

Redis on Azure VM vs Azure Redis Cache

We have tested both Redis installed on an Azure VM and Azure Redis Cache; both work the same and I can't see a difference in performance. Has anyone used both in a large-scale application? If so, can you share how the performance and durability of the two compare?
We have analysed the following:
Monitoring
In-zone replication
Multi-zone replication
Auto fail-over
Data persistence
Backup
Pricing
SSL Authentication & Encryption
On all of the above, Azure Redis Cache has the upper hand.
Still, I want to make sure which one is the best.
Does using a VM have any bottlenecks?
I would go for Azure Redis Cache, mainly because it's fully managed. At the end of the day you still have nodes under the hood, but why should you have to care about maintaining a VM: hotfixes, patches, security updates, and so on?
I would ask the question the other way around: why should you use VMs at all?
MG

Weblogic cache replication with clusters

Is it possible to implement a cache in WebLogic (10.3.5.0) that is accessible from every instance of a cluster?
Does the WebLogic API offer something, perhaps based on RMI, that provides this?
Is there a framework like Ehcache that offers this?
Oracle Coherence can handle this situation and it comes bundled with WebLogic 10.3.5.
http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13952/taskhelp/coherence/CreateCoherenceServers.html
"Coherence servers (also known as Coherence data nodes) are stand-alone cache servers, dedicated JVM instances responsible for maintaining and managing cached data."

Weblogic http session failover

Currently I have the following setup:
Hardware load balancer directing traffic to two physical servers each with 2 instances of weblogic running.
Works OK. I'd like to be able to shut down one of the servers without dropping active sessions. Right now, if I shut down one of the physical servers, any traffic that was going there gets bounced back to a login screen.
I'm looking for the simplest way of accomplishing this with the smallest performance hit.
Things I've considered so far:
1. See if I can somehow store the session information on the load balancer and, through some load-balancer magic, have it notice a server is dead and try another one with the same session information (not sure this is possible).
2. Configure WebLogic clustering. Not sure what the performance hit would be. I'm guessing this is what I'll end up with, but I'm still fishing for alternatives.
3. ?
What I currently have is an overly designed DR solution (which was the requirement), but I'd like to move it more in the direction of HA (for the flexibility).
Edit: Also, is it worthwhile to create two clusters and replicate the sessions between them (I was thinking one cluster per site; the sites are close enough)? This would cover the event of one cluster failing.
You could try setting up JDBC session storage, pointing (of course) both instances to the same data source without setting up a cluster, but I think the right approach would be setting up a WebLogic cluster.
A nice thing about clustering WebLogic Servers is that (from the link above, emphasis mine):
Sessions can be shared across clustered WebLogic Servers. Note that session persistence is no longer a requirement in a WebLogic Cluster. Instead, you can use in-memory replication of state. For more information, see Using WebLogic Server Clusters.
We've got a write-up of this on our blog, http://blog.c2b2.co.uk/2012/10/basic-clustering-with-weblogic-12c-and.html, which provides step-by-step instructions on setting up web session failover in a cluster.
Clusters are not heavyweight, assuming you don't store huge amounts of data in the session, as it will be replicated.
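Whichever option you pick, it is switched on in the web application's WEB-INF/weblogic.xml session descriptor. A minimal sketch, with the in-memory replication variant active and the JDBC variant shown commented out (SessionDataSource is a hypothetical data source name):

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <session-descriptor>
    <!-- In-memory replication across the cluster (primary/secondary copies). -->
    <persistent-store-type>replicated_if_clustered</persistent-store-type>

    <!-- JDBC-based alternative: both instances share one data source.
    <persistent-store-type>jdbc</persistent-store-type>
    <persistent-store-pool>SessionDataSource</persistent-store-pool>
    -->
  </session-descriptor>
</weblogic-web-app>
```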