First, I would like to say thanks a lot for this forum.
I have a question about cluster configuration with Hazelcast and load balancing.
In the documentation (https://docs.wso2.com/display/CLUSTER420/Clustering+the+Gateway), the load balancer configuration section shows:
```
upstream wso2.am.com {
    sticky cookie JSESSIONID;
    server xxx.xxx.xxx.xx4:9763;
    server xxx.xxx.xxx.xx5:9763;
}
```
Why use sticky sessions if the cluster already handles session control? Is my understanding wrong?
Thanks a lot.
In APIM, Hazelcast clustering is used for cache invalidation among nodes, inter-node communication for deployment synchronization (dep-sync), and so on, but not for session replication. Therefore you need sticky sessions.
We are using Kong API Gateway as a single gateway for all APIs. We are facing a latency issue with a few of our APIs (1500-2000 ms). When we checked, the latency was being caused by the "rate limiting" plugin. When we disable the plugin, latency improves and the response time is the same as what we get directly from the IP (close to 300 ms).
I'm trying to set up a Redis node to cache database queries, but I'm not sure how to configure Kong to read from Redis itself. How can we cache the database queries on the Redis node?
We are using PostgreSQL for Kong.
I think maybe you are trying to do a couple of different things at once.
First, rate limiting: what is the value of your config.policy parameter? The Kong documentation lists three values: "local (counters will be stored locally in-memory on the node), cluster (counters are stored in the datastore and shared across the nodes) and redis (counters are stored on a Redis server and will be shared across the nodes)."
If you are seeing high latency and your config.policy is set to cluster or redis, it might be due to latency between Kong and Postgres/Redis (depending on which policy you're using). If you are using rate limiting just to prevent abuse, the 'local' policy is faster. (There's more about this in the Kong documentation.)
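If switching to the local policy helps, it can be applied through Kong's Admin API. Here is a minimal sketch in Java, assuming the Admin API listens on localhost:8001 and a service named example-service exists (both placeholders for your setup; the 100 requests/minute limit is likewise illustrative):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnableLocalRateLimit {
    public static void main(String[] args) throws Exception {
        // Rate-limiting plugin config: 100 requests/minute, counters kept
        // in-memory on each node ("local"), so no datastore hop per request.
        String body = "{\"name\":\"rate-limiting\","
                + "\"config\":{\"minute\":100,\"policy\":\"local\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8001/services/example-service/plugins"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The same call can of course be made with curl; the point is just that policy is a per-plugin config field you can change without touching your upstream services.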
The other question is about caching: Kong Enterprise has a built-in caching plugin, but for Kong Community, since it's built on top of Nginx, you can do caching with Nginx. This link might help you.
There is a community custom plugin out there that enables caching with Redis without the need for Kong Enterprise: https://github.com/globocom/kong-plugin-proxy-cache
Maybe you could combine that with rate limiting to achieve the desired latency, or use this plugin as inspiration.
I would like to know the answers to the questions below:
1) If the Ignite server is restarted, I need to restart the client (web applications). Is there any way the client can reconnect to the server on server restart? I know that when the server restarts it is assigned a different ID, and because of this the existing connection becomes stale. Is there a way to overcome this problem, and if so, which version of Ignite supports this feature? I currently use version 1.7.
2) Can I have a client cache like the one Ehcache provides? I don't want a client cache that is a front end to a distributed cache. When I looked at the Near Cache API, it doesn't have cache name properties like the cache configuration does, and it acts only as a front end to a distributed cache. Is it possible to create a client-only cache in Ignite?
3) If I have a large object to cache, I find that serialization and deserialization take a long time in Ignite, and retrieving the object from the distributed cache is slow. Is there any way to speed up retrieval of large objects from Ignite Data Grid?
This topic is discussed on the Apache Ignite users mailing list: http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Client-Reconnect-and-Client-Cache-td10018.html
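On question 1: if your Ignite version supports client reconnect, a cache operation issued while the client is disconnected throws an exception carrying a reconnect future you can wait on, rather than forcing a client restart. A minimal sketch of that handling pattern (the cache name "myCache" and the put are illustrative):

```java
import javax.cache.CacheException;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;
import org.apache.ignite.Ignition;

public class ClientReconnectExample {
    public static void main(String[] args) {
        Ignition.setClientMode(true);          // join the cluster as a client node
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        try {
            cache.put(1, "value");
        } catch (CacheException e) {
            if (e.getCause() instanceof IgniteClientDisconnectedException) {
                IgniteClientDisconnectedException cause =
                        (IgniteClientDisconnectedException) e.getCause();
                cause.reconnectFuture().get(); // block until the client rejoins
                cache.put(1, "value");         // retry the failed operation
            } else {
                throw e;
            }
        }
    }
}
```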
I am currently setting up infrastructure for an app in AWS. The app is written in Django and uses Redis for some transactions. High availability is key for this application, and I am having a hard time getting my head around how to configure Redis for high availability.
Application level changes are not an option.
Ideally I would like a Redis setup that I can write to, read from, replicate, and scale when required.
The current setup is a Redis failover scenario: HAProxy --> Redis Master --> Replica Slave.
Could someone guide me through the various options, and how to scale Redis for high availability?
Use an AWS ElastiCache Redis cluster with Multi-AZ. It provides automatic failover and an endpoint for accessing the master node.
If the master goes down, AWS routes your endpoint to another node. Everything happens automatically; you don't have to do anything.
Just make sure that if you are caching DNS-to-IP lookups in your application, the TTL is set to around 60 seconds instead of the default.
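Your app is Django, which doesn't cache DNS lookups by default, but this advice matters for any client stack in front of ElastiCache that does. On the JVM, for example, the cap would be set like this (a sketch; 60 seconds matches the suggestion above, and the property must be set before the first lookup):

```java
import java.security.Security;

public class JvmDnsTtl {
    public static void main(String[] args) {
        // Cap successful-lookup caching at 60 seconds so a failed-over
        // ElastiCache endpoint is re-resolved to the new primary quickly.
        Security.setProperty("networkaddress.cache.ttl", "60");
    }
}
```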
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoFailover.html
Thanks,
KS
I have developed an application and want to deploy it in a cluster environment, but I am not sure how to replicate the session when one server goes down.
What are the things I need to do for session replication?
Any suggestion would be greatly appreciated!
Try using this
It is always better to have session replication off of WebLogic and on the web tier when the target is WebLogic.
You can also look into session persistence and how it can be achieved in Hazelcast.
I'm sure a proper cluster with caching, such as Coherence, can maintain sessions across multiple machines and provide high availability in WebLogic. From an infrastructure standpoint, I would go with Coherence or "Coherence-like" products to achieve session replication and persistence.
WebLogic has built-in clustering, so you don't need some other product to do this. Also, WebLogic Suite includes Coherence, and with WebLogic Suite you can turn on Coherence session clustering just by ticking a checkbox in the console.
Please read this
I have a GlassFish v2u2 cluster with two instances and I want failover between them. Every document I read on this subject says that I should use a load balancer in front of GlassFish, such as Apache httpd. In this scenario failover works, but I again have a single point of failure.
Is GlassFish able to do that failover without a load balancer in front?
The way we solved this is that we have two IP addresses, both of which respond to the URL. The DNS provider (DNS Made Easy) will round-robin between the two. Setting the TTL low will ensure that if one server fails, the other will answer. When one server stops responding, DNS Made Easy will send only the other host as the server for this URL. You will have to trust the DNS provider, but you can buy service with extremely high availability of the DNS lookup.
As for high availability, you can have a cluster setup that allows for session replication, so that the user won't lose more than, potentially, the one request that fails.
Hmm... JBoss can do failover without a load balancer, according to the docs (http://docs.jboss.org/jbossas/jboss4guide/r4/html/cluster.chapt.html), Chapter 16.1.2.1, "Client-side interceptor".
As far as I know, a GlassFish cluster provides in-memory session replication between nodes. If I use Sun's GlassFish Enterprise Application Server, I can use HADB, which promises 99.999% availability.
No, you can't do it at the application level.
Your options are:
Round-robin DNS: expose both your servers to the internet and let the client do the load balancing. This is quite attractive, as it will definitely enable failover.
Use a different layer 3 load-balancing system, such as Windows Network Load Balancing, Linux network load balancing, or the one I wrote, called "Fluffy Linux cluster".
Use a separate load balancer that has a failover hot spare.
In any of these cases you still need to ensure that your database, session data, etc. are available and in sync between the members of your cluster, which in practice is much harder.