JDBC Connection Pooling in a Tomcat Cluster Environment

I'm relatively new to this, but I have a Tomcat cluster set up (using mod_proxy from httpd) with session replication (on a separate Redis server) for fault tolerance.
I have a couple of questions about this setup:
My application (Spring/Hibernate) uses a different database per user. The problem is that the DataSource (configured through Spring, with Hibernate for persistence) is created at the Tomcat level, so whatever connection pooling I do happens per server.
As per the cluster configuration, each Tomcat instance will create its own connection pool.
I'd like to know if connection pooling is possible at the cluster level using Tomcat, i.e. is there a way to make sure that all the servers in the cluster use a shared connection pool?
I do not want to configure a DataSource on every Tomcat instance because of performance issues. Before the cluster setup, the application was deployed on a single server and the DataSource was configured to allow only a few (50) connections in the pool per DataSource.
Now, in a clustered environment, I cannot afford to create or split that number of connections across every Tomcat instance, and dynamic registration of nodes would create further problems. If connection pooling at the cluster level is not possible or is inefficient, is there an alternative solution to this problem?

I'm going to handle your questions in reverse order, since the second one is simpler.
Database connection pooling in Tomcat cannot be configured cluster-wide: you have to configure a separate pool for each node in the cluster. But this doesn't have to be bad news... there's nothing wrong with each node having its own pool of 5 or 10 or 100 connections.
It's true that you might end up with too many users connecting to the database at the same time and overwhelming it, but that could happen with a single node as well. There is nothing conceptually different about multiple nodes that wouldn't also be true of a single node.
The key is to make sure that your cluster balances users appropriately. If you have a limit of e.g. 5 database connections per node, you don't want 100 users landing on one node while the other nodes get only 5 users each: the popular node's 100 users will have to share its 5 connections, while on the other nodes each user gets a connection all to themselves.
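As a minimal sketch of that idea, each node can be given a fixed share of a cluster-wide connection budget. This uses Tomcat's own JDBC pool (tomcat-jdbc); the URL, credentials, budget, and node count are assumptions for illustration:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class NodePool {
    static final int GLOBAL_BUDGET = 50; // what the database can tolerate in total
    static final int NODE_COUNT = 5;     // assumed static cluster size

    public static DataSource create() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://dbhost:3306/app"); // placeholder URL
        p.setDriverClassName("com.mysql.cj.jdbc.Driver");
        p.setUsername("app");                     // placeholder credentials
        p.setPassword("secret");
        p.setMaxActive(GLOBAL_BUDGET / NODE_COUNT); // 10 connections on this node
        p.setInitialSize(2);                        // warm up a couple eagerly
        return new DataSource(p);
    }
}

As long as the node count is static, the cluster as a whole can never exceed the database's limit. Dynamic node registration breaks this simple arithmetic, which is exactly the complication raised in the question.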
Back to your first item, which is more complicated. If you have a separate database per user, then conventional connection pooling is essentially impossible, because you have to establish a new connection for every user every time. Those connections aren't poolable, at least not without being quite careful about it. It sounds like you have an architectural issue that you may have to solve before you can identify a technical solution to it.
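If you do want to attempt the "quite careful" route, one sketch is a bounded cache of small per-user pools, created lazily and shrunk by the idle evictor. The database-per-user naming scheme and every size below are assumptions, not a recommendation:

import java.util.concurrent.ConcurrentHashMap;
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class TenantPools {
    private final ConcurrentHashMap<String, DataSource> pools = new ConcurrentHashMap<>();

    public DataSource forUser(String userId) {
        return pools.computeIfAbsent(userId, id -> {
            PoolProperties p = new PoolProperties();
            p.setUrl("jdbc:mysql://dbhost:3306/db_" + id); // assumed db-per-user naming
            p.setDriverClassName("com.mysql.cj.jdbc.Driver");
            p.setUsername("app"); // placeholder credentials
            p.setPassword("secret");
            p.setMaxActive(2); // tiny pool per user database
            p.setMinIdle(0);   // allow the pool to drain completely when idle
            p.setTimeBetweenEvictionRunsMillis(30000); // evictor wakes every 30s...
            p.setMinEvictableIdleTimeMillis(60000);    // ...and closes connections idle > 60s
            return new DataSource(p);
        });
    }
}

Note that this only bounds connections per node per user; with many simultaneously active users you can still exhaust the database's global limit, which is why the architectural question should be settled first.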

Related

Too many database connection objects - SQL Server

My application is built on Java and uses a SQL Server database.
There would be roughly around 50 people using the application, but I get up to 900 connection objects in the database and the DB goes down during peak times. How can I use the connection objects effectively and keep the number of connections under control?
To control the DB connections, here are the points to follow.
Check the code to confirm that every connection that is created is destroyed after use, i.e. every DB transaction should follow the pattern below. If you are using an ORM, make sure you are opening and closing connections as per the ORM provider's recommendation.
openDBConnection();
try {
    performDBTransaction();
} finally {
    closeDBConnection();
}
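With plain JDBC and Java 7+, the same discipline is usually written as a try-with-resources block; the DataSource here stands for whatever pool the application already uses:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TransactionExample {
    static void performDBTransaction(DataSource ds) throws SQLException {
        try (Connection conn = ds.getConnection()) {
            // ... run statements on conn ...
        } // conn.close() runs automatically, returning it to the pool even on exceptions
    }
}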
Check whether you have enabled connection pooling. If yes, then check the minimum connections property in the connection string or ORM configuration: whenever a single transaction is performed against SQL Server, the pool creates the number of connections mentioned in the minimum connections property, so perhaps 900 connections are configured in your application.
If your application is a web app or communicates with web services, then check whether you have enabled web gardening (multiple processes handling web requests on the same server) or web farming (multiple servers handling requests behind a load balancer). If one or both of these exist, be aware that the formula below governs connection creation for SQL Server:
Total number of pooled connections = total number of processes (web gardening number) * total number of web/app servers (web farming number) * min connections parameter value in the connection string.
For example, 3 web/app servers each running 6 processes with a min connections value of 50 gives 3 * 6 * 50 = 900 pooled connections, which would explain the 900 connections you are seeing.
For Effective Use:
Connection pooling is already an effective way to use connections, but the right pool size depends on your requirements and on your DB server's hardware capabilities. If you are sure that only 50 users will perform transactions, one each in a given unit of time, then simply configure the connection pool so that only 50 connections can be created.
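As a concrete sketch in Java, that cap could be expressed with a pooling library such as HikariCP (the JDBC URL and credentials here are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class AppPool {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:sqlserver://dbhost:1433;databaseName=app"); // placeholder URL
        config.setUsername("app"); // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(50); // hard cap: at most 50 connections from this process
        config.setMinimumIdle(5);      // keep only a few connections warm when idle
        return new HikariDataSource(config);
    }
}

Keep the web gardening/farming formula above in mind: that 50 is per process, so multiply by your process and server counts to get the real total hitting the database.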

Redis Read-Replicas On Web Servers

I am currently developing a system that makes heavy use of redis for a series of web services.
One of the key criteria of this system is fast responses.
At present the layout (ignoring load balancers etc) is as follows:
2 x Front End Play Framework 2.x Servers
2 x Job Handling/Persistence Play Framework 2.x Servers
1 x MySQL Server
2 x Redis Servers, 1 master, 1 slave
In this setup, redis serves 2 tasks - as a shared cache and also as a message bus.
Currently the front end servers host a service which interacts in its entirety with Redis.
The front end servers try to balance reads across the pool of read servers (currently the master and 1 slave), but being Redis they need to make their writes to the master server. They handle cache updates etc by sending messages on the queues, which are picked up by the job handling servers.
The job handling servers do blocking listens (BLPOP) to the Redis write server and process tasks when necessary. They have the only connection to MySQL.
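For illustration, the read/write split and the blocking queue consumer described above could look roughly like this with the Jedis client (hostnames and key names are made up):

import java.util.List;
import redis.clients.jedis.Jedis;

public class RedisRoles {
    public static void main(String[] args) {
        Jedis master = new Jedis("redis-master", 6379);   // all writes go here
        Jedis replica = new Jedis("redis-replica", 6379); // reads can hit a slave

        // Front end: read from the replica, enqueue work via the master.
        String cached = replica.get("cache:user:42");
        System.out.println("cached: " + cached);
        master.rpush("jobs", "refresh:user:42");

        // Job handler: block until a task arrives, then process it.
        while (true) {
            List<String> task = master.blpop(0, "jobs"); // timeout 0 = block forever
            System.out.println("processing " + task.get(1)); // result is [key, value]
        }
    }
}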
At present the read replica server is a dedicated server - more there to be able to switch it to write master if the current master fails.
I was thinking of putting a Redis read-replica slave on each of the front end servers, which would mean even lower read latency, while writes (messages for the queues) get pushed to the write server on a separate connection.
If I need to scale, I could just add more front end servers with read slaves.
It sounds like a win/win to me as even if the write server temporarily drops out, the front end servers can still read data at least from their local slave and act accordingly.
Can anyone think of reasons why this might not be such a good idea?
I understand the advantages of this approach... but consider this: what happens when you need to scale just one component (i.e. FE server or Redis) but not the other? For example, more traffic could mean you'll need more app servers to handle it while the Redises will be considerably less loaded. On the other hand, if your dataset grows and/or more load is put on the Redises - you'll need to scale these and not the app.
The design should fit your requirements, and the simplicity of your suggested setup has a definite appeal (i.e. to scale, just add another identical lego block), but from my meager experience, anything that sounds too good to be true usually is. In the longer run, even if this works for you now, you may find yourself in a jam down the road. My advice: separate your Redis(es) from your app servers, deal with and/or work around the network, and make sure each layer is available and scalable in its own right.

Redis active-active replication

I am using redis version 2.8.3. I want to build a redis cluster, but in this cluster there should be multiple masters. This means I need multiple nodes that have write access and whose writes are applied to all other nodes.
I could build a cluster with a master and multiple slaves. I just configured the slaves' redis.conf files and added this:
slaveof myMasterIp myMasterPort
That's all. Then I tried to write something into the db via the master. It was replicated to all slaves and I really like it.
But when I tried to write via a slave, it told me that slaves have no right to write. After that I just set the slave's read-only status in redis.conf to false. Hence, I could write something into the db.
But I realized that it was not replicated to my master, so it was not replicated to any of the other slaves either.
This means I could not build an active-active cluster.
I tried to find out whether redis has active-active cluster capability, but I could not find an exact answer about it.
Is it possible to build an active-active cluster with redis?
If it is, how can I do it?
Thank you!
Redis v2.8.3 does not support multi-master setups. The real question, however, is why do you want to set one up? Put differently, what challenge/problem are you trying to solve?
It looks like the challenge you're trying to solve is how to reduce the network load (more on that below) by eliminating over-the-net reads. Since Redis isn't multi-master (yet), the only way to do it is by setting up each app server with a master and a slave (of the other master), i.e. a grand total of 4 Redis instances (and twice the RAM).
The simple scenario is when each app updates only a mutually exclusive subset of the database's keys. In that scenario, this kind of setup may actually be beneficial (at least in the short term). If, however, both apps can touch all keys, or if even just one key is "shared" for writes between the apps, then you'll need to bake locking/conflict-resolution/etc. logic into your apps to consolidate the differences between the local master and slave (and that may be a bit of overkill). In either case, however, you'll end up with too many (i.e. more than 1) Redises, which means more admin effort at the very least.
Also note that by colocating app and database on the same server you're setting yourself up for near-certain scalability failure. What will happen when you need more compute resources for your apps or Redis? How will you add yet another app server to the mix?
Which brings me back to the actual problem you are trying to solve: network load. Why exactly is that an issue? Are your apps so throughput-heavy, or is the network so thin, that you are willing to go to such lengths? Or maybe latency is the issue you want to resolve? Be that as it may, I recommend that you consider a time-proven design instead, namely separating Redis from the apps and putting it on its own resources. True, the network will hit you in the face and you'll have to work around/with it (which is what everybody else does). On the other hand, you'll have more flexibility and control over your much simpler setup, and that, in my book, is a huge gain.
Redis Enterprise has had this feature for quite a while, but if you are looking for an open source solution KeyDB is a fork with Active Active support (called Active Replica).
Setting it up is just a little more work than standard replication:
Both servers must have "active-replica yes" in their respective configuration files
On server B execute the command "replicaof [A address] [A port]"
Server B will drop its database and load server A's dataset
On server A execute the command "replicaof [B address] [B port]"
Server A will drop its database and load server B's dataset (including the data it just transferred in the prior step)
Both servers will now propagate writes to each other. You can test this by writing to a key on Server A and ensuring it is visible on B and vice versa, as in the sketch after the link below.
https://github.com/JohnSully/KeyDB/wiki/KeyDB-(Redis-Fork):-Active-Replica-Support
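A minimal verification sketch with the Jedis client, assuming both KeyDB instances are already configured as described above (hostnames are placeholders):

import redis.clients.jedis.Jedis;

public class ActiveActiveCheck {
    public static void main(String[] args) throws InterruptedException {
        Jedis a = new Jedis("server-a", 6379);
        Jedis b = new Jedis("server-b", 6379);

        a.set("probe", "written-on-A"); // write through one side
        Thread.sleep(200);              // give replication a moment
        System.out.println(b.get("probe")); // expect "written-on-A"

        b.set("probe2", "written-on-B"); // and the other direction
        Thread.sleep(200);
        System.out.println(a.get("probe2")); // expect "written-on-B"
    }
}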

Couchbase node failure

My understanding could be amiss here. As I understand it, Couchbase uses a smart client to automatically select which node to write to or read from in a cluster. What I DON'T understand is, when this data is written/read, is it also immediately written to all other nodes? If so, in the event of a node failure, how does Couchbase know to use a different node from the one that was 'marked as the master' for the current operation/key? Do you lose data in the event that one of your nodes fails?
This sentence from the Couchbase Server Manual gives me the impression that you do lose data (which would make Couchbase unsuitable for high availability requirements):
With fewer larger nodes, in case of a node failure the impact to the application will be greater
Thank you in advance for your time :)
By default, when data is written into Couchbase, the client returns success as soon as the data is written to one node's memory. After that, Couchbase saves it to disk and performs replication.
If you want to ensure that data is persisted to disk, most client libs have functions that allow you to do that, and with their help you can also ensure that data has been replicated to another node. This mechanism is called observe.
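For example, with the 2.x-era Couchbase Java SDK these observe-style durability requirements are expressed as PersistTo/ReplicateTo arguments on the write; the bucket name and document contents here are made up:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.PersistTo;
import com.couchbase.client.java.ReplicateTo;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class DurableWrite {
    public static void main(String[] args) {
        Bucket bucket = CouchbaseCluster.create("couchbase-host").openBucket("default");
        JsonDocument doc = JsonDocument.create("user::42", JsonObject.create().put("name", "Ann"));
        // Block until the write is on disk on the master AND replicated to one other node.
        bucket.upsert(doc, PersistTo.MASTER, ReplicateTo.ONE);
    }
}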
When one node goes down, it should be failed over. Couchbase Server can do that automatically when an auto-failover timeout is set in the server settings. I.e. if you have a 3-node cluster, stored data has 2 replicas, and one node goes down, you'll not lose data. If a second node fails, you'll still not lose all data: it will be available on the last node.
If a node that was master goes down and is failed over, another live node becomes master. In your client you point to all servers in the cluster, so if it is unable to retrieve data from one node, it tries to get it from another.
Also, if you have 2 nodes at your disposal, you can install 2 separate Couchbase servers, configure XDCR (cross datacenter replication), and check server availability manually with HA proxies or something else. That way you get only one IP to connect to (the proxy's), which will automatically fetch data from a live server.
Couchbase is actually a good fit for HA systems.
Let me explain in a few sentences how it works. Suppose you have a 5-node cluster. The application, using the client API/SDK, is always aware of the topology of the cluster (and any change in the topology).
When you set/get a document in the cluster, the client API uses the same algorithm as the server to choose which node it should be written to. So the client selects the node using a CRC32 hash of the key and writes to that node. Then, asynchronously, the cluster copies 1 or more replicas to the other nodes (depending on your configuration).
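Purely as an illustration of that idea (not the exact production formula), mapping a key to one of a fixed number of partitions via CRC32 looks like this:

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class KeyPartition {
    static final int NUM_PARTITIONS = 1024; // Couchbase uses 1024 vBuckets

    static int partitionFor(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % NUM_PARTITIONS); // same input -> same partition on every client
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("user::42")); // deterministic partition id
    }
}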
Couchbase has only 1 active copy of a document at a time, so it is easy to stay consistent: applications get and set against this active copy.
In case of failure, the server has some work to do. Once the failure is discovered (automatically or by a monitoring system), a failover occurs. This means that the replicas are promoted to active and it is now possible to work as before. Usually you then rebalance the nodes to balance the cluster properly.
The sentence you are quoting simply says that the fewer nodes you have, the bigger the impact in case of failure/rebalance, since you will have to route the same number of requests to a smaller number of nodes. Happily, you do not lose data ;)
You can find some very detailed information about this way of working on the blog of Couchbase's CTO:
http://damienkatz.net/2013/05/dynamo_sure_works_hard.html
Note: I am working as developer evangelist at Couchbase

Persistent DB Connections - Yea or Nay?

I'm using PHP's PDO layer for data access in a project, and I've been reading up on it and seeing that it has good innate support for persistent DB connections. I'm wondering when/if I should use them. Would I see performance benefits in a CRUD-heavy app? Are there downsides to consider, perhaps related to security?
If it matters to you, I'm using MySQL 5.x.
You could use this as a rough "ruleset":
YES, use persistent connections, if:
There are only a few applications/users accessing the database, i.e. you will not end up with 200 open (but probably idle) connections because 200 different users are shared on the same host.
The database is running on another server that you are accessing over the network
A single application accesses the database very often
NO, don't use persistent connections, if:
Your application only needs to access the database 100 times an hour.
You have many webservers accessing one database server
You're using Apache in prefork mode. It uses one connection for each child process, which can ramp up fairly quickly. (via #Powerlord in the comments)
Using persistent connections is considerably faster, especially if you are accessing the database over a network. It doesn't make as much difference if the database runs on the same machine, but it is still a little faster. However, as the name says, the connection is persistent, i.e. it stays open even when it is not used.
The problem with that is that in the "default configuration", MySQL only allows 1000 parallel "open channels", and after that new connections are refused (you can tweak this setting). So if you have, say, 20 web servers with 100 clients on each, and every one of them has just one page access per hour, simple math shows that you'll need 2000 parallel connections to the database. That won't work.
Ergo: only use it for applications with lots of requests.
In brief, my experience says that persistent connections should be avoided as far as possible.
Note that mysql_close is a no-operation (no-op) for connections that are created using mysql_pconnect. This means a persistent connection cannot be closed by the client at will. Such a connection will be closed by the MySQL server when no activity occurs on it for longer than wait_timeout. If wait_timeout is a large value (say 30 min), then the MySQL server can easily reach the max_connections limit. In that case, MySQL will not accept any further connection requests. This is when your pager starts beeping.
In order to avoid reaching the max_connections limit, using persistent connections needs careful balancing of the following variables...
Number of apache processes on one host
Total number of hosts running apache
wait_timeout variable in mysql db server
max_connections variable in mysql db server
Number of requests served by one apache process before it is re-spawned
So, please use persistent connections only after enough deliberation. You may not want to invite complex runtime issues for the small gain that persistent connections offer.
Creating connections to the database is a fairly expensive operation. Persistent connections are a good idea. In the ASP.Net and Java world, we have "connection pooling", which is roughly the same thing, and also a good idea.
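To make the Java analogy concrete, here is a minimal pooling sketch using Apache Commons DBCP2 (the URL and credentials are placeholders): the pool keeps connections open and hands them out on demand, which is roughly what persistent connections approximate in PHP.

import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

public class PooledAccess {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://dbhost:3306/app"); // placeholder URL
        ds.setUsername("app");                     // placeholder credentials
        ds.setPassword("secret");
        ds.setMaxTotal(10); // at most 10 live connections from this process

        try (Connection conn = ds.getConnection()) {
            // ... use conn; close() returns it to the pool instead of tearing it down
        }
    }
}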
IMO, the real answer to this question is whatever works best for your app. I would recommend you benchmark your app using both persistent and non-persistent connections.
Maggie Nelson # Objectively Oriented posted about this in August and Robert Swarthout made an accompanying post with some hard numbers. Both are pretty good reads.
In my humble opinion:
When using PHP for web development, most of your connections will only "live" for the life of the executing page. A persistent connection is going to cost you a lot of overhead, as you'll have to put it in the session or some such thing.
99% of the time a single non-persistent connection that dies at the end of the page execution will work just fine.
The other 1% of the time, you probably should not be using PHP for the app, and there is no perfect solution for you.
In general, you'll need to use non-persistent connections sometimes, and it's nice to have a single pattern to apply to db connection design (as long as there's relatively little upside to using persistent connections in your context).
I was going to ask this same question but rather than ask the same question again I'll just add some information that I've found.
Are PHP persistent connections evil ?
Persistent Database Connections
It is also worth noting that the newer mysqli extension does not even include the option to use persistent database connections.
I'm still using persistent connections at the moment but plan to switch to non-persistent in the near future.