I have a basic SQL question. If there are two active connections, "A" and "B", to the SQL server, and let's say a deadlock occurs between the two, then to resolve the deadlock SQL Server will roll back one of the transactions, either connection "A"'s or connection "B"'s. Let's say SQL Server rolls back the transaction of connection "A": can this rollback also cause a connection timeout or connection break for connection A?
Neither of those will occur. When a connection is chosen as the deadlock victim, all that happens is that its transaction is automatically rolled back, and nothing else. The connection is still alive (and can be used again immediately if desired), but any work done in the killed transaction is lost and must be redone.
Timeouts are a completely different kind of event: they are always controlled client-side, and happen when the client "gives up" waiting for a response. A deadlock, by contrast, is generated server-side and results in an error on the connection; but, as with many other errors, the connection itself remains alive.
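To illustrate that last point, the client can simply retry on the same connection pool after being picked as the victim. A minimal Go sketch, assuming the github.com/microsoft/go-mssqldb driver (whose mssql.Error type carries the server error number; 1205 is SQL Server's deadlock-victim error) and a hypothetical accounts table:

    package main

    import (
        "database/sql"
        "errors"
        "log"

        mssql "github.com/microsoft/go-mssqldb"
    )

    // runTransaction is a stand-in for the real unit of work.
    func runTransaction(db *sql.DB) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        defer tx.Rollback() // harmless no-op after a successful Commit
        if _, err := tx.Exec(`UPDATE accounts SET balance = balance - 1 WHERE id = 1`); err != nil {
            return err
        }
        return tx.Commit()
    }

    func main() {
        db, err := sql.Open("sqlserver", "sqlserver://user:pass@localhost?database=test")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        for attempt := 1; attempt <= 3; attempt++ {
            err = runTransaction(db)
            var sqlErr mssql.Error
            if errors.As(err, &sqlErr) && sqlErr.Number == 1205 {
                // Deadlock victim: the transaction is gone, but the
                // connection is fine, so just run the work again.
                log.Printf("attempt %d: deadlock victim, retrying", attempt)
                continue
            }
            break // success, or an error that retrying won't fix
        }
        if err != nil {
            log.Fatal(err)
        }
    }

Note that nothing reconnects here: the retry reuses the very same pool, which is the whole point.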
Related
I was asked this question in an interview. Why is it important to close a database connection? Is it just good practice because it might be wasting resources, or is there something more to it?
You already mentioned the first reason: resource leaks. These mean that the usage of memory, sockets and file descriptors on your system keeps increasing until your program or the database crashes, gets killed, or brings the operating system to its knees. Even before that happens, your system would likely become unresponsive, slow, and prone to various timeouts, network disconnects and so on.
If your code depends on implicit commits (which is a bad idea anyway), you could lose the data that your application writes to the database.
Not closing a connection could also leave locks and transactions in the database, which would mean that other connections get stuck while waiting on a lock held by the zombie connection. For example, if you have an external reporting system, it might stop working. Database backups might also stop working, leaving you vulnerable to loss of data.
Depending on circumstances, unfinished transactions could also fill up database transaction logs and/or temporary space, potentially bringing the database offline in a state that requires manual intervention.
If you are using connection pools, not closing a connection can prevent it from being returned to the pool. The pool would eventually be depleted, preventing your program from opening new connections.
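In Go's database/sql, for example, this pool-depletion failure mode usually comes from unclosed result sets rather than unclosed connections. A sketch (the users table is hypothetical):

    package demo

    import "database/sql"

    func countUsers(db *sql.DB) (int, error) {
        rows, err := db.Query(`SELECT id FROM users`)
        if err != nil {
            return 0, err
        }
        // Without this Close, any early return below would pin the
        // underlying connection and never hand it back to the pool.
        defer rows.Close()

        n := 0
        for rows.Next() {
            n++
        }
        return n, rows.Err()
    }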
In Go, when using a SQL database, does one need to close the DB (db.Close) before closing an application? Will the DB automatically detect that the connection has died?
The DB will do its best to detect a dead connection, but it may not manage to. It is better to release what you acquire as soon as possible.
Consider the ways a connection can die without being closed properly:

- A power failure, network issue, or bare exit happens without resources being released. The TCP keepalive mechanism will eventually kick in and try to detect that the connection is dead.
- The client is paused and doesn't receive any data. In this case the server's send() system call will block, waiting for the TCP connection to accept data that the client never reads.
Until the dead connection is detected, it may prevent:

- graceful shutdown of the cluster;
- advancing the event horizon, if the connection was holding exclusive locks as part of a transaction (blocking autovacuum in PostgreSQL, for example).

The server keepalive config can be shortened to detect this earlier (the default of roughly 2h 12m in PostgreSQL will be far too long for many workloads). There may also be a hard limit on max open connections; until detection, some connections will be zombies: still there and unusable, but counting against the limit.
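On the client side, Go's database/sql exposes knobs that bound the damage from zombie or long-lived connections. The values below are illustrative assumptions, not recommendations:

    package demo

    import (
        "database/sql"
        "time"
    )

    func configurePool(db *sql.DB) {
        db.SetMaxOpenConns(50)                  // hard cap, so leaks can't exhaust the server limit
        db.SetMaxIdleConns(10)                  // keep a few warm connections around
        db.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically
        db.SetConnMaxIdleTime(5 * time.Minute)  // drop idle ones long before server keepalive would notice
    }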
The database will notice that the connection has died and take appropriate actions: for instance, all uncommitted transactions active on that connection will be rolled back, and the user session will be terminated.
But notice that this is a "recovery" scenario from the point of view of the database engine: it cannot just throw up its hands when a client disconnects; rather, it has to take explicit actions to restore a consistent state.
On the other hand, shutting down properly when the program goes down "the normal way" (that is, not because of a panic or log.Fatal()) is really not that hard. And since the sql.DB instance is usually a program-wide global variable, it's even simpler: just close it in main(), as Matt suggested.
If you're initialising a connection in any function, you're normally better off deferring the call to close immediately, i.e.
db, err := sql.Open("postgres", dsn) // for example
if err != nil { log.Fatal(err) }
defer db.Close()
This will close the database handle once the enclosing function exits. It's handy in a main function, since the call to Close() then happens automatically when the program exits.
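One caveat worth spelling out: deferred calls do not run on os.Exit or log.Fatal, so a long-running program may want to turn an interrupt signal into a normal return from main. A sketch, assuming the lib/pq driver and a DSN in the environment:

    package main

    import (
        "database/sql"
        "log"
        "os"
        "os/signal"

        _ "github.com/lib/pq" // driver choice is an assumption
    )

    func main() {
        db, err := sql.Open("postgres", os.Getenv("DSN"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close() // runs on a normal return from main

        // Deferred calls are skipped by os.Exit and log.Fatal, so
        // convert Ctrl-C into a normal return instead.
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, os.Interrupt)
        <-sig
        log.Println("interrupted; closing DB cleanly")
    }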
I have a procedure currently running under one SPID, and I've found the query is running too slowly. The proc performs updates/inserts. If I kill the session, what will happen?
SQL Server will stop executing the query and roll back any open transactions. That rollback will undo any changes that haven't been fully committed. Since SQL Server adheres to the ACID principle, you shouldn't be able to leave your database in a bad state, even by killing SPIDs. That isn't to say you couldn't leave your data in a bad state, e.g. by not wrapping multiple operations in a transaction to enforce consistency upon failure.
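To make that last point concrete, here is the shape of the fix in Go: put related statements in one transaction, so that a kill (or crash) midway rolls both back together. Table and column names are made up for the example:

    package demo

    import (
        "context"
        "database/sql"
    )

    func moveStock(ctx context.Context, db *sql.DB) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op once Commit has succeeded

        if _, err := tx.ExecContext(ctx, `UPDATE warehouse SET qty = qty - 10 WHERE item = 'widget'`); err != nil {
            return err
        }
        if _, err := tx.ExecContext(ctx, `UPDATE storefront SET qty = qty + 10 WHERE item = 'widget'`); err != nil {
            return err
        }
        return tx.Commit() // only now do both changes become permanent
    }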
Check the docs here: https://msdn.microsoft.com/en-us/library/ms173730.aspx
In a nutshell, whichever SPID you kill will be disconnected:
KILL can be used to terminate a normal connection, which internally terminates the transactions that are associated with the specified session ID
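For what it's worth, issuing the KILL from application code is straightforward; note that KILL takes a literal session ID, so it cannot be parameterized. A sketch in Go (permissions permitting):

    package demo

    import (
        "database/sql"
        "fmt"
    )

    // killSession terminates another session by its SPID. The rollback
    // of that session's open transaction happens server-side, exactly
    // as described above.
    func killSession(db *sql.DB, spid int) error {
        _, err := db.Exec(fmt.Sprintf("KILL %d", spid))
        return err
    }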
I have an application that connects to a SQL server to run some inserts/updates. We've been getting deadlocks, but I've noticed that in addition to reporting the deadlock, the connection to the database is dropped and the client has to reconnect.
Is this normal? Even if it's not normal behaviour, is it possible that SQL Server is not only deciding that my client will be the deadlock victim, but also terminating the connection?
Is there a way to stop the connection being dropped?
By definition, a deadlock means two connections are stuck: SPID 1 has something locked that SPID 2 needs, and SPID 2 has something locked that SPID 1 needs. Neither can complete its transaction, because each needs something the other has locked. In cases like this the server chooses a victim SPID and rolls its transaction back so that the other can complete.
The only way to stop the deadlocks is to figure out why they are occurring in the first place. You can run a trace to monitor deadlocks, capture the information about them as a deadlock graph, and then view that graph in SSMS.
More information available here: http://www.simple-talk.com/sql/learn-sql-server/how-to-track-down-deadlocks-using-sql-server-2005-profiler/
Here's the sequence of events my hypothetical program makes...
1. Open a connection to the server.
2. Run an UPDATE command.
3. Go off and do something that might take a significant amount of time.
4. Run another UPDATE that reverses the change made in step 2.
5. Close the connection.
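(In Go, the sequence might look like the sketch below; the machines table and column names are hypothetical.)

    package demo

    import "database/sql"

    func doWork(db *sql.DB, longTask func()) error {
        // Step 2: mark this machine as busy.
        if _, err := db.Exec(`UPDATE machines SET busy = 1 WHERE name = 'me'`); err != nil {
            return err
        }
        // Step 3: the dangerous window; a crash here leaves busy = 1 forever.
        longTask()
        // Step 4: reverse the change made in step 2.
        _, err := db.Exec(`UPDATE machines SET busy = 0 WHERE name = 'me'`)
        return err
    }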
But oh-no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do, just as the connection is opened but before any changes have been made, is tell the server that should this connection close for whatever reason, it should run some SQL. That way, I can be sure that if something goes wrong, the closing update will still run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
1. Take the SPID of the connection you opened and store it in some temp table, along with the text of the reversal update.
2. Use a background process (either SSIS or something else) to monitor the temp table and check that the SPID is still present as an open connection.
3. If the connection dies, the background process executes the stored revert command.
4. If the connection completes properly, the SPID is removed from the temp table so that the background process no longer reverts it when the connection closes.
Comments or improvements welcome!
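A rough Go sketch of that background process, assuming a shared pending_reverts(spid, revert_sql) table (a regular table rather than a temp table, since temp tables are private to the session that created them) and go-mssqldb's @p1 placeholder syntax. Note that SPIDs get reused, so a production version would also need to record something like the session's login time:

    package demo

    import (
        "database/sql"
        "log"
        "time"
    )

    func watchdog(db *sql.DB) {
        for {
            // Find stored reverts whose SPID is no longer connected.
            rows, err := db.Query(`
                SELECT p.spid, p.revert_sql
                FROM pending_reverts AS p
                WHERE NOT EXISTS (
                    SELECT 1 FROM sys.dm_exec_sessions AS s
                    WHERE s.session_id = p.spid)`)
            if err != nil {
                log.Println("watchdog:", err)
                time.Sleep(10 * time.Second)
                continue
            }
            type revert struct {
                spid int
                stmt string
            }
            var dead []revert
            for rows.Next() {
                var r revert
                if err := rows.Scan(&r.spid, &r.stmt); err == nil {
                    dead = append(dead, r)
                }
            }
            rows.Close()
            for _, r := range dead {
                // Run the stored revert, then forget about it.
                if _, err := db.Exec(r.stmt); err == nil {
                    db.Exec(`DELETE FROM pending_reverts WHERE spid = @p1`, r.spid)
                }
            }
            time.Sleep(10 * time.Second)
        }
    }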
I'll expand on my comment. In general, I think you should reconsider your approach: all database access code should open a connection, execute a query, then close the connection, relying on connection pooling to mitigate the expense of opening lots of database connections.
If we are talking about a single SQL command whose target rows should not change while it runs, that is a problem best handled by the transaction isolation level. For that, you might investigate the SNAPSHOT isolation level in SQL Server 2005+.
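From Go, for instance, the isolation level can be requested per transaction. This assumes the database has ALLOW_SNAPSHOT_ISOLATION enabled and a driver that supports sql.LevelSnapshot; the table name is hypothetical:

    package demo

    import (
        "context"
        "database/sql"
    )

    func readConsistently(ctx context.Context, db *sql.DB) (int, error) {
        tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSnapshot})
        if err != nil {
            return 0, err
        }
        defer tx.Rollback()

        // Everything in this transaction sees one stable snapshot,
        // no matter how long it runs.
        var n int
        if err := tx.QueryRowContext(ctx, `SELECT COUNT(*) FROM big_table`).Scan(&n); err != nil {
            return 0, err
        }
        return n, tx.Commit()
    }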
If instead we are talking about a series of queries that form a long-running transaction, that is more complicated, and can be handled by storing a transaction state that other connections read in order to determine whether they can proceed. Going down this road, you need to give users tools to cancel a long-running transaction that might no longer be applicable.
Assuming it's even possible... this will only help you if the client machine explodes during the transaction. There's also a risk of false positives: the connection might get dropped for a few seconds due to network noise.
The approach I'd take is to start a process on another machine that periodically pings the first one to check that it's still online, and takes action if it becomes unreachable.
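A minimal sketch of such a monitor in Go; the host, port, polling interval, and failure threshold are all assumptions, and requiring several consecutive failures is what guards against the network-noise false positives mentioned above:

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        const target = "worker-1:7070" // heartbeat endpoint on the first machine
        failures := 0
        for {
            conn, err := net.DialTimeout("tcp", target, 3*time.Second)
            if err != nil {
                failures++
                if failures >= 3 { // tolerate brief network noise
                    log.Printf("%s unreachable; time to run the revert SQL", target)
                    failures = 0 // placeholder: trigger the cleanup action here
                }
            } else {
                conn.Close()
                failures = 0
            }
            time.Sleep(5 * time.Second)
        }
    }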