I have a procedure currently running on one SPID, and I have found the query is running too slowly. The proc performs updates and inserts. What will happen if I kill the session?
SQL Server will stop executing the query and roll back any open transactions. That rollback will undo any changes that haven't been fully committed. Since SQL Server adheres to the ACID principles, you shouldn't be able to leave your database in a bad state, even by killing SPIDs. That isn't to say you couldn't leave your data in a bad state, e.g. by not wrapping multiple operations in a transaction to enforce consistency upon failure.
https://msdn.microsoft.com/en-us/library/ms173730.aspx
Check the docs here. In a nutshell, whichever SPID you kill will be disconnected:
KILL can be used to terminate a normal connection, which internally terminates the transactions that are associated with the specified session ID.
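For illustration, a minimal sketch (53 is a hypothetical SPID):
KILL 53;                  -- terminates the session; any open transaction is rolled back
KILL 53 WITH STATUSONLY;  -- reports the estimated progress of that rollback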
Related
I call extensive UPDATE SQL statements and PL/SQL procedures.
What will happen to the data if my application loses its connection to the DB, the server halts, etc.?
In the case of a SQL UPDATE command, I think it will be rolled back.
For a PL/SQL procedure, I assume that code execution stops at some point; anything already committed stays applied, but the rest of the code's changes do not.
Am I right?
Yes, it should roll back to the last ROLLBACK/COMMIT call.
This became too long for a comment.
DDL statements (TRUNCATE, CREATE, DROP, ...) commit implicitly. So if you issue one inside your stored procedure calls, everything before that statement will be committed whether you want it or not. If the JDBC session is lost after the TRUNCATE, the changes made before it are still committed.
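For example, a minimal sketch of the implicit commit (the table names here are hypothetical):
INSERT INTO audit_log (msg) VALUES ('starting load');
TRUNCATE TABLE staging_data;  -- DDL: implicit COMMIT, so the INSERT above is now committed
-- if the connection is lost at this point, that INSERT stays committed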
And yes, if you are inserting large volumes without intermediate commits, things can slow down. This is typically because you are building up rollback segments. There is a sweet spot with large inserts where you insert a batch of, say, 1,000 records at a time, committing after each batch.
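As a sketch of that batching pattern in Oracle PL/SQL (source_rows and target are hypothetical names):
DECLARE
  CURSOR c IS SELECT * FROM source_rows;
  TYPE t_rows IS TABLE OF source_rows%ROWTYPE;
  l_batch t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_batch LIMIT 1000;  -- fetch a batch of 1,000 rows
    EXIT WHEN l_batch.COUNT = 0;
    FORALL i IN 1 .. l_batch.COUNT
      INSERT INTO target VALUES l_batch(i);
    COMMIT;  -- commit after each batch to keep rollback/undo usage bounded
  END LOOP;
  CLOSE c;
END;
/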
What you are describing does not seem like normal transactional activity but more like bulk loading. If you are bulk loading, then maintain state so that you can restart the load, or discard the records already loaded if you replay it. Consider shipping the data as a file and importing it (or using an external table) rather than necessarily inserting via a client connection. The APPEND hint and NOLOGGING can speed up the inserts (but note that the DB will not be in a typical 'recoverable' state afterward and should be backed up again).
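A rough sketch of that direct-path route (names are hypothetical; remember the backup caveat above):
ALTER TABLE target NOLOGGING;
INSERT /*+ APPEND */ INTO target
SELECT * FROM ext_staging;  -- e.g. an external table over the shipped file
COMMIT;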
By mistake, I performed this query in Informix using a DB-Access session:
DELETE FROM table_name;  -- without a WHERE condition
Realizing my mistake (I should have used TRUNCATE), I did something else foolish.
I killed the dbaccess session. But the table is exclusively locked and I am not able to do any action on that table.
What steps can I take to remove the lock and truncate the table?
1) Restart Informix server
2) onmode -z <sessionid> # Does not work.
I see a hell of a lot of sessions created for the DELETE query.
Is there any other easy way to fix this issue?
Assuming that you are not using Informix SE...
Is the database logged? If so, did you run the statement inside an explicit (BEGIN WORK) transaction?
Analysis
If you've got an unlogged database, then each row that the server's deleted is gone. If you stop the DELETE, it will not undo the partially complete changes. Using an unlogged database means that you do not want guaranteed statement level recovery.
If you've got a regular logged database and no explicit transaction, then the statement is probably still running after the DB-Access session is terminated. Because it is running as a singleton statement, it will complete and commit. Until it does that, if you forcibly take the server down, then fast recovery will rollback the statement (transaction). Given that I see '5 hours ago', I fear your chances of taking the server down in time now are limited.
If you've got a logged database with an explicit transaction, or a MODE ANSI database (where you're always in a transaction), then when the DELETE statement completes, the server will wait for the COMMIT, realize that the session connection is terminated, and will rollback the uncommitted work.
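To make the third case concrete, a minimal sketch (the table name is hypothetical):
BEGIN WORK;
DELETE FROM orders;  -- no WHERE clause
-- session killed here, before COMMIT WORK: the server rolls the DELETE back
COMMIT WORK;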
Recovery
If you've got an unlogged database, you can only recover to your last archive. Because it is unlogged, you can't recover it from the logical logs (but other databases in the same instance that are logged can be recovered up to the last logical log).
If you've got a logged database and you can take the server down — preferably under control, but crashing it if necessary — before the DELETE statement completes, then fast recovery will deal with the issue.
If the DELETE has completed and committed and you have good backups, you can consider a point-in-time restore of the database. That will take the database offline while you do it (but if the data from the table is all missing, your DB is not going to be functional for a while anyway).
If none of these scenarios applies, then you should contact IBM Technical Support, who may be able to perform minor (and not so minor) miracles.
But, as you may have noticed, a lot depends on the type of database (unlogged, logged, MODE ANSI) and whether there was an explicit transaction in effect when you ran the statement.
The trouble with DBMS is that they're trusting creatures. If you're authorized to do an operation, they assume that you intend to do what you say you want to do, and they go ahead and do it to the best of their ability. When you don't ask it to do what you intended to request, life gets tricky; the DBMS still trusts you and does what you actually asked it to do.
Here's the sequence of events my hypothetical program makes...
1. Open a connection to the server.
2. Run an UPDATE command.
3. Go off and do something that might take a significant amount of time.
4. Run another UPDATE that reverses the change made in step 2.
5. Close the connection.
But oh no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do, is just as the connection is opened, but before any changes have been made, tell the server that should this connection close for whatever reason, to run some SQL. That way, I can be sure that if something goes wrong, the closing update will run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
1. Take the SPID of the connection you opened and store it in some temp table, along with the text of the reversal update.
2. Use a background process (either SSIS or something else) to monitor the temp table and check that the SPID is still present as an open connection.
3. If the connection dies, the background process can execute the stored revert command (see the sketch below).
4. If the connection completes properly, the SPID can be removed from the temp table so that the background process no longer reverts it when the connection closes.
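A hypothetical sketch of the check the background process might run (dbo.PendingReverts, with spid and revert_sql columns, is an assumed table, not anything built in):
SELECT p.spid, p.revert_sql
FROM dbo.PendingReverts AS p
WHERE NOT EXISTS (SELECT 1
                  FROM sys.dm_exec_sessions AS s
                  WHERE s.session_id = p.spid);  -- connection gone: run revert_sql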
Comments or improvements welcome!
I'll expand on my comment. In general, I think you should reconsider your approach. All database access code should open a connection, execute a query, then close the connection, relying on connection pooling to mitigate the expense of opening lots of database connections.
If we are talking about a single SQL command where the rows it operates on should not change, that is a problem best handled by the transaction isolation level. For that, you might investigate the SNAPSHOT isolation level in SQL Server 2005+.
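For reference, a minimal sketch of enabling and using it (MyDb and dbo.Orders are hypothetical names):
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM dbo.Orders;  -- reads a transactionally consistent snapshot
COMMIT;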
If we are talking about a series of queries that are part of a long running transaction, that is more complicated and can be handled via storage of a transaction state which other connections read in order to determine whether they can proceed. Going down this road, you need to provide users with tools where they can cancel a long running transaction that might no longer be applicable.
Assuming it's even possible... this will only help you if the client machine explodes during the transaction. Also, there's a risk of false positives - the connection might get dropped for a few seconds due to network noise.
The approach that I'd take is to start a process on another machine that periodically pings the first one to check if it's still on-line, then takes action if it becomes unreachable.
When working with database transactions, what are the possible conditions (if any) that would cause the final COMMIT statement in a transaction to fail, presuming that all statements within the transaction already executed without issue?
For example... let's say you have some two-phase or three-phase commit protocol where you do a bunch of statements, then wait for some master process to tell you when it is ok to finally commit the transaction:
-- <initial handshaking stuff>
START TRANSACTION;
-- <Execute a bunch of SQL statements>
-- <Inform master of readiness to commit>
-- <Time passes... background transactions happening while we wait>
-- <Receive approval to commit from master (finally!)>
COMMIT;
If your code gets to that final COMMIT statement and sends it to your DBMS, can you ever get an error (uniqueness issue, database full, etc) at that statement? What errors? Why? How do they appear? Does it vary depending on what DBMS you run?
COMMIT may fail. You might have had sufficient resources to log all the changes you wished to make, but lack the resources to actually implement the changes.
And that's not considering other reasons it might fail:
The change itself might not fit the constraints of the database.
Power loss stops things from completing.
The level of requested selection concurrency might disallow an update (cursors updating a modified table, for example).
The commit might time out or be on a connection which times out due to starvation issues.
The network connection between the client and the database may be lost.
And all the other "simple" reasons that aren't on the top of my head.
It is possible for some database engines to defer UNIQUE index constraint checking until COMMIT. Obviously if the constraint does not hold true at the time of commit then it will fail.
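A sketch of that situation (PostgreSQL-flavoured syntax; Oracle supports the same DEFERRABLE clause, and the table is hypothetical):
CREATE TABLE accounts (
    account_no INTEGER,
    CONSTRAINT accounts_uq UNIQUE (account_no) DEFERRABLE INITIALLY DEFERRED
);

BEGIN;
INSERT INTO accounts VALUES (1);
INSERT INTO accounts VALUES (1);  -- no error yet, the check is deferred
COMMIT;                           -- the unique violation is raised here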
Sure.
In a multi-user environment, the COMMIT may fail because of changes by other users (e.g. your COMMIT would violate a referential constraint when applied to the now current database...).
Thomas
If you're using two-phase commit, then no. Everything that could go wrong is done in the prepare phase.
There could still be a network outage, power loss, cosmic rays, etc., during the commit, but even so, the transactions will have been written to permanent storage, and if a commit has been triggered, recovery processes should carry them through.
Hopefully.
Certainly, there could be a number of issues. The act of committing, in and of itself, must make some final, permanent entry to indicate that the transaction committed. If making that entry fails, then the transaction can't commit.
As Ignacio states, there can be deferred constraint checking (this could be any form of constraint, not just unique constraint, depending on the DBMS engine).
SQL Server Specific: flushing FILESTREAM data can be deferred until commit time. That could fail.
One very simple and often overlooked item: hardware failure. The commit can fail if the underlying server dies. This might be disk, cpu, memory, or even network related.
The transaction could fail if it never receives approval from the master (for any number of reasons).
No matter how wonderfully a system may be designed, there is going to be some possibility that a commit will get into a situation where it's impossible to know whether it succeeded or not. In some cases, it may not matter (e.g. if a hard drive holding the database turns into a pile of slag, it may be impossible to tell whether the commit succeeded or not before that occurred but it wouldn't really matter); in other cases, however, this could be a problem. Especially with distributed database systems, if a connection failure occurs at just the right time during a commit, it will be impossible for both sides to be certain of whether the other side is expecting a commit or a rollback.
With MySQL or MariaDB, when used with Galera clustering, COMMIT is when the other nodes in the cluster are checked. So, yes, important errors can be discovered by COMMIT, and you must check for them.
The in-house application framework we use at my company makes it necessary to put every SQL query into a transaction, even if I know that none of the commands will make changes to the database. At the end of the session, before closing the connection, I commit the transaction to close it properly. I wonder whether there would be any particular difference if I rolled it back instead, especially in terms of speed.
Please note that I am using Oracle, but I guess other databases have similar behaviour. Also, I can't do anything about the requirement to begin the transaction, that part of the codebase is out of my hands.
Databases often preserve either a before-image journal (what it was before the transaction) or an after-image journal (what it will be when the transaction completes.) If it keeps a before-image, that has to be restored on a rollback. If it keeps an after-image, that has to replace data in the event of a commit.
Oracle has both a journal and rollback space. The transaction journal accumulates blocks which are later written by DB writers. Since these are asynchronous, almost nothing DB-writer related has any impact on your transaction (if the queue fills up, then you might have to wait).
Even for a query-only transaction, I'd be willing to bet that there's some little bit of transactional record-keeping in Oracle's rollback areas. I suspect that a rollback requires some work on Oracle's part before it determines there's nothing to actually roll back, and I think this is synchronous with your transaction. You can't really release any locks until the rollback is completed. [Yes, I know you aren't using any in your transaction, but the locking issue is why I think a rollback has to fully complete before all the locks can be released, and only then is your rollback finished.]
On the other hand, the commit is more-or-less the expected outcome, and I suspect that discarding the rollback area might be slightly faster. You created no transaction entries, so the db writer will never even wake up to check and discover that there was nothing to do.
I also expect that while commit may be faster, the differences will be minor. So minor, that you might not be able to even measure them in a side-by-side comparison.
I agree with the previous answers that there's no difference between COMMIT and ROLLBACK in this case. There might be a negligible difference in the CPU time needed to determine that there's nothing to COMMIT versus the CPU time needed to determine that there's nothing to ROLLBACK. But if it's a negligible difference, we can safely forget about it.
However, it's worth pointing out that there's a difference between a session that does a bunch of queries in the context of a single transaction and a session that does the same queries in the context of a series of transactions.
If a client starts a transaction, performs a query, performs a COMMIT or ROLLBACK, then starts a second transaction and performs a second query, there's no guarantee that the second query will observe the same database state as the first query. Sometimes, maintaining a single consistent view of the data is of the essence. Sometimes, getting a more current view of the data is of the essence. It depends on what you are doing.
I know, I know, the OP didn't ask this question. But some readers may be asking it in the back of their minds.
In general a COMMIT is much faster than a ROLLBACK, but in the case where you have done nothing they are effectively the same.
The documentation states that:
Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back. A normal exit from most Oracle utilities and tools causes the current transaction to be committed. A normal exit from an Oracle precompiler program does not commit the transaction and relies on Oracle Database to roll back the current transaction.
http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_4010.htm#SQLRF01110
If you have to choose one or the other, then you might as well do the one that is the same as doing nothing, and just commit it.
Well, we must take into account what a SELECT returns in Oracle. There are two modes. By default, a SELECT returns data as it looked at the very moment the SELECT statement started executing (this is the default behavior in READ COMMITTED isolation mode, the default transactional mode). So if an UPDATE/INSERT was executed after the SELECT was issued, that change won't be visible in the result set.
This can be a problem if you need to compare two result sets (for example, the debit and credit sides of a general ledger app). For that we have a second mode. In that mode, a SELECT returns data as it looked at the moment the current transaction began (the default behavior in the READ ONLY and SERIALIZABLE isolation levels).
So, at least sometimes, it is necessary to execute SELECTs inside a transaction.
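A minimal sketch of that second mode (Oracle; the ledger table names are hypothetical):
SET TRANSACTION READ ONLY;
SELECT SUM(amount) FROM ledger_debits;
SELECT SUM(amount) FROM ledger_credits;  -- consistent with the first query
COMMIT;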
Since you've not done any DML, I suspect there'd be no difference between a COMMIT and ROLLBACK in Oracle. Either way there's nothing to do.
I'd think a COMMIT would be more efficient, since generally you'd expect most DB transactions to be committed, so you would think the DB optimizes for this case (as opposed to trying to be more efficient for a rollback).