How to rollback data in a SQL Server database?

I unfortunately deleted data from a database by running the following query in SQL Server:
exec usp_delete_cascade "someTable", "id='somexyz'"
Can anyone please tell me how to get back my data?
Is this possible?

There are two kinds of transactions - implicit and explicit.
An implicit transaction is used every time you execute a DML statement (in your case, a DELETE). This transaction is not handled by the user, so it is not true that your query did not run under a transaction.
An explicit transaction can be defined by the user (with BEGIN TRANSACTION). When you do not specify one, there are only implicit transactions, which are auto-committed as soon as the statement succeeds.
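For illustration, had the delete been wrapped in an explicit transaction, it could still have been undone before committing - a minimal sketch reusing the procedure call from the question:
begin transaction
exec usp_delete_cascade 'someTable', 'id=''somexyz'''
-- inspect the effect here; if the delete was a mistake:
rollback transaction
-- otherwise, make it permanent with: commit transaction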
There are a few ways to recover the data, but never with 100% success and never without work. You have to use an external tool such as SysTools SQL Recovery, ApexSQL Recover or Veeam. The level of recovery depends on your storage usage and your server configuration.
The only 100% reliable way is prevention (and backups, change tracking, etc.).
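Prevention is mostly a matter of taking regular backups so a point-in-time restore is possible later. A minimal sketch, with hypothetical database and file names:
backup database MyDb to disk = 'C:\backup\MyDb_full.bak' with init
backup log MyDb to disk = 'C:\backup\MyDb_log.trn'  -- requires the FULL recovery model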

You can try to recover with this ApexSQL tool, but you should also think about backups and measures to avoid this kind of problem in the future.
http://www.apexsql.com/sql_tools_recover.aspx
Obviously it is a third-party tool and you would have to pay to use it.

It depends on your server configuration, but by default SQL Server runs in autocommit mode, so each statement's transaction is committed as soon as it completes. So if you did not start a transaction, or you started one but already committed it, rollback is impossible.
Another way to restore the data: if your database recovery model is set to FULL and you have a full or differential backup, you're lucky. If not, the data is gone forever.
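As a sketch of what that recovery looks like (all names, paths and the STOPAT time are hypothetical, and in practice you restore the complete log backup chain):
-- 1. confirm the recovery model is FULL
select name, recovery_model_desc
from sys.databases
where name = 'MyDb'
-- 2. restore the last full backup without recovering
restore database MyDb
from disk = 'C:\backup\MyDb_full.bak'
with norecovery, replace
-- 3. roll the log forward to just before the delete
restore log MyDb
from disk = 'C:\backup\MyDb_log.trn'
with stopat = '2016-01-01 10:00:00', recovery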

Related

Stored procedure without transaction

I have a stored procedure that performs calculations and stores a large amount of data in new tables, in another database.
If anything goes wrong, I just drop the tables.
I've already set the recovery model for this database to SIMPLE, as the data is available elsewhere. Is there anything else I can do in the stored procedure to limit writing to the transaction log, or remove transactions entirely, to speed up the process?
It is impossible to completely eliminate the transaction log from the equation in SQL Server.
You may try the BULK_LOGGED recovery model in conjunction with bulk insert, but if your calculations are complex and cannot be expressed within a single SELECT statement, it could be worth trying SSIS.
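When the calculation does fit in one SELECT, a minimally logged load can be as simple as SELECT INTO under the SIMPLE or BULK_LOGGED recovery model. A sketch with hypothetical table and database names:
-- SELECT INTO is minimally logged under SIMPLE or BULK_LOGGED recovery
select col_a, col_b * 2 as calc
into OtherDb.dbo.ResultTable
from dbo.SourceTable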
I suggest using an SSIS package to move the data from one database to the other. In SSIS you can control how the data is converted and you can use bulk insert. Bulk-insert mode keeps what the database writes to the transaction log to a minimum.
I ran into similar situations even while using SSIS, where my staging database(s) kept logs more than 10 times the size of the actual data (on simple logging and using bulk insert). After lots of searching I found that it is not feasible to prevent this from happening when doing large data operations like loading a data warehouse. Instead it is easier to just clean up after you are done by shrinking the log with DBCC SHRINKFILE.
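A minimal sketch (the database name and target size are hypothetical; pass DBCC SHRINKFILE the logical log file name returned by the query):
use MyStagingDb
go
select name from sys.database_files where type_desc = 'LOG'
go
dbcc shrinkfile (MyStagingDb_log, 100)  -- target size in MB
go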

Is a commit needed on a select query in DB2?

I have a vendor reporting product executing queries to pull report data - no inserts, no updates, just reading data.
We have doubled our heap size three times and are now at 1024 4K pages. The app will run fine for a week, then we will begin to see DB2 SQL error: SQLCODE: -954, SQLSTATE: 57011, indicating the transaction log is not able to accommodate the request.
It's not the size of the reports, since they run fine after a recycle. I spoke with another DBA about this. He believes the problem is a difference between Oracle and DB2: the vendor code is crappy and is not issuing commits on the selects. This causes the references to not be cleaned up, and they slowly accumulate as garbage in the heap.
I wanted to know if this is accurate, as I thought only inserts and updates needed to have commits included. Is there any IBM documentation on this?
We are currently recycling on a weekly basis to alleviate the problem, but I would like to have a good handle on the issue before going back to the vendor asking them to alter their code.
Any transaction needs to be properly terminated - why did you think that only applies to inserts and updates? Consider transactionally running a "select a from b where c > 12" and then a "select a from b where c <= 12"; within a transaction, the DB has to guarantee that every a gets returned exactly once, from either the first or the second select, not both (assuming c is never null;-). Without transactionality, some a's might fall between the cracks or be returned twice if their corresponding c was changed by a different transaction, and that's just not ACID!-)
So when you do not need separate SELECT queries to be transactional wrt each other, tell the DB! And the way you tell it is by terminating the transaction after each select (normally COMMIT is what you use for the purpose, though I guess you could, indifferently, choose to use ROLLBACK here;-).
Per Alex's response, the first SQL activity after any CONNECT, COMMIT, or ROLLBACK initiates a transaction.
To get a handle on your resource issue (transaction logs full), you should investigate your application that issues the reports - ensure that transactions are being closed out explicitly in code. I've seen cases where application developers rely upon the Garbage Collector to clean up database objects - while those objects are waiting for cleanup, the database resources (transactions) are held open.
It's always good practice to explicitly COMMIT or ROLLBACK your transactions as soon as you are done with the data - regardless of the programming methodology you use.
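For instance, a minimal DB2 SQL sketch (the schema and table are hypothetical) - ending each read-only unit of work explicitly lets the database release its locks and log space:
SELECT REPORT_ID, TOTAL
FROM APP.REPORT_DATA
WHERE TOTAL > 100;
COMMIT;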
I get this error when committing a transaction on a SELECT query, but despite the error it does return a result set that includes the queried data.
tran.Commit();
error [hy011] [ibm] cli0126e the operation is invalid sqlstate=hy011
I changed my code to tran.Rollback(); and the error disappeared.
Can anyone explain this behavior?

Sybase ASE: "Your server command encountered a deadlock situation"

When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) randomly get this error:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and to recreate the indexes, views, stored procedures and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
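A minimal sketch of that switch (the table name is hypothetical):
-- change the lock scheme from allpages to datarows, then refresh statistics
alter table mytable lock datarows
go
update statistics mytable
go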
I have a set of long-running apps which occasionally overlap on table access, and Sybase will throw this error. If you check the Sybase server log it will give you the complete information on why it happened: the SQL that was involved and the two processes trying to get a lock. Usually one is trying to read and the other is doing something like a delete. In my case the apps are running in separate JVMs, so I can't synchronize; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan) you could try breaking the component parts of the SP down and wrapping them in separate transactions so that each unit of work is completed before the next one starts.
begin transaction
-- first unit of work: the update commits (and releases its locks)
-- before the insert begins
update mytable1
set mycolumn = "test"
where ID = 1
commit transaction
go
begin transaction
-- second unit of work runs in its own transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go

Firebird 2.0 transaction SELECT performance

In Firebird 2.0, is using an explicit transaction faster on a SELECT command than executing the command with an implicit one?
All SQL commands (SELECT, INSERT, UPDATE etc.) can be executed ONLY within some transaction. You cannot run a command without a transaction being started prior to it.
Explicit and implicit transactions are a feature of the component set you're using to access the database, not a feature of Firebird itself. As mentioned before, Firebird always does everything within a transaction. This has a couple of implications for you:
Using an "implicit" transaction can't be faster than using an "explicit" transaction, because from Firebird's point of view a transaction is a transaction, no matter who started it.
Getting the best performance sometimes requires fine control over commits. While an "implicit" transaction can't be faster than an "explicit" one, the explicit one might be faster because you can control your StartTransactions and Commits. While you usually want to do all updates to a database within one transaction (so they all succeed or fail as a set), you sometimes want to split operations into multiple groups: if you need to bulk-insert many, many records, you probably want to commit every 1000 records or so, as sketched below.
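In an isql-style script, batched commits look like this (the table is hypothetical):
INSERT INTO items (id, name) VALUES (1, 'row 1');
-- ... roughly 1000 more inserts ...
COMMIT;
INSERT INTO items (id, name) VALUES (1001, 'row 1001');
-- ... next batch ...
COMMIT;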
Firebird cannot execute SQL commands without a transaction.
PS: You get the best performance results if you commit transactions rather than rolling them back, even if you only ran SELECTs and changed nothing.
Besides what was already said, take into account that the transaction can be:
Read-Write
Read-Only
For a SELECT it would be best to use a Read-Only transaction
PS: There are other types of transactions, but these two are the important ones for this topic.
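In Firebird's isql, for example, a read-only SELECT transaction looks like this (the table is hypothetical):
SET TRANSACTION READ ONLY;
SELECT id, name FROM items;
COMMIT;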
Usually a transaction adds some overhead. However, you should be careful about whether a default transaction is started when you connect to Firebird.
In my experience, implicit transactions tend to default to auto-commit with retaining (CommitRetaining), so they should be slower. You can always change the default behaviour.
But I would recommend using explicit transactions, as commit retaining may cause you grief further down the line if it blocks too many transactions. If it does, access to Firebird can slow down dramatically as it traverses through all the held-up/blocked transactions to determine the correct value of the data.
Here are some discussions on it
http://forums.devshed.com/firebird-sql-development-61/difference-active-transaction-863103.html
http://www.slideshare.net/ibsurgeon/3-how-transactionswork

Should I commit or rollback a read transaction?

I have a read query that I execute within a transaction so that I can specify the isolation level. Once the query is complete, what should I do?
Commit the transaction
Rollback the transaction
Do nothing (which will cause the transaction to be rolled back at the end of the using block)
What are the implications of doing each?
using (IDbConnection connection = ConnectionFactory.CreateConnection())
{
    using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText = "SELECT * FROM SomeTable";
            using (IDataReader reader = command.ExecuteReader())
            {
                // Read the results
            }
        }
        // To commit, or not to commit?
    }
}
EDIT: The question is not whether a transaction should be used or whether there are other ways to set the transaction level. The question is whether it makes any difference that a transaction that does not modify anything is committed or rolled back. Is there a performance difference? Does it affect other connections? Any other differences?
You commit. Period. There's no other sensible alternative. If you started a transaction, you should close it. Committing releases any locks you may have had, and is equally sensible with ReadUncommitted or Serializable isolation levels. Relying on implicit rollback - while perhaps technically equivalent - is just poor form.
If that hasn't convinced you, just imagine the next guy who inserts an update statement in the middle of your code, and has to track down the implicit rollback that occurs and removes his data.
If you haven't changed anything, then you can use either a COMMIT or a ROLLBACK. Either one will release any read locks you have acquired and since you haven't made any other changes, they will be equivalent.
If you begin a transaction, then best practice is always to commit it. If an exception is thrown inside your using (transaction) block, the transaction will be automatically rolled back.
Consider nested transactions.
Most RDBMSes do not support nested transactions, or try to emulate them in a very limited way.
For example, in MS SQL Server, a rollback in an inner transaction (which is not a real transaction; MS SQL Server just counts transaction levels!) will roll back everything that has happened in the outermost transaction (which is the real transaction).
Some database wrappers might consider a rollback in an inner transaction as a sign that an error has occurred and roll back everything in the outermost transaction, regardless of whether the outermost transaction is committed or rolled back.
So a COMMIT is the safe way when you cannot rule out that your component is used by some other software module.
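A small T-SQL illustration of why the inner rollback is dangerous:
begin transaction      -- @@TRANCOUNT is now 1
begin transaction      -- @@TRANCOUNT is now 2, but no new real transaction starts
rollback transaction   -- rolls back EVERYTHING; @@TRANCOUNT drops to 0
-- a commit at the "outer" level would now fail: no transaction is open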
Please note that this is a general answer to the question. The code example cleverly works around the issue with an outer transaction by opening a new database connection.
Regarding performance: depending on the isolation level, SELECTs may require a varying degree of locks and temporary data (snapshots). This is cleaned up when the transaction is closed; it does not matter whether this is done via COMMIT or ROLLBACK. There might be an insignificant difference in CPU time spent - a COMMIT is probably faster to parse than a ROLLBACK (two characters less) - and other minor differences. Obviously, this is only true for read-only operations!
Totally not asked for, but: another programmer who later reads the code might assume that a ROLLBACK implies an error condition.
IMHO it can make sense to wrap read-only queries in transactions, as (especially in Java) you can mark the transaction as "read-only", which the JDBC driver can use to optimize the query (though it does not have to, so nobody will prevent you from issuing an INSERT anyway). E.g. the Oracle driver will completely avoid table locks on queries in a transaction marked read-only, which gains a lot of performance on heavily read-driven applications.
ROLLBACK is mostly used in case of an error or exceptional circumstances, and COMMIT in the case of successful completion.
We should close transactions with COMMIT (for success) and ROLLBACK (for failure), even in the case of read-only transactions where it doesn't seem to matter. In fact it does matter, for consistency and future-proofing.
A read-only transaction can logically "fail" in many ways, for example:
a query does not return exactly one row as expected
a stored procedure raises an exception
data fetched is found to be inconsistent
user aborts the transaction because it's taking too long
deadlock or timeout
If COMMIT and ROLLBACK are used properly for a read-only transaction, it will continue to work as expected if DB write code is added at some point, e.g. for caching, auditing or statistics.
Implicit ROLLBACK should only be used for "fatal error" situations, when the application crashes or exits with an unrecoverable error, network failure, power failure, etc.
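A minimal T-SQL sketch of that commit-on-success / rollback-on-failure pattern, reusing SomeTable from the question (THROW needs SQL Server 2012 or later):
begin try
    begin transaction;
    select * from SomeTable;                  -- the read-only work
    commit transaction;                       -- success path
end try
begin catch
    if @@trancount > 0 rollback transaction;  -- failure path
    throw;
end catch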
Just a side note, but you can also write that code like this:
using (IDbConnection connection = ConnectionFactory.CreateConnection())
using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
using (IDbCommand command = connection.CreateCommand())
{
    command.Transaction = transaction;
    command.CommandText = "SELECT * FROM SomeTable";
    using (IDataReader reader = command.ExecuteReader())
    {
        // Do something useful
    }
    // To commit, or not to commit?
}
And if you re-structure things just a little bit you might be able to move the using block for the IDataReader up to the top as well.
If you put the SQL into a stored procedure and add this above the query:
set transaction isolation level read uncommitted
then you don't have to jump through any hoops in the C# code. Setting the transaction isolation level in a stored procedure does not cause the setting to apply to all future uses of that connection (which is something you have to worry about with other settings since the connections are pooled). At the end of the stored procedure it just goes back to whatever the connection was initialized with.
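A sketch of such a procedure (the procedure name is hypothetical; SomeTable is from the question):
create procedure dbo.GetSomeTable
as
begin
    set transaction isolation level read uncommitted;
    select * from SomeTable;
end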
Given that a READ does not change state, I would do nothing. Performing a commit will do nothing except waste a cycle sending the request to the database. You haven't performed an operation that has changed state. Likewise for the rollback.
You should however, be sure to clean up your objects and close your connections to the database. Not closing your connections can lead to issues if this code gets called repeatedly.
If you set AutoCommit to false, then YES.
In an experiment with JDBC (PostgreSQL driver), I found that if a SELECT query breaks (because of a timeout), then you cannot initiate a new SELECT query unless you roll back.
Do you need to block others from reading the same data? Why use a transaction?
@Joel - My question would be better phrased as "Why use a transaction on a read query?"
@Stefan - If you are going to use ad hoc SQL and not a stored proc, then just add WITH (NOLOCK) after the tables in the query. This way you don't incur the overhead (albeit minimal) in the application and the database for a transaction.
SELECT * FROM SomeTable WITH (NOLOCK)
EDIT @ Comment 3: Since you had "sqlserver" in the question tags, I had assumed MS SQL Server was the target product. Now that that point has been clarified, I have edited the tags to remove the specific product reference.
I am still not sure why you would want to use a transaction on a read operation in the first place.
In your code sample, where you have
// Do something useful
Are you executing a SQL statement that changes data?
If not, there's no such thing as a "read" transaction... Only changes from Insert, Update and Delete statements (statements that can change data) are in a transaction... What you are talking about are the locks that SQL Server puts on the data you are reading, because of OTHER transactions that affect that data. The level of these locks is dependent on the SQL Server isolation level.
But you cannot commit or roll back anything if your SQL statement has not changed anything.
If you are changing data, then you can change the isolation level without explicitly starting a transaction... Every individual SQL statement is implicitly in a transaction. Explicitly starting a transaction is only necessary to ensure that two or more statements are within the same transaction.
If all you want to do is set the transaction isolation level, then just set a command's CommandText to "set transaction isolation level repeatable read" (or whatever level you want), set the CommandType to CommandType.Text, and execute the command. (You can use Command.ExecuteNonQuery().)
NOTE: If you are doing MULTIPLE read statements and want them all to "see" the same state of the database as the first one, then you need to set the isolation level to Repeatable Read or Serializable...