I've got a simple bit of code that uses BeginTransaction(). The resulting transaction is assigned to the connection that I'm using for some SQL commands.
When I profile the resulting SQL, I don't see a BEGIN TRANSACTION at any point. What might be happening that would prevent the transaction from being used?
Transactions are handled at a lower level when using ADO.NET. There are no "BEGIN TRANSACTION" statements sent to the server.
You need to ensure that you not only start the transaction on the connection object, but also assign that transaction to the SqlCommand's Transaction property.
See this CodeProject article for an example.
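In code, the working pattern looks something like this (a minimal sketch; the connection string, table, and parameter names are placeholders):

using System.Data.SqlClient;

const string connectionString = "...";  // your connection string here

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        using (SqlCommand command = connection.CreateCommand())
        {
            // Without this assignment, executing the command while a
            // transaction is pending on the connection throws an
            // InvalidOperationException.
            command.Transaction = transaction;
            command.CommandText = "INSERT INTO SomeTable (Name) VALUES (@Name)";
            command.Parameters.AddWithValue("@Name", "example");
            command.ExecuteNonQuery();
        }

        transaction.Commit();
    }
}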
To reiterate Philippe's statement:
Transactions are handled at a lower level when using ADO.NET. There are no "BEGIN TRANSACTION" statements sent to the server.
At some point the SQL has to be converted into actual calls. Most ADO.NET providers (all that I've worked with) send a database-specific command to BEGIN, COMMIT, and ROLLBACK transactions, since sending the statement as text (ASCII or whatever else) would be less efficient than a command the server doesn't have to parse.
This is why parameterised queries are often faster than plain SQL ones: the library can send specific commands, which results in less parsing and probably less data validation (?).
HTH!
Related
I have unfortunately deleted data from a database by running the following query in SQL Server:
exec usp_delete_cascade "someTable", "id='somexyz'"
Can anyone please tell me how to get back my data?
Is this possible?
There are two kinds of transactions - implicit and explicit.
An implicit transaction is used every time you execute a DML statement (in your case, a DELETE). This transaction is not handled by the user, so it is not true that your query did not run under a transaction.
An explicit transaction can be defined by the user (with BEGIN TRANSACTION). When you do not specify a transaction, there are only implicit transactions, which are auto-committed when the statement succeeds.
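For illustration, with an explicit transaction you would have had the chance to check the result before making it permanent (a sketch using your own call):

BEGIN TRANSACTION;

EXEC usp_delete_cascade 'someTable', 'id=''somexyz''';

-- Inspect the result (row counts, a quick SELECT, etc.), then either:
COMMIT TRANSACTION;       -- keep the change
-- or, had it been wrong:
-- ROLLBACK TRANSACTION;  -- undo it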
There are a few ways to recover the data, but none with 100% success and none without work. You would have to use an external program such as SysTools SQL Recovery, ApexSQL Recover, or Veeam. The level of recovery depends on your storage usage and your server configuration.
The only 100% reliable approach is prevention (and backups, change tracking, etc.).
You can try to recover with this ApexSQL tool, but you should also think about backups and measures to avoid this kind of problem in the future.
http://www.apexsql.com/sql_tools_recover.aspx
Obviously it is a third-party tool and you would have to pay to use it.
It depends on your server configuration, but by default SQL Server does not keep a transaction open when executing a query; each statement is auto-committed. So if you did not start a transaction, or you started one but already committed it, a rollback is impossible.
There is another way to restore the data: if your database recovery model is set to FULL and you have a differential or full backup, you are lucky. If not, the data is gone forever.
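If you do have a full backup plus log backups under the FULL recovery model, a point-in-time restore to a copy looks roughly like this (all names, paths, and the STOPAT time are hypothetical):

-- Restore the last full backup, leaving the copy able to accept log restores
RESTORE DATABASE MyDb_Copy
FROM DISK = 'C:\Backups\MyDb_Full.bak'
WITH MOVE 'MyDb' TO 'C:\Data\MyDb_Copy.mdf',
     MOVE 'MyDb_log' TO 'C:\Data\MyDb_Copy.ldf',
     NORECOVERY;

-- Roll the log forward to just before the accidental delete
RESTORE LOG MyDb_Copy
FROM DISK = 'C:\Backups\MyDb_Log.trn'
WITH STOPAT = '2013-05-01T10:29:00', RECOVERY;

You can then copy the deleted rows from the restored copy back into the original database.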
I am working with MS SQL and the Struts framework.
When calling a procedure, I set autocommit to false in my program.
While the procedure runs, I need to commit one separate transaction, and it must affect the table externally.
But nothing is saved until the conn.commit() statement executes in my program.
Is there any other way to commit the transaction within the procedure itself, so that the table is affected at the end of that single transaction in the procedure?
Please tell me if you know.
T.Saravanan
You should start and commit/rollback a transaction at the same level, otherwise you are introducing a lot of unpredictable paths - and frankly some bad design. So: if you need to commit at the server, use BEGIN TRAN / COMMIT TRAN in the TSQL to handle the transaction locally.
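In its simplest form that looks like this (a sketch; the procedure name and body are hypothetical):

CREATE PROCEDURE usp_CommitLocally
AS
BEGIN
    BEGIN TRAN;

    -- the separate piece of work that must be persisted here and now
    UPDATE SomeTable SET Flag = 1 WHERE Id = 42;

    COMMIT TRAN;
END
-- Note: if the caller already has a transaction open on this connection,
-- this BEGIN/COMMIT only nests inside it and nothing is truly committed
-- until the outer transaction commits - hence the separate-connection
-- advice below.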
Note, though, that TSQL exception/error handling is not as rich as handling errors at a caller such as java/C#. If the problem is that you want to disassociate this work from another unrelated transaction, then it depends on how your calling code works:
if it is using connection-level transactions, then you will need to use a separate connection; just run the transaction on a different connection using the java/C#/whatever transaction API (i.e. the same as your existing code, by the sound of it, but on a different connection)
if it is using things like scope-based transactions (TransactionScope in C#; not sure about java etc - but this is an LTM or DTC transaction) then you can explicitly create a new scope that is bound to either a new (isolated) transaction, or the nil-transaction (i.e. the inner scope is not enlisted)
As for affecting the tables... SQL Server generally does optimistic changes, i.e. yes the changes are applied immediately (so that commit is cheap, and rollback is more expensive) - however, the isolation level will generally prevent other SPIDs from seeing the data. A competing SPID with a low isolation level (or using the NOLOCK hint) will see the uncommitted data, but this may be a phantom/non-repeatable read if the data eventually gets rolled back.
In Firebird 2.0, is using an explicit transaction faster on a SELECT command than executing the command with an implicit one?
All SQL commands (SELECT, INSERT, UPDATE, etc.) can be executed ONLY within some transaction. You cannot run a command without a transaction being started prior to it.
Explicit and Implicit transaction are a feature of the component set you're using to access the database, not a feature of Firebird itself. As mentioned before, Firebird always does everything within a transaction. This has a couple of implications for you:
Using a "Implicit" transaction can't be faster then using a "Explicit" transaction because from Firebird's point of view, a transaction is a transaction, doesn't matter who started it.
Getting the best performance sometimes requires fine control over "Commits". While the "Implicit" transaction can't be faster then the "Explicit" transaction, the Explicit might be faster because you can control your StartTransactions and Commits. While you usually want to do all updates to a database within one transaction (so they all succeed or fail as a set) you sometimes want to split operations into multiple groups: If you need to bulk-insert many-many records, you probably want to Commit one every 1000 records or so.
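A sketch of that batching idea, using the generic ADO.NET interfaces as one possible component set (the connection is assumed to be open already; the table, column, and method names are hypothetical, and error handling is omitted for brevity):

using System.Collections.Generic;
using System.Data;

static void BulkInsert(IDbConnection connection, IList<string> records)
{
    IDbTransaction transaction = connection.BeginTransaction();

    for (int i = 0; i < records.Count; i++)
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText = "INSERT INTO BulkTarget (Value) VALUES (@Value)";

            IDbDataParameter parameter = command.CreateParameter();
            parameter.ParameterName = "@Value";
            parameter.Value = records[i];
            command.Parameters.Add(parameter);

            command.ExecuteNonQuery();
        }

        if ((i + 1) % 1000 == 0)
        {
            transaction.Commit();                        // close out this batch...
            transaction = connection.BeginTransaction(); // ...and start the next one
        }
    }

    transaction.Commit(); // commit the final (possibly partial) batch
}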
Firebird cannot execute SQL commands without a transaction.
PS: You get the best performance if you commit transactions rather than rolling them back, even if you only ran SELECTs and changed nothing.
Besides what was already said, take into account that the transaction can be:
Read-Write
Read-Only
For a SELECT it would be best to use a Read-Only transaction.
PS: There are other types of transactions, but these two are the important ones for this topic.
Usually a transaction adds some overhead. However, you should check whether some default transaction is already started when you connect to Firebird.
In my experience the implicit transactions tend to default to auto-commit with Commit Retaining, so they should be slower. You can always change the default behaviour.
But I would recommend using explicit transactions, as Commit Retaining may cause you grief further down the line if it blocks too many transactions. If it does, access to Firebird can slow down dramatically as it traverses all the held-up/blocked transactions to determine the correct value of the data.
Here are some discussions on it
http://forums.devshed.com/firebird-sql-development-61/difference-active-transaction-863103.html
http://www.slideshare.net/ibsurgeon/3-how-transactionswork
I'm trying to make a long stored procedure a little more manageable. Is it wrong to have stored procedures that call other stored procedures? For example, I want a proc that inserts data into a table and, depending on the type, inserts additional information into the table for that type. Something like:
BEGIN TRANSACTION

INSERT INTO dbo.ITSUsage (
    Customer_ID,
    [Type],
    Source
) VALUES (
    @Customer_ID,
    @Type,
    @Source
)

SET @ID = SCOPE_IDENTITY()

IF @Type = 1
BEGIN
    EXEC usp_Type1_INS @ID, @UsageInfo
END

IF @Type = 2
BEGIN
    EXEC usp_Type2_INS @ID, @UsageInfo
END

IF (@@ERROR <> 0)
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
Or is this something I should be handling in my application?
We call procs from other procs all the time. It's hard/impossible to segment a database-intensive (or database-only) application otherwise.
Calling a procedure from inside another procedure is perfectly acceptable.
However, in Transact-SQL relying on @@ERROR is prone to failure. Case in point: your code. It will fail to detect an insert failure, as well as any error produced inside the called procedures. This is because @@ERROR is reset with each statement executed and only retains the result of the very last statement. I have a blog entry that shows a correct template of error handling in Transact-SQL and transaction nesting. Also, Erland Sommarskog has an article that has for a long time been the reference read on error handling in Transact-SQL.
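For SQL Server 2005 and later, the general shape is TRY/CATCH rather than @@ERROR checks. A sketch applied to the code from the question (not the exact template from either article):

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.ITSUsage (Customer_ID, [Type], Source)
    VALUES (@Customer_ID, @Type, @Source);

    SET @ID = SCOPE_IDENTITY();

    IF @Type = 1
        EXEC usp_Type1_INS @ID, @UsageInfo;
    ELSE IF @Type = 2
        EXEC usp_Type2_INS @ID, @UsageInfo;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back if any transaction is still open (XACT_STATE() <> 0)
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;

    -- Re-raise so the caller sees the failure
    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR (@msg, 16, 1);
END CATCH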
No, it is perfectly acceptable.
Definitely, no.
I've seen ginormous stored procedures doing 20 different things that would have really benefited from being refactored into smaller, single-purpose ones.
As long as it is within the same DB schema it is perfectly acceptable, in my opinion. It is reuse, which is always preferable to duplication. It's like calling methods within some application layer.
Not at all. I would even say it's recommended, for the same reasons that you create methods in your code.
One stored procedure calling another stored procedure is fine. Just note that there is a limit on how deeply the calls can be nested (32 levels in SQL Server).
In SQL Server the current nesting level is returned by the @@NESTLEVEL function.
Please check the Stored Procedure Nesting section here http://msdn.microsoft.com/en-us/library/aa258259(SQL.80).aspx
cheers
No. It promotes reuse and allows for functionality to be componentized.
As others have pointed out, this is perfectly acceptable and necessary to avoid duplicating functionality.
However, in Transact-SQL watch out for transactions in nested stored procedure calls: you need to check @@TRANCOUNT before issuing ROLLBACK TRANSACTION, because it rolls back all nested transactions. Check this article for an in-depth explanation; the basic shape is sketched below.
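That check, combined with a savepoint so an inner procedure does not destroy its caller's transaction, looks something like this (a sketch; the savepoint name is arbitrary):

-- Inside the called (inner) procedure
DECLARE @startedTran bit;
SET @startedTran = 0;

IF @@TRANCOUNT = 0
BEGIN
    BEGIN TRANSACTION;
    SET @startedTran = 1;
END
ELSE
    SAVE TRANSACTION InnerProc;  -- mark a savepoint inside the caller's transaction

BEGIN TRY
    -- ... the work ...

    IF @startedTran = 1
        COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @startedTran = 1
        ROLLBACK TRANSACTION;            -- we own the transaction: roll it all back
    ELSE IF XACT_STATE() = 1
        ROLLBACK TRANSACTION InnerProc;  -- undo only our work; the caller's transaction survives
    -- if XACT_STATE() = -1 the whole transaction is doomed and only a full
    -- rollback is possible; re-raise the error here either way
END CATCH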
Yes, it is bad. While SQL Server does support and allow one stored procedure to call another, I would generally try to avoid this design if possible. My reason?
single responsibility principle
In our IT area we use stored procedures to consolidate common code for both stored procedures and triggers (where applicable). It's also virtually mandatory for avoiding SQL source duplication.
The general answer to this question is, of course, no - it's a normal and even preferred way of coding SQL stored procedures.
But it could be that in your specific case it is not such a good idea.
If you maintain a set of stored procedures that support the data access tier (DAO) of your application (Java, .NET, etc.), then keeping the database tier (let's call the stored procedures that) streamlined and relatively thin benefits your overall design. An extensive graph of stored procedure calls may thus indeed be bad for maintaining and supporting the overall data access logic in such an application.
I would lean toward a more uniform distribution of logic between the DAO and database tiers, so that stored procedure code fits inside a single functional call.
Adding to the correct comments of other posters, there is nothing wrong in principle, but you need to watch out for the execution time in case the procedure is being called by, for instance, an external application that conforms to a specific timeout.
A typical example is calling the stored procedure from a web application: when the default timeout kicks in because your chain of executions takes too long, you get a failure in the web application even though the stored procedure commits correctly.
The same happens if you call it from an external service.
This can lead to inconsistent behaviour in your application, triggering error-management routines in external services, etc.
In situations like this, what I do is break the chain of calls, redirecting the long-running child calls to different processes using Service Broker.
I have a read query that I execute within a transaction so that I can specify the isolation level. Once the query is complete, what should I do?
Commit the transaction
Rollback the transaction
Do nothing (which will cause the transaction to be rolled back at the end of the using block)
What are the implications of doing each?
using (IDbConnection connection = ConnectionFactory.CreateConnection())
{
    using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText = "SELECT * FROM SomeTable";

            using (IDataReader reader = command.ExecuteReader())
            {
                // Read the results
            }
        }

        // To commit, or not to commit?
    }
}
EDIT: The question is not if a transaction should be used or if there are other ways to set the transaction level. The question is if it makes any difference that a transaction that does not modify anything is committed or rolled back. Is there a performance difference? Does it affect other connections? Any other differences?
You commit. Period. There's no other sensible alternative. If you started a transaction, you should close it. Committing releases any locks you may have had, and is equally sensible with ReadUncommitted or Serializable isolation levels. Relying on implicit rollback - while perhaps technically equivalent - is just poor form.
If that hasn't convinced you, just imagine the next guy who inserts an update statement in the middle of your code, and has to track down the implicit rollback that occurs and removes his data.
If you haven't changed anything, then you can use either a COMMIT or a ROLLBACK. Either one will release any read locks you have acquired and since you haven't made any other changes, they will be equivalent.
If you begin a transaction, then best practice is always to commit it. If an exception is thrown inside your using(transaction) block, the transaction will be automatically rolled back.
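In code that simply means Commit() is the last statement of the block (reusing the connection from the question's code):

using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    // ... execute the read ...

    transaction.Commit();  // reached only if no exception was thrown
}                          // on an exception, Dispose() rolls the transaction back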
Consider nested transactions.
Most RDBMSes do not support nested transactions, or only try to emulate them in a very limited way.
For example, in MS SQL Server, a rollback in an inner transaction (which is not a real transaction; MS SQL Server just counts transaction levels!) rolls back everything that has happened in the outermost transaction (which is the real transaction).
Some database wrappers might treat a rollback in an inner transaction as a sign that an error has occurred and roll back everything in the outermost transaction, regardless of whether the outermost transaction committed or rolled back.
So a COMMIT is the safe way when you cannot rule out that your component is used by some other software module.
Please note that this is a general answer to the question. The code example cleverly works around the issue with an outer transaction by opening a new database connection.
Regarding performance: depending on the isolation level, SELECTs may require a varying degree of locks and temporary data (snapshots). This is cleaned up when the transaction is closed; it does not matter whether that happens via COMMIT or ROLLBACK. There might be an insignificant difference in CPU time spent - a COMMIT is probably faster to parse than a ROLLBACK (two characters less) - and other minor differences. Obviously, this is only true for read-only operations!
Totally not asked for: another programmer who might get to read the code might assume that a ROLLBACK implies an error condition.
IMHO it can make sense to wrap read-only queries in transactions because (especially in Java) you can mark the transaction as "read-only", which the JDBC driver can use to optimize the query (but does not have to, so nothing prevents you from issuing an INSERT anyway). E.g. the Oracle driver will completely avoid table locks on queries in a transaction marked read-only, which gains a lot of performance in heavily read-driven applications.
ROLLBACK is mostly used in case of an error or exceptional circumstances, and COMMIT in the case of successful completion.
We should close transactions with COMMIT (for success) and ROLLBACK (for failure), even in the case of read-only transactions where it doesn't seem to matter. In fact it does matter, for consistency and future-proofing.
A read-only transaction can logically "fail" in many ways, for example:
a query does not return exactly one row as expected
a stored procedure raises an exception
data fetched is found to be inconsistent
user aborts the transaction because it's taking too long
deadlock or timeout
If COMMIT and ROLLBACK are used properly for a read-only transaction, it will continue to work as expected if DB write code is added at some point, e.g. for caching, auditing or statistics.
Implicit ROLLBACK should only be used for "fatal error" situations, when the application crashes or exits with an unrecoverable error, network failure, power failure, etc.
Just a side note, but you can also write that code like this:
using (IDbConnection connection = ConnectionFactory.CreateConnection())
using (IDbTransaction transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
using (IDbCommand command = connection.CreateCommand())
{
    command.Transaction = transaction;
    command.CommandText = "SELECT * FROM SomeTable";

    using (IDataReader reader = command.ExecuteReader())
    {
        // Do something useful
    }

    // To commit, or not to commit?
}
And if you re-structure things just a little bit you might be able to move the using block for the IDataReader up to the top as well.
If you put the SQL into a stored procedure and add this above the query:
set transaction isolation level read uncommitted
then you don't have to jump through any hoops in the C# code. Setting the transaction isolation level in a stored procedure does not cause the setting to apply to all future uses of that connection (which is something you have to worry about with other settings since the connections are pooled). At the end of the stored procedure it just goes back to whatever the connection was initialized with.
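So the procedure would look something like this (the procedure name is hypothetical; the query is from the question):

CREATE PROCEDURE usp_GetSomeTable
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT * FROM SomeTable;
END
-- When the procedure returns, the session reverts to the isolation level
-- that was in effect before the call, so pooled connections are unaffected.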
Given that a READ does not change state, I would do nothing. Performing a commit will do nothing, except waste a cycle to send the request to the database. You haven't performed an operation that has changed state. Likewise for the rollback.
You should however, be sure to clean up your objects and close your connections to the database. Not closing your connections can lead to issues if this code gets called repeatedly.
If you set AutoCommit to false, then YES.
In an experiment with JDBC (PostgreSQL driver), I found that if a SELECT query breaks (because of a timeout), then you cannot initiate a new SELECT query unless you roll back.
Do you need to block others from reading the same data? Why use a transaction?
@Joel - My question would be better phrased as "Why use a transaction on a read query?"
@Stefan - If you are going to use ad hoc SQL and not a stored proc, then just add WITH (NOLOCK) after the tables in the query. This way you don't incur the overhead (albeit minimal) in the application and the database for a transaction.
SELECT * FROM SomeTable WITH (NOLOCK)
EDIT @ Comment 3: Since you had "sqlserver" in the question tags, I had assumed MSSQLServer was the target product. Now that that point has been clarified, I have edited the tags to remove the specific product reference.
I am still not sure of why you want to make a transaction on a read op in the first place.
In your code sample, where you have
// Do something useful
Are you executing a SQL statement that changes data?
If not, there's no such thing as a "read" transaction. Only changes from INSERT, UPDATE, and DELETE statements (statements that can change data) are in a transaction. What you are talking about is the locks that SQL Server puts on the data you are reading, because of OTHER transactions that affect that data. The level of these locks is dependent on the SQL Server isolation level.
But you cannot commit or roll back anything if your SQL statement has not changed anything.
If you are changing data, then you can change the isolation level without explicitly starting a transaction... Every individual SQL statement is implicitly in a transaction. Explicitly starting a transaction is only necessary to ensure that two or more statements are within the same transaction.
If all you want to do is set the transaction isolation level, then just set a command's CommandText to "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ" (or whatever level you want), set the CommandType to CommandType.Text, and execute the command (you can use Command.ExecuteNonQuery()).
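For example (a sketch reusing the connection from the question's code):

using (IDbCommand levelCommand = connection.CreateCommand())
{
    levelCommand.CommandType = CommandType.Text;
    levelCommand.CommandText = "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ";
    levelCommand.ExecuteNonQuery();  // applies to this session from now on
}

Note that, unlike the stored procedure approach above, this setting stays with the connection, which matters when connections are pooled.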
NOTE: If you are doing MULTIPLE read statements and want them all to "see" the same state of the database as the first one, then you need to set the isolation level to Repeatable Read or Serializable...