This is my first post on here.
I'm writing a program in MVC3 that has a WCF service acting as the Data Access Layer. In my DAL, I have to do some sort of 'batch' inserts and updates, particularly with orders. For example, one order has several items and could have several payment methods, so when I insert a new order I'll need to insert all items related to that order, and so on.
Therefore, what I'm looking for is the best and most feasible way to run several stored procedures, e.g. one which will insert the order, another which will insert its items, etc.
The tables Order and Item are linked together with a third table called Order_Items, which has (fk) order_id, (fk) item_id, qty, and price.
I know I can run multiple commands by changing the command text and executing non-queries within a transaction, but I would like to run stored procedures instead of hardcoding text commands. Alternatively, I could run the procedures by making the command text something like
cmd.CommandText = "exec sp_insert_order @order_number, @order_date ...";
cmd.ExecuteNonQuery();
and then loop through the items, say:
foreach (string s in insert_items)
{
    cmd.CommandText = s;
    cmd.ExecuteNonQuery();
}
All of this within a transaction, followed by a commit. But I don't feel this is such a clean way of doing things, so can someone please share their opinion?
If you're using stored procedures, you should change the way you call them - I would recommend using this approach:
// define your stored procedure name, and the type
cmd.CommandText = "dbo.sp_insert_order";
cmd.CommandType = CommandType.StoredProcedure;

// define and fill your parameters
cmd.Parameters.Add("@order_number", SqlDbType.Int).Value = order_nr;
cmd.Parameters.Add("@order_date", SqlDbType.DateTime).Value = ......;

cmd.ExecuteNonQuery();
Basically, you'd have to do this for each stored procedure you want to call, and you could wrap all of those in a single transaction without any problems:
using (SqlConnection connection = new SqlConnection("your-connection-string-here"))
{
    connection.Open();

    SqlTransaction transaction = connection.BeginTransaction();

    try
    {
        // call all stored procedures here - remember to assign the
        // transaction to each SqlCommand!
        ....
        transaction.Commit();
    }
    catch (Exception exc)
    {
        transaction.Rollback();
    }
}
You can also use TransactionScope in your methods to enclose all of the work in a single transaction, whether the commands call stored procedures or text.
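As a rough sketch (reusing the sp_insert_order procedure and the order_nr parameter from above; TransactionScope lives in the System.Transactions assembly, so you need a reference to it - this is just one possible shape, not a drop-in implementation):
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

using (var scope = new TransactionScope())
using (var connection = new SqlConnection("your-connection-string-here"))
{
    // connections opened inside the scope enlist in the ambient transaction
    connection.Open();

    using (var cmd = new SqlCommand("dbo.sp_insert_order", connection))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@order_number", SqlDbType.Int).Value = order_nr;
        cmd.ExecuteNonQuery();
    }

    // ... call the item-insert procedure here for each order item ...

    // if Complete() is never reached, everything above is rolled back
    scope.Complete();
}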
You may also be interested in the Transaction Propagation functionality built into WCF. It can be configured in such a way that each web service call to WCF automatically creates, and commits or rolls back, a transaction for you, basically wrapping the entire service method call in a transaction.
There is a good MSDN writeup on it here.
It is a bit of an advanced topic and may be overkill for what you need, but something to keep in mind.
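In rough outline it looks something like the sketch below (the contract and names here are made up for illustration, and the binding must also have transaction flow enabled, e.g. transactionFlow="true" on wsHttpBinding):
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // allow the caller's transaction to flow into this operation
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void InsertOrder(int orderNumber);
}

public class OrderService : IOrderService
{
    // the whole method runs inside a transaction: it commits when the method
    // completes successfully and rolls back if an exception escapes
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void InsertOrder(int orderNumber)
    {
        // call the stored procedures here; they all enlist in the same transaction
    }
}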
Firstly, I apologise if this is a really stupid question.
I have a question about dealing correctly with SQL statements within Yii. I'll give a small code example.
public function actionCreate($id) {
    $cmd = Yii::app()->db->createCommand();
    $cmd->insert('table_1', array(
        'user_id' => (int) $id,
    ), 'id=:id', array(':id' => $id));
}
What's the correct way to confirm this query worked? Is it try/catch blocks?
The reason I ask is that it could fail if it's passed a bad parameter, but on a couple of tables I have DB constraints that could also result in a failure, so I wanted to make sure I handle everything properly rather than blanket-handling it all.
From the official documentation:
Executing SQL Statements
Once a database connection is established, SQL statements can be executed using CDbCommand. One creates a CDbCommand instance by calling CDbConnection::createCommand() with the specified SQL statement:
$connection=Yii::app()->db; // assuming you have configured a "db" connection
// If not, you may explicitly create a connection:
// $connection=new CDbConnection($dsn,$username,$password);
$command=$connection->createCommand($sql);
// if needed, the SQL statement may be updated as follows:
// $command->text=$newSQL;
A SQL statement is executed via CDbCommand in one of the following two ways:
And here it is:
execute(): performs a non-query SQL statement, such as INSERT, UPDATE
and DELETE. If successful, it returns the number of rows that are
affected by the execution.
By the way, insert() is a low-level method that's used internally by Active Record (AR). Why don't you simply use AR instead?
With Yii's Gii you automatically get a model for table_1, and you can find, insert, update, and delete through that model. Example:
$model = new Table1ModelName;
$model->user_id= $id;
$model->name= $user_name;
...
$model->save();
There are still many more features and interesting things you may want to read up on:
Yii Working Active Record
I have the Dapper ORM in my project and I have to save a lot of data (1,200,000 rows) to the database, but inserting inside a transaction with Dapper is very slow and I want it to be fast. NHibernate (with a stateless session) is slow as well.
I know Dapper is fast, because fetching 700,000 rows takes 33 seconds with NHibernate and only 9 seconds with Dapper.
How can I solve this problem?
My code is:
IDbTransaction trans = connection.BeginTransaction();
connection.Execute(@"
    insert DailyResult(Id, PersonId, DateTaradod, DailyTaradods)
    values(@Id, @PersonId, @DateTaradod, @DailyTaradods)", entity, trans);
trans.Commit();
There is no mechanism to make inserting 1200000 rows in a transaction instant, via any regular ADO.NET API. That simply isn't what the intent of that API is.
For what you want, it sounds like you should be using SqlBulkCopy. This supports transactions, and you can use FastMember to help here; for example:
IEnumerable<YourEntity> source = ...
using (var bcp = new SqlBulkCopy(
    connection, SqlBulkCopyOptions.UseInternalTransaction))
using (var reader = ObjectReader.Create(source,
    "Id", "PersonId", "DateTaradod", "DailyTaradods"))
{
    bcp.DestinationTableName = "DailyResult";
    bcp.WriteToServer(reader);
}
It also supports external transactions, but if you are going to "create tran, push, commit tran" you might as well use the internal transaction.
If you don't want to use SqlBulkCopy, you can also look at table-valued-parameter approaches, but SqlBulkCopy would be my recommended API when dealing with this volume.
Note: if the table has more columns than Id, PersonId, DateTaradod and DailyTaradods, you can specify explicit bcp.ColumnMappings to tweak how the insert behaves.
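For example, the mappings might look something like the sketch below (reusing the table and column names from above, and the plain SqlBulkCopy(connection) constructor for brevity; any extra destination columns are hypothetical):
using (var bcp = new SqlBulkCopy(connection))
using (var reader = ObjectReader.Create(source,
    "Id", "PersonId", "DateTaradod", "DailyTaradods"))
{
    bcp.DestinationTableName = "DailyResult";

    // map each source field to its destination column explicitly;
    // destination columns you leave unmapped are simply not written to
    bcp.ColumnMappings.Add("Id", "Id");
    bcp.ColumnMappings.Add("PersonId", "PersonId");
    bcp.ColumnMappings.Add("DateTaradod", "DateTaradod");
    bcp.ColumnMappings.Add("DailyTaradods", "DailyTaradods");

    bcp.WriteToServer(reader);
}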
I have an issue I'm running into with EF and SQL. I have a crazy stored proc that doesn't translate well to C# code (EF/LINQ). Basically, what I do is call the stored proc with a SqlConnection and SqlCommand (System.Data.SqlClient) [see below], then pull the data from the table using EF. This happens over and over until a main table is depleted. (I have a main table with several hundred thousand records; the stored proc pulls a small portion of that and puts it in a table to be processed. Once processed, those records are removed from the main table, and it starts all over again until the main table has been completely processed.)
The issue is that the table never gets updated in C#, but it IS getting updated on the backend.
So here's the SQL Call:
SqlConnection sqlConn;
SqlCommand sqlCommand;

using (sqlConn = new SqlConnection(ConfigurationManager.ConnectionStrings["AppMRIConnection"].ConnectionString))
{
    using (sqlCommand = new SqlCommand(String.Format("EXEC sp_PullFinalDataSetPart '{0}', '{1}'", sLocation, sOutputFileType), sqlConn))
    {
        sqlConn.Open();
        sqlCommand.ExecuteNonQuery();
        sqlConn.Close();
    }
}
That truncates the FinalDataSetPart table and re-loads it with X new records.
This is the call in C#
List<FinalDataSetPart> lstFinalPart = db.FinalDataSetPart.ToList();
This call will ALWAYS get the first finaldatasetpart table loaded regardless of what is actually in the table. That call is correctly inside the loop (I can break into the code and see it calling that method every loop iteration).
Has anyone seen anything like this before?!
Any thoughts/help/tips would be GREATLY appreciated.
Thanks!
Do the IDs change in the temporary table when you pull in new data? EF won't detect changes to the data if the primary IDs don't change.
Do you drop and recreate the context every time you grab new data?
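If the context is being reused across iterations, a sketch of the usual fix looks something like this (YourDbContext and moreWorkToDo are hypothetical names standing in for your generated context class and loop condition):
while (moreWorkToDo)
{
    // run sp_PullFinalDataSetPart to repopulate the FinalDataSetPart table ...

    // use a fresh context for every iteration so EF cannot hand back the rows
    // it already materialized and cached on a previous pass
    using (var db = new YourDbContext())
    {
        List<FinalDataSetPart> lstFinalPart = db.FinalDataSetPart.ToList();
        // process lstFinalPart ...
    }
}
Depending on the EF version, querying with AsNoTracking() (or setting MergeOption.OverwriteChanges on an ObjectQuery) can also stop stale, cached entities from being returned.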
I run this query
"insert into students (StudentName) values ('reza');insert into Records (RecordValue,StudentID)" +
" values (20,##IDENTITY)";
in C# and get the following exception:
Characters found after end of SQL statement
I guess you want to retrieve the identity of the newly inserted student and then insert that into the "Records" table, right?
I would strongly suggest you use SCOPE_IDENTITY() instead of ##IDENTITY which has some problems if you have e.g. triggers on your table. Pinal Dave has a great blog post about those problems.
Also, if you call SQL Server from C#, I'd strongly recommend you use the native .NET SQL Server provider (SqlConnection, SqlCommand, etc.) - not OleDbCommand.
Try this
using (SqlConnection _con = new SqlConnection("server=(local);database=TEST;integrated security=SSPI;"))
{
    string queryStmt = "INSERT INTO dbo.Students (StudentName) VALUES('reza'); " +
                       "INSERT INTO dbo.Records(RecordID, StudentID) VALUES (20, SCOPE_IDENTITY());";

    using (SqlCommand _cmd = new SqlCommand(queryStmt, _con))
    {
        try
        {
            _con.Open();
            _cmd.ExecuteNonQuery();
            _con.Close();
        }
        catch (Exception exc)
        {
            string msg = exc.Message;
        }
    }
}
This certainly works, I just tested it successfully in my setting.
I just ran this code and it worked out:
SqlConnection cn = new SqlConnection("...");
SqlCommand cm = cn.CreateCommand();
cm.CommandText =
    "INSERT INTO MyTable (FieldA) VALUES ('Sample'); " +
    "INSERT INTO MyTable (FieldB) VALUES (@@Identity); ";
cn.Open();
cm.ExecuteNonQuery();
Maybe you need to add a space after that first semicolon character.
You can do this command much as shown - you do not need to have a temporary variable nor do you need a "GO" or a space after the semicolon. I do agree with Marc_s that you should use SCOPE_IDENTITY() to avoid problems with other transactions sneaking a value in.
The question is: why do you have quotes around your statement and a semicolon at the end? Clearly you are pulling this from code and that is where I'd look. So...first, run this command in SQL Server Management Studio or the like to verify that it works. Once you do verify it (it should work assuming your table structure is what I think it is) then figure out what your code is doing wrong.
Update: I am laughing here as my answer is "correcting" lots of other answers that are disappearing as it becomes obvious that they are not right.
You should wrap this up into a stored procedure because logically it's one unit of work - and you may want to call it from more than one location too.
I would also seriously consider whether you should wrap the two statements up into a transaction - you wouldn't want the insert into students succeeding and the insert into records failing. OK, that's an edge condition, but it's easy to guard against in an SP and makes for more professional code.
Another advantage of using an SP in this case is that, as you're actioning a couple of inserts already, it's quite possible that you'll want to extend this later - insert into a finance table, for example. If you wrap it up into an SP you can just alter the SP rather than having to grep the code for all instances of a new student insert, amend, and recompile.
You need to create a Stored Procedure that contains those SQL statements, and then execute the Stored Procedure from your C# code.
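As a sketch of the calling side (dbo.usp_AddStudentWithRecord is a hypothetical name for a procedure that performs both INSERTs inside a transaction and hands the new student id back through an output parameter):
using (var con = new SqlConnection("server=(local);database=TEST;integrated security=SSPI;"))
using (var cmd = new SqlCommand("dbo.usp_AddStudentWithRecord", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@StudentName", SqlDbType.NVarChar, 50).Value = "reza";
    cmd.Parameters.Add("@RecordValue", SqlDbType.Int).Value = 20;

    // the procedure returns the new student id via an output parameter
    SqlParameter newId = cmd.Parameters.Add("@StudentID", SqlDbType.Int);
    newId.Direction = ParameterDirection.Output;

    con.Open();
    cmd.ExecuteNonQuery();
    int studentId = (int)newId.Value;
}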
I have triggers that manipulate and insert a lot of data into a Change tracking table for audit purposes on every insert, update and delete.
This trigger does its job very well, by using it we are able to log the desired oldvalues/newvalues as per the business requirements for every transaction.
However, in some cases where the source table has a lot of columns, it can take up to 30 seconds for the transaction to complete, which is unacceptable.
Is there a way to make the trigger run asynchronously? Any examples?
You can't make the trigger run asynchronously, but you could have the trigger synchronously send a message to a SQL Service Broker queue. The queue can then be processed asynchronously by a stored procedure.
These articles show how to use Service Broker for async auditing and should be useful:
Centralized Asynchronous Auditing with Service Broker
Service Broker goodies: Cross Server Many to One (One to Many) scenario and How to troubleshoot it
SQL Server 2014 introduced a very interesting feature called Delayed Durability. If you can tolerate losing a few rows in case of a catastrophic event, like a server crash, you could really boost your performance in scenarios like yours.
Delayed transaction durability is accomplished using asynchronous log
writes to disk. Transaction log records are kept in a buffer and
written to disk when the buffer fills or a buffer flushing event takes
place. Delayed transaction durability reduces both latency and
contention within the system
The database containing the table must first be altered to allow delayed durability.
ALTER DATABASE dbname SET DELAYED_DURABILITY = ALLOWED
Then you could control the durability on a per-transaction basis.
begin tran
insert into ChangeTrackingTable select * from inserted
commit with(DELAYED_DURABILITY=ON)
The transaction will be committed as durable if the transaction is cross-database, so this will only work if your audit table is located in the same database as the trigger.
It is also possible to alter the database to FORCED instead of ALLOWED. This causes all transactions in the database to become delayed durable.
ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED
For delayed durability, there is no difference between an unexpected
shutdown and an expected shutdown/restart of SQL Server. Like
catastrophic events, you should plan for data loss. In a planned
shutdown/restart some transactions that have not been written to disk
may first be saved to disk, but you should not plan on it. Plan as
though a shutdown/restart, whether planned or unplanned, loses the
data the same as a catastrophic event.
This strange defect will hopefully be addressed in a future release, but until then it may be wise to make sure the 'sp_flush_log' procedure is executed automatically when SQL Server is restarting or shutting down.
To perform asynchronous processing you can use Service Broker, but it isn't the only option; you can also use CLR objects.
The following is an example of a stored procedure (AsyncProcedure) that asynchronously calls another procedure (SyncProcedure):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Runtime.Remoting.Messaging;
using System.Diagnostics;
public delegate void AsyncMethodCaller(string data, string server, string dbName);
public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void AsyncProcedure(SqlXml data)
    {
        AsyncMethodCaller methodCaller = new AsyncMethodCaller(ExecuteAsync);
        string server = null;
        string dbName = null;

        using (SqlConnection cn = new SqlConnection("context connection=true"))
        using (SqlCommand cmd = new SqlCommand("SELECT @@SERVERNAME AS [Server], DB_NAME() AS DbName", cn))
        {
            cn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                server = reader.GetString(0);
                dbName = reader.GetString(1);
            }
        }

        methodCaller.BeginInvoke(data.Value, server, dbName, new AsyncCallback(Callback), null);
        //methodCaller.BeginInvoke(data.Value, server, dbName, null, null);
    }

    private static void ExecuteAsync(string data, string server, string dbName)
    {
        string connectionString = string.Format("Data Source={0};Initial Catalog={1};Integrated Security=SSPI", server, dbName);

        using (SqlConnection cn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SyncProcedure", cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@data", SqlDbType.Xml).Value = data;
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    private static void Callback(IAsyncResult ar)
    {
        AsyncResult result = (AsyncResult)ar;
        AsyncMethodCaller caller = (AsyncMethodCaller)result.AsyncDelegate;
        try
        {
            caller.EndInvoke(ar);
        }
        catch (Exception ex)
        {
            // handle the exception
            //Debug.WriteLine(ex.ToString());
        }
    }
}
It uses asynchronous delegates to call SyncProcedure:
CREATE PROCEDURE SyncProcedure(@data xml)
AS
INSERT INTO T(Data) VALUES (@data)
Example of calling AsyncProcedure:
EXEC dbo.AsyncProcedure N'<doc><id>1</id></doc>'
Unfortunately, the assembly requires UNSAFE permission.
I wonder if you could tag a record for change tracking by inserting into a "to process" table, including who made the change, etc.
Then another process could come along and copy the rest of the data on a regular basis.
There's a basic conflict between "does its job very well" and "unacceptable", obviously.
It sounds to me that you're trying to use triggers the same way you would use events in an OO procedural application, which IMHO doesn't map.
I would call any trigger logic that takes 30 seconds - no, even more than 0.1 seconds - dysfunctional. I think you really need to redesign your functionality and do it some other way. I'd say "if you want to make it asynchronous", but I don't think this design makes sense in any form.
As far as "asynchronous triggers", the basic fundamental conflict is that you could never include such a thing between BEGIN TRAN and COMMIT TRAN statements because you've lost track of whether it succeeded or not.
Create history table(s). While updating (/deleting/inserting) main table, insert old values of record (deleted pseudo-table in trigger) into history table; some additional info is needed too (timestamp, operation type, maybe user context). New values are kept in live table anyway.
This way triggers run fast(er) and you can shift slow operations to log viewer (procedure).
From SQL Server 2008 you can use the CDC feature for automatically logging changes, which is purely asynchronous. Find more details here.
Not that I know of, but are you inserting values into the audit table that also exist in the base table? If so, you could consider tracking just the changes. An insert would then track the change time, user, etc. and a bunch of NULLs (in effect the before values). An update would have the change time, user, etc. and the before value of the changed column only. A delete has the change time, etc. and all the values.
Also, do you have an audit table per base table, or one audit table for the whole DB? Of course the latter can more easily result in waits, as each transaction tries to write to the one table.
I suspect that your trigger is one of those generic CSV/text-generating triggers designed to log all changes for all tables in one place. Good in theory (perhaps...), but difficult to maintain and use in practice.
If you could run it asynchronously (which would still require storing the data somewhere for logging again later), then you are not really auditing, and you don't have a history to use either.
Perhaps you could look at the trigger execution plan and see what bit is taking the longest?
Can you change how you audit, say, to per table? You could split the current log data into the relevant tables.