I have an entity whose Id column is generated using hilo.
Within a transaction, I create a new entity and call SaveOrUpdate() in order to get its hilo-generated Id (I need to write that Id to another DB).
Later on, within the same transaction, I update the new entity (just a simple update of a simple property) and in the end I call SaveOrUpdate() again.
I see that the SQL commands generated are first an INSERT and then an UPDATE, but what I want is just a single INSERT with the final state of the entity. Is that possible? Am I doing something wrong?
EDIT: added code sample
Here's a very simplified example in pseudo code:
Person newPerson = new Person(); // Person is a mapped entity
newPerson.Name = "foo";
_session.SaveOrUpdate(newPerson); // generates INSERT statement
newPerson.BirthDate = DateTime.Now;
_session.SaveOrUpdate(newPerson); // generates UPDATE statement
// assume the session transaction was opened earlier and is disposed correctly; omitted for simplicity
_session.Transaction.Commit();
The point is that with ORM tools like NHibernate, we work in a different way than we did with ADO.NET.
While ADO.NET commands and their Execute() family of methods cause immediate SQL statement execution on the DB server, with NHibernate it is dramatically different.
We are working with an ISession. The session can be thought of as a C# collection in memory. All the Save(), SaveOrUpdate(), Update(), Delete() ... calls are executed against that in-memory object representation. No SQL command is executed and no low-level ADO.NET call is made at the moment you call these methods.
That abstraction allows NHibernate to optimize the final batch of SQL statements based on all the information gathered in the ISession. That's why you will never see an INSERT followed by an UPDATE for the same new entity while working with one session, unless you explicitly call the magical Flush() or change the FlushMode.
By calling Flush() we are effectively saying: NHibernate, now is the time to execute the commands. In other scenarios it is usually good enough to leave the timing to NHibernate...
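To illustrate, here is a minimal sketch of the deferred behaviour (assuming Person is mapped with a hilo id generator and the default FlushMode; the Id property and the factory variable are illustrative):

using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var person = new Person { Name = "foo" };
    session.Save(person);             // hilo assigns the Id in memory; no SQL is executed yet
    long idForOtherDb = person.Id;    // the Id is already available here

    person.BirthDate = DateTime.Now;  // still only changes the in-memory state

    tx.Commit();                      // flush happens here: a single INSERT with the final values
}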
See here:
- 9.6. Flush
Firstly, I apologise if this is a really stupid question.
I had a question about dealing correctly with SQL statements within Yii. Here's a small code example.
public function actionCreate($id) {
    $cmd = Yii::app()->db->createCommand();
    // insert() takes the table name and a column => value array
    $cmd->insert('table_1', array(
        'user_id' => (int) $id,
    ));
}
What's the correct way to confirm this query worked? Is it try/catch blocks?
The reason I ask is that this could fail if it's passed a bad parameter, but on a couple of tables I also have DB constraints that could result in a failure, so I wanted to make sure I handled everything properly rather than blanket-handling them.
From the official documentation:

Executing SQL Statements

Once a database connection is established, SQL statements can be executed using CDbCommand. One creates a CDbCommand instance by calling CDbConnection::createCommand() with the specified SQL statement:
$connection=Yii::app()->db; // assuming you have configured a "db" connection
// If not, you may explicitly create a connection:
// $connection=new CDbConnection($dsn,$username,$password);
$command=$connection->createCommand($sql);
// if needed, the SQL statement may be updated as follows:
// $command->text=$newSQL;
A SQL statement is executed via CDbCommand in one of the following two ways:
And here is the relevant part:
execute(): performs a non-query SQL statement, such as INSERT, UPDATE
and DELETE. If successful, it returns the number of rows that are
affected by the execution.
Btw, insert() is a low-level method that's used internally by Active Record (AR). Why don't you simply use AR instead?
With Yii's Gii tool you automatically get a model for table_1, and you can find, insert, update, and delete through it. Example:
$model = new Table1ModelName;
$model->user_id= $id;
$model->name= $user_name;
...
$model->save();
There are still many more interesting things you may want to study:
Yii: Working with Active Record
I have Dapper ORM in my project and I have to save a lot of data (1,200,000 rows) to the database, but inserting them in a transaction with Dapper is very slow and I want it to be fast. With NHibernate (stateless session) it is also slow.
I think Dapper is fast in general, because fetching 700,000 rows takes 33 seconds with NHibernate but only 9 seconds with Dapper.
How can I solve this problem?
My code is:
IDbTransaction trans = connection.BeginTransaction();
connection.Execute(@"
    insert DailyResult(Id, PersonId, DateTaradod, DailyTaradods)
    values(@Id, @PersonId, @DateTaradod, @DailyTaradods)", entity, trans);
trans.Commit();
There is no mechanism to make inserting 1,200,000 rows in a transaction instant via any regular ADO.NET API. That simply isn't what that API is intended for.
For what you want, it sounds like you should be using SqlBulkCopy. This supports transactions, and you can use FastMember to help here; for example:
IEnumerable<YourEntity> source = ...
using(var bcp = new SqlBulkCopy(
connection, SqlBulkCopyOptions.UseInternalTransaction))
using(var reader = ObjectReader.Create(source,
"Id", "PersonId", "DateTaradod", "DailyTaradods"))
{
bcp.DestinationTableName = "DailyResult";
bcp.WriteToServer(reader);
}
It also supports external transactions, but if you are going to "create tran, push, commit tran" you might as well use the internal transaction.
If you don't want to use SqlBulkCopy, you can also look at table-valued-parameter approaches, but SqlBulkCopy would be my recommended API when dealing with this volume.
Note: if the table has more columns than Id, PersonId, DateTaradod and DailyTaradods, you can specify explicit bcp.ColumnMappings to tweak how the insert behaves.
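For example, here is a minimal sketch of explicit column mappings, reusing the connection and source from the example above (any extra columns on the destination table are simply left unmapped and keep their database defaults):

using (var bcp = new SqlBulkCopy(connection, SqlBulkCopyOptions.UseInternalTransaction))
using (var reader = ObjectReader.Create(source,
    "Id", "PersonId", "DateTaradod", "DailyTaradods"))
{
    bcp.DestinationTableName = "DailyResult";

    // map each source field to its destination column by name;
    // any destination column not listed here is not written to
    bcp.ColumnMappings.Add("Id", "Id");
    bcp.ColumnMappings.Add("PersonId", "PersonId");
    bcp.ColumnMappings.Add("DateTaradod", "DateTaradod");
    bcp.ColumnMappings.Add("DailyTaradods", "DailyTaradods");

    bcp.WriteToServer(reader);
}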
I have triggers that manipulate and insert a lot of data into a change tracking table for audit purposes on every insert, update, and delete.
The trigger does its job very well: by using it we are able to log the desired old values/new values for every transaction, as the business requirements demand.
However, in some cases where the source table has a lot of columns, it can take up to 30 seconds for the transaction to complete, which is unacceptable.
Is there a way to make the trigger run asynchronously? Any examples?
You can't make the trigger run asynchronously, but you could have the trigger synchronously send a message to a SQL Service Broker queue. The queue can then be processed asynchronously by a stored procedure.
These articles show how to use Service Broker for async auditing and should be useful:
Centralized Asynchronous Auditing with Service Broker
Service Broker goodies: Cross Server Many to One (One to Many) scenario and How to troubleshoot it
SQL Server 2014 introduced a very interesting feature called Delayed Durability. If you can tolerate losing a few rows in case of a catastrophic event, like a server crash, you could really boost your performance in scenarios like yours.
Delayed transaction durability is accomplished using asynchronous log
writes to disk. Transaction log records are kept in a buffer and
written to disk when the buffer fills or a buffer flushing event takes
place. Delayed transaction durability reduces both latency and
contention within the system
The database containing the table must first be altered to allow delayed durability.
ALTER DATABASE dbname SET DELAYED_DURABILITY = ALLOWED
Then you could control the durability on a per-transaction basis.
begin tran
insert into ChangeTrackingTable select * from inserted
commit with (DELAYED_DURABILITY = ON)
The transaction will be committed as durable if the transaction is cross-database, so this will only work if your audit table is located in the same database as the trigger.
It is also possible to alter the database to FORCED instead of ALLOWED. This causes all transactions in the database to become delayed durable.
ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED
For delayed durability, there is no difference between an unexpected
shutdown and an expected shutdown/restart of SQL Server. Like
catastrophic events, you should plan for data loss. In a planned
shutdown/restart some transactions that have not been written to disk
may first be saved to disk, but you should not plan on it. Plan as
though a shutdown/restart, whether planned or unplanned, loses the
data the same as a catastrophic event.
This strange defect will hopefully be addressed in a future release, but until then it may be wise to automatically execute the sp_flush_log procedure when SQL Server is restarting or shutting down.
To perform asynchronous processing you can use Service Broker, but it isn't the only option: you can also use CLR objects.
The following is an example of a stored procedure (AsyncProcedure) that asynchronously calls another procedure (SyncProcedure):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Runtime.Remoting.Messaging;
using System.Diagnostics;
public delegate void AsyncMethodCaller(string data, string server, string dbName);
public partial class StoredProcedures
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void AsyncProcedure(SqlXml data)
{
AsyncMethodCaller methodCaller = new AsyncMethodCaller(ExecuteAsync);
string server = null;
string dbName = null;
using (SqlConnection cn = new SqlConnection("context connection=true"))
using (SqlCommand cmd = new SqlCommand("SELECT @@SERVERNAME AS [Server], DB_NAME() AS DbName", cn))
{
cn.Open();
using (SqlDataReader reader = cmd.ExecuteReader())
{
reader.Read();
server = reader.GetString(0);
dbName = reader.GetString(1);
}
}
methodCaller.BeginInvoke(data.Value, server, dbName, new AsyncCallback(Callback), null);
//methodCaller.BeginInvoke(data.Value, server, dbName, null, null);
}
private static void ExecuteAsync(string data, string server, string dbName)
{
string connectionString = string.Format("Data Source={0};Initial Catalog={1};Integrated Security=SSPI", server, dbName);
using (SqlConnection cn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SyncProcedure", cn))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("@data", SqlDbType.Xml).Value = data;
cn.Open();
cmd.ExecuteNonQuery();
}
}
private static void Callback(IAsyncResult ar)
{
AsyncResult result = (AsyncResult)ar;
AsyncMethodCaller caller = (AsyncMethodCaller)result.AsyncDelegate;
try
{
caller.EndInvoke(ar);
}
catch (Exception ex)
{
// handle the exception
//Debug.WriteLine(ex.ToString());
}
}
}
It uses asynchronous delegates to call SyncProcedure:
CREATE PROCEDURE SyncProcedure(@data xml)
AS
INSERT INTO T(Data) VALUES (@data)
Example of calling AsyncProcedure:
EXEC dbo.AsyncProcedure N'<doc><id>1</id></doc>'
Unfortunately, the assembly requires UNSAFE permission.
I wonder if you could tag a record for the change tracking by inserting into a "to process" table, including who did the change etc.
Then another process could come along and copy the rest of the data on a regular basis.
There's a basic conflict between "does its job very well" and "unacceptable", obviously.
It sounds to me that you're trying to use triggers the same way you would use events in an OO procedural application, which IMHO doesn't map.
I would call any trigger logic that takes 30 seconds (no, anything more than 0.1 second) dysfunctional. I think you really need to redesign your functionality and do it some other way. I'd say "if you want to make it asynchronous", but I don't think this design makes sense in any form.
As far as "asynchronous triggers" go, the fundamental conflict is that you could never include such a thing between BEGIN TRAN and COMMIT TRAN statements, because you would lose track of whether it succeeded or not.
Create history table(s). When updating (/deleting/inserting) the main table, insert the old values of the record (the deleted pseudo-table in the trigger) into the history table; some additional info is needed too (timestamp, operation type, maybe user context). The new values are kept in the live table anyway.
This way the triggers run fast(er), and you can shift the slow operations to a log viewer (procedure).
From SQL Server 2008 you can use the CDC (Change Data Capture) feature for automatically logging changes, which is purely asynchronous. Find more details here.
Not that I know of, but are you inserting values into the audit table that also exist in the base table? If so, you could consider tracking just the changes. An insert would then record the change time, user, etc. and a bunch of NULLs (in effect the before values). An update would have the change time, user, etc. and the before value of the changed column only. A delete has the change time, etc. and all the values.
Also, do you have an audit table per base table or one audit table for the whole DB? Of course the latter can more easily result in waits, as each transaction tries to write to the one table.
I suspect that your trigger is one of those generic CSV/text-generating triggers designed to log all changes for all tables in one place. Good in theory (perhaps...), but difficult to maintain and use in practice.
If you could run it asynchronously (which would still require storing the data somewhere for logging again later), then you are not auditing, nor do you have a history to use.
Perhaps you could look at the trigger execution plan and see which bit is taking the longest?
Can you change how you audit, say, to per table? You could split the current log data into the relevant tables.
I am kind of confused about how Flush (and NHibernate.ISession) works in NHibernate.
From my code, it seems that when I save an object using ISession.Save(entity), the object is saved directly to the database.
However, when I update an object using ISession.SaveOrUpdate(entity) or ISession.Update(entity), the object in the database is not updated; I need to call ISession.Flush in order to update it.
The procedure I use to update the object is as follows:
1. Obtain the object from the database by using ISession.Get(typeof(T), id)
2. Change the object property, for example, myCar.Color = "Green"
3. Commit it back to the database by using ISession.Update(myCar)
myCar is not updated in the database. However, if I call ISession.Flush afterwards, then it is.
When to use Flush, and when not to use it?
In many cases you don't have to care when NHibernate flushes.
You only need to call Flush if you created your own connection, because NHibernate doesn't know when you commit on it.
What is really important for you is the transaction. During the transaction, you are isolated from other transactions; this means you always see your own changes when you read from the database, and you don't see others' changes (unless they are committed). So you don't have to care when exactly NHibernate updates data in the database, as long as it happens before the commit. It is not visible to anyone else anyway.
NHibernate flushes:
- when you commit the transaction
- before queries, to ensure that you filter on the actual state in memory
- when you call Flush()
Example:
using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var entity = session.Get<Entity>(2);
    entity.Name = "new name";

    // no explicit Update() call is needed; NHibernate flushes the change on commit
    tx.Commit();
}
The entity is updated on commit: NHibernate sees that your session is dirty and flushes the changes to the database. You need Update and Save only if you made the changes outside of the session (that is, with a detached entity, an entity that is not known by the session).
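A minimal sketch of that detached case (illustrative names; detachedEntity was loaded by an earlier session that has since been closed):

// this session doesn't know the entity yet, so it must be reattached explicitly
using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    detachedEntity.Name = "changed while detached";
    session.Update(detachedEntity);  // reattach and schedule the UPDATE
    tx.Commit();
}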
Notes on performance: Flush not only performs the required SQL statements to update the database, it also searches for changes in memory. Since there is no dirty flag on POCOs, it needs to compare every property of every object in the session to its first-level cache. This may become a performance problem when it is done too often. There are a couple of things you can do to avoid performance problems:
Do not flush in loops
Avoid serialized objects (serialization is required to check for changes)
Use read-only entities when appropriate
Set mutable = false when appropriate
When using custom types in properties, implement efficient Equals methods
Carefully disable auto-flush when you are sure that you know what you are doing (see the sketch below).
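For that last point, a minimal sketch (FlushMode.Commit defers all flushing to the commit, so queries inside the transaction will not see pending in-memory changes; the factory variable is illustrative):

using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // only flush when the transaction commits, skipping the
    // automatic dirty-check that normally runs before each query
    session.FlushMode = FlushMode.Commit;

    // ... work with the session ...

    tx.Commit();
}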
NHibernate will only perform SQL statements when it is necessary. It will postpone the execution of SQL statements as long as possible.
For instance, when you save an entity which has an assigned id, it will likely postpone the execution of the INSERT statement.
However, when you insert an entity which has an auto-increment (identity) id, then NHibernate needs to execute the INSERT immediately, since that is the only way to obtain the id the database assigns to the entity.
When you explicitly call flush, then NHibernate will execute the SQL statements that are necessary for objects that have been changed / created / deleted in that session.
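A minimal sketch of the difference (assuming one entity mapped with an identity generator and one with hilo; the class names are illustrative):

// identity-mapped entity: the INSERT runs immediately on Save(),
// because the id only exists after the database has generated it
session.Save(new IdentityMappedEntity { Name = "a" });

// hilo-mapped entity: the id is computed in memory, so the INSERT
// is postponed until the session flushes (e.g. on commit)
session.Save(new HiloMappedEntity { Name = "b" });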
Flush
In NHibernate, I want to retrieve an instance, and put an exclusive lock on the record that represents the retrieved entity on the database.
Right now, I have this code:
With.Transaction (session, IsolationLevel.Serializable, delegate
{
ICriteria crit = session.CreateCriteria (typeof (TarificationProfile));
crit.SetLockMode (LockMode.Upgrade);
crit.Add (Expression.Eq ("Id", tarificationProfileId));
TarificationProfile profile = crit.UniqueResult<TarificationProfile> ();
nextNumber = profile.AttestCounter;
profile.AttestCounter++;
session.SaveOrUpdate (profile);
});
As you can see, I set the LockMode for this Criteria to 'Upgrade'.
This issues an SQL statement for SQL Server which uses the updlock and rowlock locking hints:
SELECT ... FROM MyTable with (updlock, rowlock)
However, I want to be able to use a real exclusive lock: that is, prevent others from reading this very same record until I have released the lock.
In other words, I want to be able to use an xlock locking hint instead of an updlock.
I don't know how (or even if) I can achieve that... Maybe somebody can give me some hints about this :)
If it is really necessary, I can use the SQLQuery functionality of NHibernate and write my own SQL query, but I'd like to avoid that as much as possible.
A HQL DML query will accomplish your update without needing a lock.
This is available in NHibernate 2.1, but is not yet in the reference documentation. The Java hibernate documentation is very close to the NHibernate implementation.
Assuming you are using ReadCommitted Isolation, you can then safely read your value back inside the transaction.
With.Transaction (session, IsolationLevel.Serializable, delegate
{
    session.CreateQuery("update TarificationProfile t set t.AttestCounter = 1 + t.AttestCounter where t.id = :id")
        .SetInt32("id", tarificationProfileId)
        .ExecuteUpdate();

    nextNumber = session.CreateQuery("select t.AttestCounter from TarificationProfile t where t.Id = :id")
        .SetInt32("id", tarificationProfileId)
        .UniqueResult<int>();
});
Depending on your table and column names, the generated SQL will be:
update TarificationProfile
set AttestCounter = 1 + AttestCounter
where Id = 1 /* @p0 */

select tarificati0_.AttestCounter as col_0_0_
from TarificationProfile tarificati0_
where tarificati0_.Id = 1 /* @p0 */
I doubt it can be done from NHibernate. Personally, I would use a stored procedure to do what you're trying to accomplish.
Update: Given the continued downvotes I'll expand on this. Frederick is asking how to use locking hints, which are syntax- and implementation-specific details of his underlying database engine, from his ORM layer. This is the wrong level to attempt to perform such an operation - even if it was possible (it isn't), the likelihood it would ever work consistently across all NHibernate-supported databases is vanishingly low.
It's great that Frederick's eventual solution didn't require pre-emptive exclusive locks (which kill performance and are generally a bad idea unless you know what you're doing), but my answer is valid. Anyone who stumbles across this question and wants to do exclusive-lock-on-read from NHibernate: firstly, don't; secondly, if you have to, use a stored procedure or a SQLQuery (see the sketch below).
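For the SQLQuery route, a minimal sketch (the XLOCK/ROWLOCK hints are SQL Server specific; the entity and variable names are taken from the question above):

// native SQL query that takes an exclusive row lock on the read itself
var profile = session.CreateSQLQuery(
        "SELECT * FROM TarificationProfile WITH (XLOCK, ROWLOCK) WHERE Id = :id")
    .AddEntity(typeof(TarificationProfile))
    .SetInt32("id", tarificationProfileId)
    .UniqueResult<TarificationProfile>();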
If all your reads are done with an IsolationLevel of Serializable and all your writes are also done with an IsolationLevel of Serializable, I don't see why you need to do any locking of database rows yourself.
So the serializable isolation keeps the data safe; now we still have the problem of possible deadlocks...
If the deadlocks are not common, just putting the [start transaction, read, update, save] sequence in a retry loop that runs when you hit a deadlock may be good enough, as in the sketch below.
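A minimal sketch of such a retry loop (assuming SQL Server, where the deadlock victim receives error 1205, and that NHibernate wraps the SqlException; names are illustrative):

// deadlock victims get SQL Server error 1205; walk the exception chain to find it
static bool IsDeadlock(Exception ex)
{
    for (; ex != null; ex = ex.InnerException)
    {
        var sql = ex as SqlException;
        if (sql != null && sql.Number == 1205)
            return true;
    }
    return false;
}

const int maxRetries = 3;
for (int attempt = 0; ; attempt++)
{
    try
    {
        using (var session = factory.OpenSession())
        using (var tx = session.BeginTransaction(IsolationLevel.Serializable))
        {
            var profile = session.Get<TarificationProfile>(tarificationProfileId);
            profile.AttestCounter++;
            tx.Commit();
        }
        break; // success, stop retrying
    }
    catch (Exception ex)
    {
        if (attempt >= maxRetries || !IsDeadlock(ex))
            throw;
        // we were the deadlock victim: retry the whole unit of work
    }
}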
Otherwise a simple "select for update" statement generated directly (i.e. not with NHibernate) could be used to stop another transaction from reading the row before it is changed.
However, I keep thinking that if the update rate is high enough to produce lots of deadlocks, an ORM may not be the correct tool for the update, or the database schema may need redesigning to avoid the value that has to be read/written (e.g. calculating it when reading the data).
You could use the "repeatable read" isolation level if you want to make sure that values you read from the database don't change during the transaction, but you have to do this in all critical transactions. Alternatively, lock the row in the critical reading transaction with an upgrade lock.
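A minimal sketch of opening such a transaction from NHibernate (illustrative names; the same isolation level must be used in every critical transaction for this to hold):

using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction(IsolationLevel.RepeatableRead))
{
    // the shared lock taken by this read is held until commit,
    // so the row cannot be changed underneath us
    var profile = session.Get<TarificationProfile>(tarificationProfileId);
    profile.AttestCounter++;
    tx.Commit();
}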