Rollback transactions - SQL

I have a service where I am doing something like below:
1. const users = await user.read(); // reads all users
2. const task = await task.create('taskId'); // creates a new task
3. const tasks = await task.create(users, task); // adds the above task for all users
Problem
If code line 3 fails, i.e. adding the task to all users, then the work done at code line 2 should be rolled back.
Edit 1
What I have tried.
As pointed out by delerik, deleting the created row is an option, but I have a feeling there is a better way to do this. Imagine n tables: manual clean-up quickly becomes a mess. (A transaction-based alternative is sketched below.)
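If the data lives in a relational database, the cleaner approach is to run steps 2 and 3 inside a single database transaction, so the rollback is automatic no matter how many tables are involved. The question doesn't name its stack, so here is a minimal sketch assuming C# with EF Core; TaskContext, TaskItem and UserTask are hypothetical types standing in for the question's user/task operations.
using Microsoft.EntityFrameworkCore;

public async Task AssignTaskToAllUsersAsync(TaskContext db)
{
    var users = await db.Users.ToListAsync();                   // step 1: read all users

    await using var tx = await db.Database.BeginTransactionAsync();
    try
    {
        var task = new TaskItem { Name = "taskId" };            // step 2: create a new task
        db.Tasks.Add(task);
        await db.SaveChangesAsync();

        foreach (var user in users)                             // step 3: add the task for every user
            db.UserTasks.Add(new UserTask { UserId = user.Id, TaskId = task.Id });
        await db.SaveChangesAsync();

        await tx.CommitAsync();                                 // both writes succeed together
    }
    catch
    {
        await tx.RollbackAsync();                               // step 3 failed, so step 2 is undone too
        throw;
    }
}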

DbUpdateConcurrencyException on inserting a new row in SQL Server using EF Core (expected to affect 1 row(s) but actually affected 0 row(s))

I am trying to insert data into one table using EF Core 5 with the repository pattern and unit of work.
Code Sample :
var stateData = new State
{
    StateId = state.StateId,
    Action = state.Action,
    Event = state.Event,
    ExecutedOn = DateTime.Now
};
_unitOfWork.GetRepository<State>().Add(stateData);
var result = _unitOfWork.Commit();
The GetRepository method, used to get the respective repository:
public IRepository<TEntity> GetRepository<TEntity>()
{
    return (IRepository<TEntity>)GetOrAddRepository(typeof(TEntity),
        new Repository<TEntity>(Context));
}
The Commit method:
public int Commit()
{
    return Context.SaveChanges();
}
I am trying to insert data into the State table, which has Id as its primary-key identity column; the other columns are StateId, Action, Event and ExecutedOn (datatype: datetime2).
The application runs on multiple nodes, so there are multiple insert requests at the same time from different nodes, each with different data.
I am getting DbUpdateConcurrencyException frequently while inserting the state records into the DB. Sometimes it works, but most of the time I get DbUpdateConcurrencyException with the message "Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions".
There is no update operation, yet I am still getting a concurrency exception.
I have tried all the solutions from similar questions, but no luck.
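One general mitigation, taken from the Microsoft page linked in the exception message, is to treat the exception as retryable. Below is a rough sketch around the question's own Add/Commit calls; the retry count and the detach step are assumptions, not part of the original code, and this works around the symptom rather than explaining why an insert reports zero affected rows.
using Microsoft.EntityFrameworkCore;

const int maxAttempts = 3;
for (int attempt = 1; ; attempt++)
{
    try
    {
        _unitOfWork.GetRepository<State>().Add(stateData);
        var result = _unitOfWork.Commit();   // wraps Context.SaveChanges()
        break;                               // insert succeeded
    }
    catch (DbUpdateConcurrencyException ex) when (attempt < maxAttempts)
    {
        // Another node won the race; detach the stale entries so the next
        // attempt starts from a clean change-tracker state.
        foreach (var entry in ex.Entries)
            entry.State = EntityState.Detached;
    }
}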

How to check if a "Send Query" is over?

I need to check whether my res from dbSendQuery() has finished.
My code is like this:
db <- dbConnect(drv = SQLite(), flags = SQLITE_RW, dbname = "db.sqlite", synchronous = "off")
dbBegin(db)
res <- dbSendQuery(db, "UPDATE Operation SET Name = 'teste' WHERE Id = 1")
if ("my SendQuery is over") {
  dbClearResult(res)
  dbCommit(db)
  dbDisconnect(db)
}
I need to know when it has finished so I can commit and then disconnect.
UPDATE 1
In my system, the example above can happen with more than one user simultaneously.
So when the first connection finishes, I need to complete its request and give the other connection the chance to run its own query.
dbSendQuery() always awaits completion. You can double-check by calling dbGetRowsAffected(res).
For SQL statements that are run for their side effect and do not return a value, dbSendStatement() is preferred.
The synchronous = "off" argument to dbConnect() is a misnomer: it defines when and how the data is written to disk; no multi-threading is involved here.

Querying over large data under NHibernate transaction

I understand that explicit transactions should be used even for reading data, but I am unable to understand why the code below runs much slower inside an NHibernate transaction (as opposed to running without one):
session.BeginTransaction();
var result = session.Query<Order>().Where(o => o.OrderNumber > 0).Take(100).ToList();
session.Transaction.Commit();
I can post more detailed unit-test code if needed, but when querying over 50,000 Order records, it takes about 1 second for this query to run inside NHibernate's explicit transaction, and only about 15-20 ms without one.
Update 1/15/2019
Here is the detailed code
[Test]
public void TestQueryLargeDataUnderTransaction()
{
    int count = 50000;
    using (var session = _sessionFactory.OpenSession())
    {
        Order order;
        // write a large amount of data
        session.BeginTransaction();
        for (int i = 0; i < count; i++)
        {
            order = new Order { OrderNumber = i, OrderDate = DateTime.Today };
            OrderLine ol1 = new OrderLine { Amount = 1 + i, ProductName = $"sun screen {i}", Order = order };
            OrderLine ol2 = new OrderLine { Amount = 2 + i, ProductName = $"banjo {i}", Order = order };
            order.OrderLines = new List<OrderLine> { ol1, ol2 };
            session.Save(order);
            session.Save(ol1);
            session.Save(ol2);
        }
        session.Transaction.Commit();

        Stopwatch s = Stopwatch.StartNew();
        // read the same data
        session.BeginTransaction();
        var result = session.Query<Order>().Where(o => o.OrderNumber > 0).Skip(0).Take(100).ToList();
        session.Transaction.Commit();
        s.Stop();
        Console.WriteLine(s.ElapsedMilliseconds);
    }
}
Your for-loop iterates 50000 times, and each iteration creates 3 objects. So by the time you reach the first call to Commit(), the session knows about 150000 objects that it will flush to the database at commit time or earlier (subject to your id generator policy and flush mode).
So far, so good. NHibernate is not necessarily optimised to handle that many objects in the session, but it can be acceptable provided one is careful.
On to the problem...
It's important to realize that committing the transaction does not remove the 150000 objects from the session.
When you later perform the query, it will notice that it is inside a transaction, in which case, by default, "auto-flushing" will be performed. This means that before sending the SQL query to the database, NHibernate will check if any of the objects known to the session has changes that might affect the outcome of the query (this is somewhat simplified). If such changes are found, they will be transmitted to the database before performing the actual SQL query. This ensures that the executed query will be able to filter based on changes made in the same session.
The extra second you notice is the time it takes NHibernate to iterate over the 150000 objects known to the session, checking for changes. The primary use cases for NHibernate rarely involve more than tens or a few hundred objects, in which case the time needed to check for changes is negligible.
You can use a new session for the query to avoid this effect, or you can call session.Clear() immediately after the first commit. (Note that for production code, session.Clear() can be dangerous.)
Additional: the auto-flushing happens when querying, but only inside a transaction. This behaviour can be controlled via session.FlushMode. During auto-flush, NHibernate will aim to flush only those objects that may affect the outcome of the query (judged by which database tables the query touches).
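For illustration, here are both workarounds as a sketch (session is the one from the question's test):
// Option 1: forget the 150000 tracked objects once the write transaction is done.
session.Transaction.Commit();          // end of the write transaction
session.Clear();                       // empties the session cache; any unflushed
                                       // changes are silently lost, hence "dangerous"

// Option 2: keep the objects but stop auto-flushing before queries.
session.FlushMode = FlushMode.Commit;  // flush only on Commit(), never before a query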
There is an additional effect to be aware of with regards to keeping sessions around. Consider this code:
using (var session = _sessionFactory.OpenSession())
{
    Order order;
    session.BeginTransaction();
    for (int i = 0; i < count; i++)
    {
        // Your code from above.
    }
    session.Transaction.Commit();

    // The order variable references the last order created. Let's modify it.
    order.OrderDate = DateTime.Today.AddDays(4);

    session.BeginTransaction();
    var result = session.Query<Order>().Skip(0).Take(100).ToList();
    session.Transaction.Commit();
}
What will happen to the change of the order date made after the first call to Commit()? That change will be persisted to the database when the query is performed in the second transaction, despite the fact that the modification itself happened before that transaction was started. Conversely, if you remove the second transaction, that modification will of course not be persisted.
There are multiple ways to manage sessions and transactions that can be used for different purposes. However, by far the easiest is to always follow this simple unit-of-work pattern (sketched in code after the list):
Open session.
Immediately open transaction.
Perform a reasonable amount of work.
Commit or rollback transaction.
Dispose transaction.
Dispose session.
Discard all objects loaded using the session. At this point they can still be used in memory, but any changes will not be persisted. Safer to just get rid of them.
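As code, with Order and _sessionFactory as in the question:
using (var session = _sessionFactory.OpenSession())            // 1. open session
using (var tx = session.BeginTransaction())                    // 2. immediately open transaction
{
    var result = session.Query<Order>()                        // 3. a reasonable amount of work
                        .Where(o => o.OrderNumber > 0)
                        .Take(100)
                        .ToList();
    tx.Commit();                                               // 4. commit; disposing an uncommitted
}                                                              //    transaction rolls it back (5-6)
// 7. treat result as detached, in-memory data from here on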

Sync Framework 2.1 taking too much time to sync for first time

I am using Sync Framework 2.1 to sync my database, unidirectionally only, i.e. from a source to a destination database. I am facing a problem while syncing the database for the first time. I have divided the tables into different groups so that there is less overhead while syncing. One table has 900k records in it. While syncing that table, the SyncOrchestrator.Synchronize() method never returns, and network usage and disk I/O go high. I have waited two days for the sync process to complete, but nothing happens. I have also checked in the SQL database using "sp_who2", and the process is in suspended mode. I have also run some queries found online, and they say the table_selectchanges procedure takes too much time.
I have used the following code to sync my database:
// set up the connections
var serverConn = new SqlConnection(sourceConnectionString);
var clientConn = new SqlConnection(destinationConnectionString);

// create the sync orchestrator
var syncOrchestrator = new SyncOrchestrator();

// set up the providers
var localProvider = new SqlSyncProvider(scopeName, clientConn);
var remoteProvider = new SqlSyncProvider(scopeName, serverConn);
localProvider.CommandTimeout = 0;
remoteProvider.CommandTimeout = 0;
localProvider.ObjectSchema = schemaName;
remoteProvider.ObjectSchema = schemaName;
syncOrchestrator.LocalProvider = localProvider;
syncOrchestrator.RemoteProvider = remoteProvider;

// set the direction of the sync session to Download
syncOrchestrator.Direction = SyncDirectionOrder.Download;

// execute the synchronization process
syncOrchestrator.Synchronize();
We had this problem. We removed all the foreign key constraints from the database for the first sync, which reduced the initial sync from over 4 hours to about 10 minutes. After the initial sync completed, we put the foreign key constraints back.
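Concretely, this can be scripted around the first Synchronize() call. A rough sketch follows; the helper and the script file names are ours, and the scripts must be plain T-SQL batches (SqlCommand does not understand the GO separator):
using System.Data.SqlClient;
using System.IO;

// Hypothetical helper: run a .sql script against a database.
static void RunScript(string connectionString, string scriptPath)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(File.ReadAllText(scriptPath), conn))
    {
        conn.Open();
        cmd.CommandTimeout = 0;   // dropping/creating FKs on large tables can be slow
        cmd.ExecuteNonQuery();
    }
}

// First-time sync only: drop the FKs, sync, then restore them.
RunScript(destinationConnectionString, "drop_foreign_keys.sql");
syncOrchestrator.Synchronize();
RunScript(destinationConnectionString, "create_foreign_keys.sql");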

Allocating integers from sequence with transaction in NHibernate

I have a sequence in the database, represented by a range. It has three fields: beginId, nextId and endId.
The task is to acquire the nextId from this range and to ensure that it is unique. The code may run in a highly parallel environment, with many threads on many machines.
What I need to do:
lock (database)
{
    var seq = GetSequence();
    var acquiredId = seq.NextId;
    seq.NextId++;
    Save(seq);
}
So I use this code:
using (ISession session = GetSessionFactory().OpenSession())
using (ITransaction transaction = session.BeginTransaction(IsolationLevel.RepeatableRead))
{
    var sequence = session.CreateCriteria<Sequence>().Single(); // This line is simplified
    var allocatedId = sequence.NextId;
    sequence.NextId++;
    session.SaveOrUpdate(sequence);
    transaction.Commit();
    return allocatedId;
}
But for some reason, when I ran this code under multiple threads for testing, the same id was handed out several times. I'm using a transaction with RepeatableRead isolation, but that doesn't help.
P.S. Id doesn't mean the Id of the table here; it's just the naming convention we use.
I've used .Lock().Upgrade when the data is read from the DB, and everything worked. Thanks to hazzik for the link. RepeatableRead alone doesn't help because two transactions can both read the same NextId before either of them writes; the upgrade lock makes the first reader take an exclusive row lock, so concurrent allocators block until it commits.
Now my code has this:
session.QueryOver<Sequence>().Lock().Upgrade.DoSomeFiltering().Single()
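Put together, the working allocation routine looks roughly like this (filtering elided as in the question, and NextId assumed to be a long):
public long AllocateId(ISessionFactory sessionFactory)
{
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        // Lock().Upgrade issues SELECT ... FOR UPDATE (or the dialect's
        // equivalent), so concurrent allocators block here until commit.
        var sequence = session.QueryOver<Sequence>()
                              .Lock().Upgrade
                              .SingleOrDefault();

        var allocatedId = sequence.NextId;
        sequence.NextId++;
        session.SaveOrUpdate(sequence);

        transaction.Commit();     // commit releases the row lock
        return allocatedId;
    }
}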