I have been studying the Apache Ignite examples. I want Ignite to help solve distributed transactions. For example: my account is in DB A, my wife's account is in DB B, and I want to transfer money to her. So the transaction looks like this:
IgniteTransactions transactions = ignite.transactions();

p1.setSalary(500);
p2_1.setSalary(1500);

Transaction tx = transactions.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE);
try {
    cache.put(1L, p1);
    cache2.put(1L, p2_1);
    tx.commit();
}
catch (Exception e) {
    tx.rollback();
}
But the cache store's write method looks like this:
public void write(Entry<? extends Long, ? extends Person> entry) throws CacheWriterException {
    System.out.println(" +++++++++++ single write");

    Long key = entry.getKey();
    Person val = entry.getValue();

    System.out.println(">>> Store write [key=" + key + ", val=" + val + ']');

    try (Connection conn = dataSource.getConnection()) {
        int updated;

        // Try update first. If it does not work, then try insert.
        // Some databases would allow these to be done in one 'upsert' operation.
        try (PreparedStatement st = conn.prepareStatement(
            "update PERSON set orgId = ?, name = ?, salary = ? where id = ?")) {
            st.setLong(1, val.getOrgId());
            st.setString(2, val.getName());
            st.setLong(3, val.getSalary());
            st.setLong(4, val.getId());

            updated = st.executeUpdate();
        }

        // If update failed, try to insert.
        if (updated == 0) {
            try (PreparedStatement st = conn.prepareStatement(
                "insert into PERSON (id, orgId, name, salary) values (?, ?, ?, ?)")) {
                st.setLong(1, val.getId());
                st.setLong(2, val.getOrgId());
                st.setString(3, val.getName());
                st.setLong(4, val.getSalary());

                st.executeUpdate();
            }
        }
    }
    catch (SQLException e) {
        throw new CacheWriterException("Failed to write object [key=" + key + ", val=" + val + ']', e);
    }
}
If the first write has already committed and updated the salary, but the second write fails, the first one cannot be rolled back.
How can I commit or roll back both simultaneously? Does Ignite guarantee this, or do you have to do it yourself?
PS: why does Ignite claim that it accelerates transactions? It seems that it only accelerates queries, not deletes or updates, because it still hits the database at the moment the in-memory transaction happens.
Can somebody explain? I don't understand the principle behind Ignite.
Ignite does support transactions, but there are two things to consider:
You need to define your cache as TRANSACTIONAL. The default is ATOMIC, which does not support transactions (see the configuration sketch after this list).
It does not currently support transactions using SQL. You need to use the key-value API.
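For the first point, a minimal configuration sketch (the cache name and store class are illustrative, not taken from your code):

// imports: org.apache.ignite.configuration.CacheConfiguration,
// org.apache.ignite.cache.CacheAtomicityMode, javax.cache.configuration.FactoryBuilder
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // default is ATOMIC
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class)); // hypothetical store class
ccfg.setWriteThrough(true); // propagate puts to the underlying database

IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);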
I’m not sure where you’ve seen it said that Ignite is faster for transactions, but the general principle is that by keeping everything in memory, Ignite can be a lot quicker than legacy databases.
Apache Ignite expects that the cache store does not fail. In your case, the upsert is very fragile and will fail.
At the very least, transactional cache operations imply a transactional cache store: the store has to observe the Ignite transaction and only issue COMMIT when told to.
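One documented way to do that is to keep the JDBC connection in the store session and commit or roll back only in sessionEnd(). A sketch following the pattern from Ignite's CacheStore examples (class name and wiring are assumptions, not your exact code):

// imports: org.apache.ignite.cache.store.CacheStoreAdapter,
// org.apache.ignite.cache.store.CacheStoreSession,
// org.apache.ignite.resources.CacheStoreSessionResource,
// javax.cache.Cache.Entry, javax.cache.integration.CacheWriterException,
// javax.sql.DataSource, java.sql.Connection, java.sql.SQLException
public class PersonStore extends CacheStoreAdapter<Long, Person> {
    @CacheStoreSessionResource
    private CacheStoreSession ses; // injected by Ignite, scoped to the transaction

    private DataSource dataSource; // assumed wired up as in your store

    // One connection per session, with auto-commit off.
    private Connection connection() throws SQLException {
        Connection conn = ses.attachment();
        if (conn == null) {
            conn = dataSource.getConnection();
            conn.setAutoCommit(false); // defer COMMIT until sessionEnd()
            ses.attach(conn);
        }
        return conn;
    }

    @Override public void write(Entry<? extends Long, ? extends Person> entry) {
        // run the update/insert from your write() on connection() - but do NOT commit here
    }

    @Override public Person load(Long key) {
        return null; // SELECT via connection(); omitted in this sketch
    }

    @Override public void delete(Object key) {
        // DELETE via connection(); omitted in this sketch
    }

    // Ignite calls this once per transaction, after all writes, telling the
    // store whether to commit or roll back everything written in the session.
    @Override public void sessionEnd(boolean commit) {
        try (Connection conn = ses.attachment()) {
            if (conn != null) {
                if (commit)
                    conn.commit();
                else
                    conn.rollback();
            }
        }
        catch (SQLException e) {
            throw new CacheWriterException("Failed to end store session.", e);
        }
    }
}

Ignite also ships a CacheJdbcStoreSessionListener that handles this connection-per-session plumbing for you.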
To handle concurrency in my database:
Client A updates a row
Client B tries to update the same row
Client B needs to wait for Client A to commit its updates
Both the Client A and Client B instances are simulated using this code:
using (myEntities db = new myEntities())
{
    db.Database.Connection.Open();
    try
    {
        using (var scope = db.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            var test = db.customer_table.Where(x => x.id == 38).FirstOrDefault();
            test.bank_holder_name = "CLIENT NAME XXXX";
            db.SaveChanges(); // <== Client B stops here while Client A is still in progress.
                              // After Client A commits, this throws a "Deadlock found" error.
            scope.Commit();
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
This is not what I expected: Client B should wait and not be allowed to query any data for row id=38, but somehow it can proceed until SaveChanges and then throws an error at the end.
Thus, I suspected this might be caused by LINQ (an incorrect row/table lock). I edited my code as below:
using (myEntities db = new myEntities())
{
    db.Database.Connection.Open();
    try
    {
        using (var scope = db.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            // Client B stops here and proceeds only after Client A has completed.
            var test = db.Database.ExecuteSqlCommand("UPDATE customer_table SET bank_holder_name = 'CLIENT XXXXX' WHERE pu_id = 38");
            db.SaveChanges();
            scope.Commit();
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
Finally, the transaction works with the code above (not the LINQ version). This is so confusing - what does LINQ do behind the scenes that makes the transaction behave so inconsistently?
This is due to the EF code generating two SQL statements: a SELECT for the line:
var test = db.customer_table.Where(x => x.id == 38).FirstOrDefault();
...and a subsequent UPDATE for the SaveChanges() call.
With a serializable isolation level, both client A and client B take a shared lock on the record for the duration of the transaction when the SELECT statement runs. Then, when one or the other first tries to perform the UPDATE, it cannot get the requisite exclusive lock because the other client holds a shared lock on the record. The other client then tries to obtain an exclusive lock itself, and you have a deadlock scenario.
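Spelled out as a timeline (a sketch, with the SQL abbreviated):

-- A: SELECT ... WHERE id = 38   -- shared lock granted to A
-- B: SELECT ... WHERE id = 38   -- shared lock granted to B as well
-- A: UPDATE ... WHERE id = 38   -- blocks: needs an exclusive lock, B holds a shared one
-- B: UPDATE ... WHERE id = 38   -- blocks on A in turn: deadlock, one victim is chosen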
The ExecuteSqlCommand version only requires a single UPDATE statement, so no deadlock occurs.
The Serializable isolation level can massively reduce concurrency, and this example shows exactly why. You'll find that less stringent isolation levels allow the EF code to work, but at the risk of phantom records, non-repeatable reads, etc. These may well be risks you are willing to take, and/or mitigate against, in order to improve concurrency.
Don't fetch the entity first. Instead, create a "stub entity" and update that, e.g.
var test = new Customer() { id = 38 };
test.bank_holder_name = "CLIENT NAME XXXX";
db.Entry(test).Property(nameof(Customer.bank_holder_name)).IsModified = true;
db.SaveChanges();
Which translates to
SET NOCOUNT ON;
UPDATE [Customers] SET [bank_holder_name] = @p0
WHERE [id] = @p1;
SELECT @@ROWCOUNT;
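Note that this issues a single UPDATE with no preceding SELECT, so no shared lock is taken first and the lock-upgrade deadlock described above cannot occur.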
I have to insert a row into the database, but the problem is that the primary key is generated based on the total count of rows.
E.g. if the DB has 25601 rows, the ID of the newly inserted record would be CT25602.
I want to use transactions to avoid primary key collisions.
Here is the code I wrote.
public void CreateContact(ContactViewModel input)
{
    var transactionScopeOptions = new TransactionOptions
    {
        IsolationLevel = IsolationLevel.Serializable,
        Timeout = TimeSpan.MaxValue
    };

    using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required, transactionScopeOptions))
    {
        var contactNo = GenerateIdentity();
        var contact = MapContactFields(new NavContact { No_ = contactNo }, input);
        _db.Contacts.InsertOnSubmit(contact);
        _db.SubmitChanges();
        transaction.Complete();
    }
}
This code gives me deadlocks if two people try to insert a contact within a short timespan.
Any suggestions? Thank you.
Yes, the scenario you described is very likely to deadlock. I would recommend using a sequence instead. If that's not an option, one solution is to acquire an exclusive app lock in the transaction before scanning for the next identity. See sp_getapplock.
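A T-SQL sketch of the app-lock approach (the table, column, and lock-resource names are assumptions, not your schema):

BEGIN TRAN;

-- Serialize all contact-number generation behind one named lock.
EXEC sp_getapplock @Resource = 'ContactNumbering',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction';

-- Safe to scan for the next identity now; nobody else holds the lock.
DECLARE @next int = (SELECT COUNT(*) + 1 FROM Contacts);

INSERT INTO Contacts (No_) VALUES ('CT' + CAST(@next AS varchar(10)));

COMMIT; -- the app lock is released along with the transaction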
What is the recommended way to insert a batch of records, or none at all if the database raises an error for any of the inserts?
Here is my current code:
PreparedStatement ps = Base.startBatch("INSERT INTO table(col1) VALUES(?)");
for (MyModel m : myModels)
Base.addBatch(ps, m.getCol1());
Base.executeBatch(ps);
ps.close();
This inserts records up until the first one that fails (if that happens).
I want all or nothing to be inserted, so I was thinking of wrapping the executeBatch():
Base.openTransaction();
Base.executeBatch(ps);
Base.commitTransaction();
If that is correct, should I call Base.rollbackTransaction() in some try/catch?
Should I also call ps.close() in a finally block?
Thanks!
Transacted batch operations are no different from non-batch operations. Please see this for a typical pattern: http://javalite.io/transactions#transacted-activejdbc-example
You would do this:
List<Person> myModels = new ArrayList<>();
try {
    Base.openTransaction();
    PreparedStatement ps = Base.startBatch("INSERT INTO table(col1) VALUES(?)");
    for (Person m : myModels) {
        Base.addBatch(ps, m.getCol1());
    }
    Base.executeBatch(ps);
    ps.close();
    Base.commitTransaction();
}
catch (Exception e) {
    Base.rollbackTransaction();
}
This way, your data stays intact in case of exceptions.
I'm load testing the application and getting a deadlock error. The scenario is 10 different users concurrently inserting into and updating the database. I looked online and still couldn't find a way to solve it. Attached is the sample code involved in the deadlock.
Can anyone give me some advice on solving the deadlock? Thank you in advance.
SampleController:
onSubmit(userAccount)
{
    sampleBO.testDeadLock(userAccount.getUserAccountId());
}
SampleBO:
public void testInsert(Long id)
{
    sampleDAO.testInsert4(id);
}

public void testDeadLock(Long id)
{
    testInsert(id);
    sampleDAO.testUpdate4(id);
}
SampleDAO:
public void testInsert4(Long id)
{
    StringBuffer sbSql = new StringBuffer();
    sbSql.append(" INSERT INTO Test ");
    sbSql.append(" ( ");
    sbSql.append(" id, ");
    sbSql.append(" note ");
    sbSql.append(" ) ");
    sbSql.append(" VALUES ");
    sbSql.append(" (");
    sbSql.append("" + id + ",");
    sbSql.append(" 'test' ");
    sbSql.append(" )");

    // Execute SQL using Spring's JDBC templates.
    this.getSimpleJdbcTemplate().update(sbSql.toString());
}

public void testUpdate4(Long id)
{
    StringBuffer sbSql = new StringBuffer();
    sbSql.append(" UPDATE Test WITH(ROWLOCK) SET ");
    sbSql.append(" note = 'test1111'");
    sbSql.append(" WHERE id=" + id);

    // Execute SQL using Spring's JDBC templates.
    this.getSimpleJdbcTemplate().update(sbSql.toString());
}
If deadlocks are occurring, it is not from the code you've provided unless:
You have multiple instances of this test program running at the same time.
The above code was running asynchronously - but that does not appear to be the case.
There is other activity on the database when you are running these tests.
My money's on the last one.
You could find out why these deadlocks are happening by performing a SQL Trace and identifying exactly what is running and when.
Each to their own, but did you know you could declare your SQL query like this? The @ allows the text to span multiple lines and also means special characters like the backslash don't need escaping.

string sbSql = @"
    INSERT INTO Test (id, note)
    VALUES ({0}, 'test')";
sbSql = string.Format(sbSql, id);
The WITH(ROWLOCK) is probably not needed either. 99.9% of the time SQL Server will perform the relevant locking, and you should only explicitly force certain locks for complex or extraordinary situations.
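On the Java side, the same tidying can be done with parameter placeholders, which also removes the string concatenation. A sketch using the question's SimpleJdbcTemplate:

// Placeholders let the driver handle quoting; no StringBuffer needed.
public void testInsert4(Long id) {
    this.getSimpleJdbcTemplate().update(
        "INSERT INTO Test (id, note) VALUES (?, ?)", id, "test");
}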
Can I use a Criteria to execute a t-sql command to select the max value for a column in a table?
'select @cus_id = max(id) + 1 from customers'
Ta
Ollie
Use Projection:
session.CreateCriteria(typeof(Customer))
    .SetProjection(Projections.Max("Id"))
    .UniqueResult();
Max(id) + 1 is a very bad way to generate ids. If that's your goal, find another way to generate ids.
Edit: in answer to LnDCobra:
it's bad because it's hard to make sure that the max(id) you got is still the max(id) when you do the insert. If another process inserts a row, your insert will have the same id, and your insert will fail. (Or, conversely, the other process's insert will fail if your insert happened first.)
To prevent this, you have to prevent any other inserts/make your get and subsequent insert atomic, which generally means locking the table, which will hurt performance.
If you only lock against writes, the other process reads max(id), which is the same max(id) you got; you do your insert and release the lock, it inserts a duplicate id and fails. Or it tries to lock too, in which case it waits on you. If you lock against reads too, everybody waits on you. If it locks against writes as well, then it doesn't insert the duplicate id, but it does wait on your read and your write.
(And it breaks encapsulation: you should let the rdbms figure out its ids, not the client programs that connect to it.)
Generally, this strategy will either:
* break,
* require a bunch of "plumbing" code to make it work,
* significantly reduce performance,
* or all three,
and it will be slower, less robust, and need more hard-to-maintain code than just using the RDBMS's built-in sequences or generated autoincrement ids.
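For instance, on databases with native sequences (a T-SQL sketch for SQL Server 2012+; the sequence name is illustrative):

CREATE SEQUENCE CustomerIdSeq AS bigint START WITH 1;

-- Each call atomically hands out a distinct value; no table lock needed.
SELECT NEXT VALUE FOR CustomerIdSeq;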
The best approach is to make an additional Sequences table, where you can maintain the sequence target and value.
public class Sequence : Entity
{
    public virtual long? OwnerId { get; set; }
    public virtual SequenceTarget SequenceTarget { get; set; }
    public virtual bool IsLocked { get; set; }
    public virtual long Value { get; set; }

    public void GenerateNextValue()
    {
        Value++;
    }
}

public class SequenceTarget : Entity
{
    public virtual string Name { get; set; }
}
public long GetNewSequenceValueForZZZZ(long ZZZZId)
{
    var target =
        Session
            .QueryOver<SequenceTarget>()
            .Where(st => st.Name == "DocNumber")
            .SingleOrDefault();

    if (target == null)
    {
        throw new EntityNotFoundException(typeof(SequenceTarget));
    }

    return GetNewSequenceValue(ZZZZId, target);
}

protected long GetNewSequenceValue(long? ownerId, SequenceTarget target)
{
    var seqQry =
        Session
            .QueryOver<Sequence>()
            .Where(seq => seq.SequenceTarget == target);

    if (ownerId.HasValue)
    {
        seqQry.Where(seq => seq.OwnerId == ownerId.Value);
    }

    var sequence = seqQry.SingleOrDefault();

    if (sequence == null)
    {
        throw new EntityNotFoundException(typeof(Sequence));
    }

    // Re-read the sequence, in case it was already in the session.
    Session.Refresh(sequence);

    // Flip the IsLocked field so we acquire a lock on the record.
    // Configure dynamic update, so only this one field is updated.
    sequence.IsLocked = !sequence.IsLocked;
    Session.Update(sequence);

    // Force the update to the DB.
    Session.Flush();

    // Now that we hold the lock - re-read the record.
    Session.Refresh(sequence);

    // Generate the new value.
    sequence.GenerateNextValue();

    // Set the dummy field back.
    sequence.IsLocked = !sequence.IsLocked;

    // Update the sequence and force the changes to the DB.
    Session.Update(sequence);
    Session.Flush();

    return sequence.Value;
}
OwnerId - for when you need to maintain different sequences for the same entity, based on some kind of owner. For example, if you need to maintain document numbering within a contract, then OwnerId will be = contractId.
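Usage is then just a call to GetNewSequenceValueForZZZZ(contractId) inside the same session/transaction as the insert that consumes the number; the forced update on the Sequence row is what serializes competing writers.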