Getting a deadlock when executing UPDATEs at the same time - sql

I'm load testing my application and getting a deadlock error. The scenario is 10 different users inserting into and updating the database concurrently. I searched online and still couldn't find a way to solve it. Attached below is the sample code involved in the deadlock.
Can anyone give me some advice on resolving the deadlock? Thank you in advance.
SampleController:
public void onSubmit(UserAccount userAccount) {
    sampleBO.testDeadLock(userAccount.getUserAccountId());
}
SampleBO:
public void testInsert(Long id) {
    sampleDAO.testInsert4(id);
}
public void testDeadLock(Long id) {
    testInsert(id);
    sampleDAO.testUpdate4(id);
}
SampleDAO:
public void testInsert4(Long id) {
    StringBuffer sbSql = new StringBuffer();
    sbSql.append(" INSERT INTO Test ");
    sbSql.append(" ( ");
    sbSql.append(" id, ");
    sbSql.append(" note ");
    sbSql.append(" ) ");
    sbSql.append(" VALUES ");
    sbSql.append(" (");
    sbSql.append("" + id + ",");
    sbSql.append(" 'test' ");
    sbSql.append(" )");
    // Execute SQL using Spring's JDBC templates
    this.getSimpleJdbcTemplate().update(sbSql.toString());
}
public void testUpdate4(Long id) {
    StringBuffer sbSql = new StringBuffer();
    sbSql.append(" UPDATE Test WITH(ROWLOCK) SET ");
    sbSql.append(" note = 'test1111'");
    sbSql.append(" WHERE id=" + id);
    // Execute SQL using Spring's JDBC templates
    this.getSimpleJdbcTemplate().update(sbSql.toString());
}

If deadlocks are occurring, it is not from the code you've provided unless:
You have multiple instances of this test program running at the same time.
The above code is running asynchronously - but that does not appear to be the case.
There is other activity on the database while you are running these tests.
My money's on the last one.
You could find out why these deadlocks are happening by performing a SQL Trace and identifying exactly what is running and when.
Each to their own, but did you know you could declare your SQL query like this? The @ allows the text to go onto multiple lines and also escapes special characters like the backslash.
string sbSql = @"
    INSERT INTO Test (id, note)
    VALUES ({0}, 'test')";
sbSql = string.Format(sbSql, id);
The WITH(ROWLOCK) hint is probably not needed either. 99.9% of the time SQL Server will perform the relevant locking, and you should only explicitly force certain locks in complex or extraordinary situations.
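Unrelated to the deadlock itself, the string-concatenated SQL in the DAO is also worth replacing with bind parameters. A minimal sketch using the same SimpleJdbcTemplate and Test table as the question (a code-hygiene suggestion, not a fix for the deadlock):
public void testInsert4(Long id) {
    // Bind variables instead of concatenating the id into the SQL string.
    this.getSimpleJdbcTemplate().update(
            "INSERT INTO Test (id, note) VALUES (?, ?)", id, "test");
}
public void testUpdate4(Long id) {
    // No lock hint: let SQL Server pick the lock granularity.
    this.getSimpleJdbcTemplate().update(
            "UPDATE Test SET note = ? WHERE id = ?", "test1111", id);
}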

Related

Does Apache Ignite support distributed transactions?

I went through the Apache Ignite examples. I want Ignite to help solve distributed transactions. For example: my account is in DB A, my wife's account is in DB B, and I want to transfer money to my wife. So the transaction looks like this:
IgniteTransactions transactions = ignite.transactions();
p1.setSalary(500);
p2_1.setSalary(1500);
Transaction tx = transactions.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE);
try {
    cache.put(1L, p1);
    cache2.put(1L, p2_1);
    tx.commit();
} catch (Exception e) {
    tx.rollback();
}
But the cacheStore's write method looks like this:
public void write(Entry<? extends Long, ? extends Person> entry) throws CacheWriterException {
    System.out.println(" +++++++++++ single write");
    Long key = entry.getKey();
    Person val = entry.getValue();
    System.out.println(">>> Store write [key=" + key + ", val=" + val + ']');
    // try-with-resources so the connection is closed even when a statement fails
    try (Connection conn = dataSource.getConnection()) {
        int updated;
        // Try update first. If it does not work, then try insert.
        // Some databases would allow these to be done in one 'upsert' operation.
        try (PreparedStatement st = conn.prepareStatement(
                "update PERSON set orgId = ?, name = ?, salary = ? where id = ?")) {
            st.setLong(1, val.getOrgId());
            st.setString(2, val.getName());
            st.setLong(3, val.getSalary());
            st.setLong(4, val.getId());
            updated = st.executeUpdate();
        }
        // If update failed, try to insert.
        if (updated == 0) {
            try (PreparedStatement st = conn.prepareStatement(
                    "insert into PERSON (id, orgId, name, salary) values (?, ?, ?, ?)")) {
                st.setLong(1, val.getId());
                st.setLong(2, val.getOrgId());
                st.setString(3, val.getName());
                st.setLong(4, val.getSalary());
                st.executeUpdate();
            }
        }
    } catch (SQLException e) {
        throw new CacheWriterException("Failed to write object [key=" + key + ", val=" + val + ']', e);
    }
}
When part one commits and the salary is updated but the second part fails, part one cannot be rolled back. How can I commit or roll back both parts simultaneously? Does Ignite guarantee this, or do you have to do it yourself?
PS: why does Ignite say that it accelerates transactions? It seems that it only accelerates queries, not deletes or updates, because it still accesses the underlying database at the moment the in-memory transaction happens.
Can somebody explain this? I don't understand the principle of Ignite.
Ignite does support transactions. But there are two things to consider:
You need to define your cache as TRANSACTIONAL. The default is ATOMIC, which does not support transactions.
It does not currently support transactions using SQL. You need to use the key-value API.
I’m not sure where you’ve seen it said that Ignite is faster for transactions, but the general principle is that by keeping everything in memory, Ignite can be a lot quicker than legacy databases.
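For the first point, a minimal sketch of marking a cache transactional (the cache name and value type here are assumptions, not from the question):
// ATOMIC caches ignore transactions; the cache must be TRANSACTIONAL.
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);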
Apache Ignite expects the cache store not to fail. In your case, the upsert is fragile and will fail.
At the very least, transactional operations imply a transactional cache store. You need to observe the ongoing transaction in your cache store and only commit when told to.
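A rough sketch of that pattern using Ignite's CacheStoreSession (field names and connection handling are assumptions on my part, and the load/delete methods are omitted): keep one JDBC connection per session with auto-commit off, and commit or roll back only when sessionEnd is invoked.
public class PersonCacheStore extends CacheStoreAdapter<Long, Person> {
    // Ignite injects the store session that spans the whole transaction.
    @CacheStoreSessionResource
    private CacheStoreSession ses;

    private DataSource dataSource; // assumed to be configured elsewhere

    private Connection connection() throws SQLException {
        // Reuse one connection per session so all writes share a transaction.
        Connection conn = ses.attachment();
        if (conn == null) {
            conn = dataSource.getConnection();
            conn.setAutoCommit(false);
            ses.attach(conn);
        }
        return conn;
    }

    @Override public void write(Entry<? extends Long, ? extends Person> entry) {
        // ... the same upsert as above, but via connection() and with no commit here ...
    }

    @Override public void sessionEnd(boolean commit) {
        // Called once per transaction: commit or roll back the shared connection.
        try (Connection conn = ses.attachment()) {
            if (conn != null) {
                if (commit)
                    conn.commit();
                else
                    conn.rollback();
            }
        } catch (SQLException e) {
            throw new CacheWriterException("Failed to end store session", e);
        }
    }
}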

ActiveJDBC batch insert and transaction

What is the recommended way to insert a batch of records or none if the database raises an error for any of the inserts?
Here is my current code:
PreparedStatement ps = Base.startBatch("INSERT INTO table(col1) VALUES(?)");
for (MyModel m : myModels)
    Base.addBatch(ps, m.getCol1());
Base.executeBatch(ps);
ps.close();
This inserts records up to the first one that fails (if that happens).
I want all or nothing to be inserted, so I was thinking of wrapping the executeBatch():
Base.openTransaction();
Base.executeBatch(ps);
Base.commitTransaction();
If that is correct, should I call Base.rollbackTransaction() in some try/catch?
Should I also call ps.close() in a finally block?
Thanks!
Transacted batch operations are not any different from non-batch operations. Please see this: http://javalite.io/transactions#transacted-activejdbc-example for a typical pattern.
You will do this then (closing the statement in a finally block, as you suspected):
List<Person> myModels = new ArrayList<>();
PreparedStatement ps = null;
try {
    Base.openTransaction();
    ps = Base.startBatch("INSERT INTO table(col1) VALUES(?)");
    for (Person m : myModels) {
        Base.addBatch(ps, m.getCol1());
    }
    Base.executeBatch(ps);
    Base.commitTransaction();
} catch (Exception e) {
    Base.rollbackTransaction();
} finally {
    // Close the statement whether or not the batch succeeded.
    if (ps != null) {
        try { ps.close(); } catch (Exception ignore) { }
    }
}
This way, your data is intact in case of exceptions.

How to get the ID of inserted row in JDBC with old driver?

I want to get the auto-increment id of an inserted row. I know that there are a lot of examples of how to do that:
link1
link2
But I use HSQL 1.8.0.10, and the following code:
PreparedStatement ps = conn.prepareStatement("insert into dupa (v1) values(3)", Statement.RETURN_GENERATED_KEYS);
throws an exception:
java.sql.SQLException: This function is not supported
How do I get the id if the driver does not support the above solution? Is there any other way to get the auto-increment key of an inserted row? I want to handle as many drivers as possible, so I want to use the above code in a try block and use another approach in the catch block.
Second question: is it possible that the database does not support this feature, so that even with a new driver and the old database it will still not work? I tried to use the HSQL 2.3.2 driver but I cannot connect to the 1.8.0.10 database.
The following code illustrates how to retrieve generated keys from HSQLDB 2.2.9 and later using the included JDBC 4 driver. The method returns a two-element long[]: the first element contains the number of rows that were updated; the second contains the generated key, if any:
static final long[] doUpdate( ... ) {
    final long[] k = new long[] {0, KEY_UNDEFINED};
    PreparedStatement ps = null;
    try {
        ps = CXN.get().prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
        JdbcValue jv;
        for (int i = 0; i < data.size(); i++) {
            jv = data.get(i);
            jv.type().setJdbcParameter(i + 1, ps, jv);
        }
        k[NUM_OF_ROWS] = (long) ps.executeUpdate();
        if (k[NUM_OF_ROWS] > 0L) {
            try (ResultSet rs = ps.getGeneratedKeys()) {
                final String identColName = idCol.colName();
                while (rs.next()) {
                    if (k[ROW_CREATED] != KEY_UNDEFINED) throw new AssertionError();
                    k[ROW_CREATED] = rs.getLong(identColName);
                }
            }
        }
    } catch (SQLException e) { ... }
    finally {
        try { if (ps != null) ps.close(); } catch (SQLException e) { }
    }
    return k;
}
I am unable to say whether this approach will work with old versions of HSQLDB.
You will have to use some vendor-specific solution, e.g. in MySQL you would call the LAST_INSERT_ID() function.
I don't have a working installation of HSQL to test against, but you can give the highest-voted solution from this topic a try: how to return last inserted (auto incremented) row id in HSQL?
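Putting the two answers together into the try/catch fallback the question asks about, a rough sketch for HSQLDB (CALL IDENTITY() returns the last identity value for the current connection on 1.8; the table and values are taken from the question, and this assumes the old driver fails in prepareStatement itself, as it does in the question):
static Long insertAndGetId(Connection conn) throws SQLException {
    // Standard JDBC way first (works on newer drivers).
    try (PreparedStatement ps = conn.prepareStatement(
            "insert into dupa (v1) values(3)", Statement.RETURN_GENERATED_KEYS)) {
        ps.executeUpdate();
        try (ResultSet rs = ps.getGeneratedKeys()) {
            return rs.next() ? rs.getLong(1) : null;
        }
    } catch (SQLException e) {
        // Old driver: insert without RETURN_GENERATED_KEYS, then ask
        // HSQLDB for the last identity value on this connection.
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("insert into dupa (v1) values(3)");
            try (ResultSet rs = st.executeQuery("CALL IDENTITY()")) {
                return rs.next() ? rs.getLong(1) : null;
            }
        }
    }
}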

JPA Entity Manager - How to run SQL script file?

I have an SQL script file which drops and recreates various tables as well as inserts various records into these tables. The script runs fine when executed in the SQL query console; however, I need it to be executed by the Entity Manager.
Any ideas on how I would be able to do this?
Thanks,
H
Late to the party, but here's how I do it. A couple of things to note here:
My SQL file ("sql-queries.sql") is on the classpath - you could do this any other way that will get you an input stream...
My SQL file has 1 statement per line
I'm manually beginning/committing transactions, one for each line/statement in the file
Here's the method to execute the file:
void executeStatements(BufferedReader br, EntityManager entityManager) throws IOException {
    String line;
    while ((line = br.readLine()) != null) {
        entityManager.getTransaction().begin();
        entityManager.createNativeQuery(line).executeUpdate();
        entityManager.getTransaction().commit();
    }
}
Here's how I call it:
InputStream sqlFileInputStream = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("geo-data.sql");
// Convert input stream to something that can be read line-by-line
BufferedReader sqlFileBufferedReader = new BufferedReader(new InputStreamReader(sqlFileInputStream));
executeStatements(sqlFileBufferedReader, dao.getEntityManager());
I tested nominally with one transaction instead of one per statement (note that this means one bad query will break everything), and the time to execute was the same:
void executeStatements(BufferedReader br, EntityManager entityManager) throws IOException {
    String line;
    entityManager.getTransaction().begin();
    while ((line = br.readLine()) != null) {
        entityManager.createNativeQuery(line).executeUpdate();
    }
    entityManager.getTransaction().commit();
}
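If your script isn't one statement per line, a rough variant (my own assumption, not part of the answer above) is to split on ';' instead. This breaks if any string literal contains a semicolon, so treat it as a sketch:
void executeScript(Reader reader, EntityManager entityManager) throws IOException {
    // Read the whole script, then split on ';' (naive: assumes no ';' inside literals).
    StringBuilder sb = new StringBuilder();
    char[] buf = new char[4096];
    for (int n; (n = reader.read(buf)) != -1; ) {
        sb.append(buf, 0, n);
    }
    entityManager.getTransaction().begin();
    for (String stmt : sb.toString().split(";")) {
        if (!stmt.trim().isEmpty()) {
            entityManager.createNativeQuery(stmt).executeUpdate();
        }
    }
    entityManager.getTransaction().commit();
}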

Dapper.Net and the DataReader

I have a very strange error with Dapper:
there is already an open DataReader associated with this Command which must be closed first
But I don't use a DataReader! I just run a select query on my server application and take the first result:
//How I run the query:
public static T SelectVersion(IDbTransaction transaction = null)
{
    return DbHelper.DataBase.Connection.Query<T>(
        "SELECT * FROM [VersionLog] WHERE [Version] = (SELECT MAX([Version]) FROM [VersionLog])",
        null, transaction, commandTimeout: DbHelper.CommandTimeout).FirstOrDefault();
}
//And how I call this method:
public Response Upload(CommitRequest message) //It is called on the server from a client
{
    //Preparing data from CommitRequest
    using (var tr = DbHelper.DataBase.Connection.BeginTransaction(IsolationLevel.Serializable))
    {
        //Call my query here (note: this calls SelectVersion, and thus the query, twice)
        int v = SelectQueries<VersionLog>.SelectVersion(tr) != null ? SelectQueries<VersionLog>.SelectVersion(tr).Version : 0;
        int newVersion = v + 1; //update version
        //Saving changes from CommitRequest to db
        //Updated version is saved to the base too, maybe that is the problem?
        return new Response
        {
            Message = String.Empty,
            ServerBaseVersion = versionLog.Version,
        };
    }
}
Saddest of all, this exception appears at random times. I think the problem is concurrent access to the server from two clients.
Please help.
This sometimes happens if the model and the database schema don't match and an exception is raised inside Dapper.
If you really want to get to the bottom of this, the best way is to include the Dapper source in your project and debug.