ActiveJDBC batch insert and transaction

What is the recommended way to insert a batch of records, or nothing at all if the database raises an error for any of the inserts?
Here is my current code:
PreparedStatement ps = Base.startBatch("INSERT INTO table(col1) VALUES(?)");
for (MyModel m : myModels)
    Base.addBatch(ps, m.getCol1());
Base.executeBatch(ps);
ps.close();
This inserts records until the first one fails (if any does).
I want all or nothing to be inserted, so I was thinking of wrapping the executeBatch() call:
Base.openTransaction();
Base.executeBatch(ps);
Base.commitTransaction();
If that is correct, should I call Base.rollbackTransaction() in some try/catch?
Should I also call ps.close() in a finally block?
Thanks!

Transacted batch operations are no different from non-batch operations. Please see http://javalite.io/transactions#transacted-activejdbc-example for a typical pattern.
You would do this:
List<Person> myModels = new ArrayList<>();
try {
    Base.openTransaction();
    PreparedStatement ps = Base.startBatch("INSERT INTO table(col1) VALUES(?)");
    for (Person m : myModels) {
        Base.addBatch(ps, m.getCol1());
    }
    Base.executeBatch(ps);
    ps.close();
    Base.commitTransaction();
} catch (Exception e) {
    Base.rollbackTransaction();
}
This way, your data stays intact in case of exceptions.
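To answer the follow-up questions: yes, roll back in the catch block; and if you also want the statement closed when the batch fails, close it in a finally block or, on Java 7+, use try-with-resources. A minimal sketch of that variant with the same Base API:
try {
    Base.openTransaction();
    // try-with-resources closes ps whether the batch succeeds or throws
    try (PreparedStatement ps = Base.startBatch("INSERT INTO table(col1) VALUES(?)")) {
        for (Person m : myModels) {
            Base.addBatch(ps, m.getCol1());
        }
        Base.executeBatch(ps);
    }
    Base.commitTransaction();
} catch (Exception e) {
    Base.rollbackTransaction();
}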

Related

Does Apache Ignite support distributed transactions

I have been studying the Apache Ignite examples, and I want Ignite to handle distributed transactions. For example: my account is in DB A, my wife's account is in DB B, and I want to transfer money to my wife. So the
transaction looks like this:
IgniteTransactions transactions = ignite.transactions();
p1.setSalary(500);
p2_1.setSalary(1500);
Transaction tx = transactions.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE);
try {
    cache.put(1L, p1);
    cache2.put(1L, p2_1);
    tx.commit();
} catch (Exception e) {
    tx.rollback();
}
But the cache store's write method looks like this:
public void write(Entry<? extends Long, ? extends Person> entry) throws CacheWriterException {
    System.out.println(" +++++++++++ single write");
    Long key = entry.getKey();
    Person val = entry.getValue();
    System.out.println(">>> Store write [key=" + key + ", val=" + val + ']');
    // try-with-resources closes the connection when done
    try (Connection conn = dataSource.getConnection()) {
        int updated;
        // Try update first. If it does not work, then try insert.
        // Some databases would allow these to be done in one 'upsert' operation.
        try (PreparedStatement st = conn.prepareStatement(
                "update PERSON set orgId = ?, name = ?, salary = ? where id = ?")) {
            st.setLong(1, val.getOrgId());
            st.setString(2, val.getName());
            st.setLong(3, val.getSalary());
            st.setLong(4, val.getId());
            updated = st.executeUpdate();
        }
        // If update failed, try to insert.
        if (updated == 0) {
            try (PreparedStatement st = conn.prepareStatement(
                    "insert into PERSON (id, orgId, name, salary) values (?, ?, ?, ?)")) {
                st.setLong(1, val.getId());
                st.setLong(2, val.getOrgId());
                st.setString(3, val.getName());
                st.setLong(4, val.getSalary());
                st.executeUpdate();
            }
        }
    }
    catch (SQLException e) {
        throw new CacheWriterException("Failed to write object [key=" + key + ", val=" + val + ']', e);
    }
}
When part one commits the salary update and the second part fails, part one cannot be rolled back.
How can I commit or roll back both parts together? Does Ignite guarantee this, or do you have to do it yourself?
P.S.: Why does Ignite claim to accelerate transactions? It seems to accelerate only queries, not deletes or updates, because it still accesses the database at the same time the in-memory transaction happens.
Can somebody explain? I don't understand the principle of Ignite.
Ignite does support transactions. But there are two things to consider:
You need to define your cache as TRANSACTIONAL (see the sketch below); the default is ATOMIC, which does not support transactions.
It does not currently support transactions over SQL; you need to use the key-value API.
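For the first point, a minimal configuration sketch (the cache name is illustrative; Person is the model from the question; the classes are org.apache.ignite.configuration.CacheConfiguration and org.apache.ignite.cache.CacheAtomicityMode):
CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // the default is ATOMIC
IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);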
I’m not sure where you’ve seen it said that Ignite is faster for transactions, but the general principle is that by keeping everything in memory, Ignite can be a lot quicker than legacy databases.
Apache Ignite expects that the cache store does not fail. In your case, the upsert is very fragile and will fail.
At the very least, transactional cache operations imply a transactional cache store: your store needs to observe the ongoing transaction and only commit when told to.
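Concretely, that means keeping one JDBC connection per transaction in the store session and committing it only in sessionEnd(). A minimal sketch of that pattern, assuming the dataSource and Person class from the question (the class name and the connection() helper are illustrative, modeled on the transaction-aware JDBC store in the Ignite examples):
import java.sql.Connection;
import java.sql.SQLException;
import javax.cache.Cache.Entry;
import javax.cache.integration.CacheWriterException;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.cache.store.CacheStoreSession;
import org.apache.ignite.resources.CacheStoreSessionResource;

public class TransactionalPersonStore extends CacheStoreAdapter<Long, Person> {

    @CacheStoreSessionResource
    private CacheStoreSession ses; // injected by Ignite

    private DataSource dataSource; // assumed configured, as in the question

    // One JDBC connection per Ignite transaction, attached to the store session.
    private Connection connection() throws SQLException {
        Connection conn = ses.attachment();
        if (conn == null) {
            conn = dataSource.getConnection();
            conn.setAutoCommit(false); // defer commit/rollback to sessionEnd()
            ses.attach(conn);
        }
        return conn;
    }

    @Override public void write(Entry<? extends Long, ? extends Person> entry) {
        try {
            Connection conn = connection();
            // ... run the update/insert on conn here, but do NOT commit ...
        } catch (SQLException e) {
            throw new CacheWriterException("Failed to write [key=" + entry.getKey() + ']', e);
        }
    }

    @Override public Person load(Long key) {
        return null; // read-through omitted in this sketch
    }

    @Override public void delete(Object key) {
        // delete omitted in this sketch
    }

    // Ignite calls this once when the transaction completes on this node.
    @Override public void sessionEnd(boolean commit) {
        try (Connection conn = ses.attachment()) {
            if (conn != null) {
                if (commit)
                    conn.commit();
                else
                    conn.rollback();
                ses.attach(null); // detach before the connection is closed
            }
        } catch (SQLException e) {
            throw new CacheWriterException("Failed to end store session", e);
        }
    }
}
With a store like this, the database writes are committed or rolled back when the Ignite transaction completes, instead of on every write() call.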

Deadlock with EF 6 entity update but not ExecuteSqlCommand

To handle concurrency in my database:
Client A updates a row
Client B tries to update the same row
Client B needs to wait for Client A to commit his updates
Both the Client A and Client B instances are simulated and use this code:
using (myEntities db = new myEntities())
{
    db.Database.Connection.Open();
    try
    {
        using (var scope = db.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            var test = db.customer_table.Where(x => x.id == 38).FirstOrDefault();
            test.bank_holder_name = "CLIENT NAME XXXX";
            db.SaveChanges(); // <== Client B stops here while Client A is still in progress.
                              // After Client A commits, this line throws a "Deadlock found" error.
            scope.Commit();
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
This is not what I expected: Client B should wait and not be allowed to query any data about row id=38, but somehow it can proceed until SaveChanges and then throws an error at the end.
Thus, I suspected this might be caused by LINQ (an incorrect row/table lock), so
I edited my code as below:
using (myEntities db = new myEntities())
{
    db.Database.Connection.Open();
    try
    {
        using (var scope = db.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            var test = db.Database.ExecuteSqlCommand("Update customer_table set bank_holder_name = 'CLIENT XXXXX' where pu_id = 38");
            // <== Client B stops here and proceeds after Client A completes
            db.SaveChanges();
            scope.Commit();
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
Finally, the transaction works with the code above (not the LINQ version). This is confusing: what does LINQ do behind the scenes that makes the transaction behave so inconsistently?
This is due to the EF code generating two SQL statements: a SELECT for the line:
var test = db.customer_table.Where(x => x.id == 38).FirstOrDefault();
...and a subsequent UPDATE for the SaveChanges() call.
With a serializable isolation level both client A and client B take a shared lock for the duration of the transaction on the record when the SELECT statement is run. Then when one or other of them first tries to perform the UPDATE they cannot get the requisite exclusive lock because the other client has a shared lock on it. The other client itself then tries to obtain an exclusive lock and you have a deadlock scenario.
The ExecuteSqlCommand only requires a single update statement and thus a deadlock does not occur.
The Serializable isolation level can massively reduce concurrency and this example shows exactly why. You'll find that less stringent isolation levels will allow the EF code to work, but at the risk of phantom records, non-repeatable reads etc. These may well however be risks you are willing to take and/or mitigate against in order to improve concurrency.
Don't fetch the entity first. Instead, create a "stub entity" and update that, e.g.
var test = new Customer() { id = 38 };
test.bank_holder_name = "CLIENT NAME XXXX";
db.Entry(test).Property(nameof(Customer.bank_holder_name)).IsModified = true;
db.SaveChanges();
Which translates to
SET NOCOUNT ON;
UPDATE [Customers] SET [bank_holder_name] = @p0
WHERE [id] = @p1;
SELECT @@ROWCOUNT;

Apache DBUtils - Why is a ResultSetHandler needed for insert?

I run an insert statement using Apache DBUtils. However, I am not sure why I have to include ResultSetHandler for this case:
String theQuery = QueryGenerator.insertintoStats();
ResultSetHandler<Object> dummyHandler = new ResultSetHandler<Object>() {
    @Override
    public Object handle(ResultSet rs) throws SQLException
    {
        return null;
    }
};
try
{
    queryRunner.insert(connection, theQuery, dummyHandler, Constants.UUIDSTR.toString(), name, prevbackupTime,
            curbackupTime, updStartTime, delStartTime, bkupType.toString(), rowCount);
}
catch (SQLException e)
{
    LOGGER.info(theQuery.toString());
    LOGGER.error("Caught exception!", e);
}
The same is the case for insertBatch, which also uses a ResultSetHandler. I have resorted to using the batch call for batch queries. Can anyone explain why we need a ResultSetHandler for an insert?
From the documentation (https://commons.apache.org/proper/commons-dbutils/apidocs/):
public <T> T insert(String sql,
ResultSetHandler<T> rsh,
Object... params)
throws SQLException
rsh - The handler used to create the result object from the ResultSet
of auto-generated keys.
If you insert values into a table that generates an id upon insertion, you can retrieve it back; for example, see this answer for how to do it manually: https://stackoverflow.com/a/1915197/947111
You need the ResultSetHandler<T> rsh to iterate over the ResultSet that is returned with the ids that were created.
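In practice you rarely need a hand-written dummy handler: the stock ScalarHandler already returns the first column of the first row of that generated-keys ResultSet. A minimal sketch (the SQL and parameter are illustrative, and the concrete key type depends on the JDBC driver, hence Number):
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.ScalarHandler;

QueryRunner runner = new QueryRunner();
// insert(...) executes the statement and feeds the generated-keys
// ResultSet to the handler, so the new id comes back directly.
Number generatedId = runner.insert(connection,
        "INSERT INTO stats (name) VALUES (?)",
        new ScalarHandler<Number>(),
        "example");
long id = generatedId.longValue();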

How to get the ID of inserted row in JDBC with old driver?

I want to get the auto-increment id of an inserted row. I know that there are a lot of examples of how to do that:
link1
link2
But I use HSQL 1.8.0.10, and the following code:
PreparedStatement ps = conn.prepareStatement("insert into dupa (v1) values(3)", Statement.RETURN_GENERATED_KEYS);
throws an exception:
java.sql.SQLException: This function is not supported
How can I get the id if the driver does not support the above solution? Is there another way to get the auto-increment key of an inserted row? I want to support as many drivers as possible, so I would use the above code in a try block and fall back to another approach in the catch block.
Second question: is it possible that the database itself does not support this feature, so that even with a new driver and the old database it still won't work? I tried to use the HSQL 2.3.2 driver but I cannot connect to the 1.8.0.10 database.
The following code illustrates how to retrieve generated keys from HSQLDB 2.2.9 and later using the included JDBC 4 driver. This method returns a two-element long[]: the first element contains the number of rows that were updated; the second contains the generated key, if any:
static final long[] doUpdate( ... ) {
    final long[] k = new long[] {0, KEY_UNDEFINED};
    PreparedStatement ps = null;
    try {
        ps = CXN.get().prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
        JdbcValue jv;
        for (int i = 0; i < data.size(); i++) {
            jv = data.get(i);
            jv.type().setJdbcParameter(i + 1, ps, jv);
        }
        k[NUM_OF_ROWS] = (long) ps.executeUpdate();
        if (k[NUM_OF_ROWS] > 0L) {
            try (ResultSet rs = ps.getGeneratedKeys()) {
                final String identColName = idCol.colName();
                while (rs.next()) {
                    if (k[ROW_CREATED] != KEY_UNDEFINED) throw new AssertionError();
                    k[ROW_CREATED] = rs.getLong(identColName);
                }
            }
        }
    } catch (SQLException e) { ... }
    finally {
        try { if (ps != null) ps.close(); } catch (SQLException e) { }
    }
    return k;
}
I am unable to say whether this approach will work with old versions of HSQLDB.
You will have to use some vendor-specific solution; e.g., in MySQL you would call the LAST_INSERT_ID() function.
I don't have a working installation of HSQL to test it, but you can give the highest-voted solution from this topic a try: how to return last inserted (auto incremented) row id in HSQL?
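If you need a fallback for the old driver, HSQLDB 1.8 documents a vendor-specific way to read the last identity value generated on the current connection: CALL IDENTITY(). A sketch using the table from your question:
try (Statement st = conn.createStatement()) {
    st.executeUpdate("insert into dupa (v1) values (3)");
    // HSQLDB 1.8: IDENTITY() returns the last identity value
    // generated on this connection.
    try (ResultSet rs = st.executeQuery("CALL IDENTITY()")) {
        if (rs.next()) {
            long id = rs.getLong(1);
        }
    }
}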

SQL ERROR: The connection was not closed. The connection's current state is open

EDIT
After staring at this for 2 days, I do see one issue: I was still opening the original connection. So I changed the inner open statements to conn2.Open. Then I changed the second inner query so that all the variables were numbered 3 instead of 2, making them completely different from the previous query. At that point, I got the error:
There is already an open DataReader associated with this Command which must be closed first.
I took out the inner connections, thinking I could use the outer connection and took out the inner .Close lines, but that also returned an error saying the connection was not closed.
END EDIT
I am writing a script that updates user information with data pulled from other tables, where a user may appear multiple times because of the purchases they made.
So first, the "outside" SQL query pulls some data from the items table, which contains purchaser information as well as category information. For each item, it is going to check its purchaser's information.
Second, the first "inner" sql query pulls category information from the user table. Some code is then run to see if they're already marked as purchasing from the category of the "outside" query. If they are not, it adds the category to a string variable.
Lastly, the second "inner" sql query updates the user table for the current user with the new category list.
I've asked about how to perform queries like this before, but was always given a solution of combining the queries into one. That worked for the other queries, but I cannot do that here. I must iterate through each record of the outer query to perform the necessary functions inside of it. But my issue here is that I get an SQL error saying that the connection was not closed, and it points to the catch of the outer query (for 'conn').
I had tried to set my 2 inner queries so that they used different connection variables (conn2 and conn3), and also different strSQL variables, but that didn't help. And I'm still a newb when it comes to SQL, having programmed using MySQL until this project. Any help would be greatly appreciated.
using (SqlConnection conn = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["connectionName"].ToString()))
using (SqlCommand strSQL = conn.CreateCommand())
{
    strSQL.CommandText = "SELECT field FROM itemsTable";
    try
    {
        conn.Open();
        using (SqlDataReader itemReader = strSQL.ExecuteReader())
        {
            while (itemReader.Read())
            {
                // {Do some stuff here}
                using (SqlConnection conn2 = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["connectionName"].ToString()))
                using (SqlCommand strSQL2 = conn2.CreateCommand())
                {
                    strSQL2.CommandText = "SELECT fields FROM userTable";
                    try
                    {
                        conn2.Open();
                        using (SqlDataReader itemReader2 = strSQL2.ExecuteReader())
                        {
                            while (itemReader2.Read())
                            {
                                // {Do stuff here}
                            }
                            itemReader2.Close();
                        }
                    }
                    catch (Exception e3)
                    {
                        throw new Exception(e3.Message);
                    }
                    finally
                    {
                        conn2.Close();
                    }
                }
                // {Do some more stuff here}
                using (SqlConnection conn2 = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["connectionName"].ToString()))
                using (SqlCommand strSQL2 = conn2.CreateCommand())
                {
                    strSQL2.CommandText = "UPDATE userTable set field='value'";
                    try
                    {
                        conn2.Open();
                        strSQL2.ExecuteNonQuery();
                    }
                    catch (Exception e2)
                    {
                        throw new Exception(e2.Message);
                    }
                    finally
                    {
                        conn2.Close();
                    }
                }
                // {Do even more stuff here.}
            }
            itemReader.Close();
        }
    }
    catch (Exception e1)
    {
        throw new Exception(e1.Message);
    }
    finally
    {
        conn.Close();
    }
}
There's some unusual logic going on with conn.Open(). I see it used several times, but I think you mean to use conn2.Open() in the inner using statements after the first call.