How to get TransactionScope to work on large SQL transactions?

I have an app that needs to write to a table in a SQL database. The IDs used in the IN clause of the SQL string need to be chunked up, because there could potentially be 300k IDs in the IN list and that overflows the maximum string length in the web config. I get the following error when I test the scenario with 200k+ IDs, but not with, say, around 50k IDs:
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
Here is the code:
public int? MatchReconDetail(int ReconRuleNameID, List<int> participants, int? matchID, string userID, int FiscalPeriodID)
{
    DataContext context = new DataContext();
    context.CommandTimeout = 600;
    using (TransactionScope transaction = new TransactionScope())
    {
        Match dbMatch = new Match() { AppUserIDMatchedBy = userID, FiscalPeriodID = FiscalPeriodID, DateCreated = DateTime.Now };
        context.Matches.InsertOnSubmit(dbMatch);
        context.SubmitChanges();
        //string ids = string.Concat(participants.Select(rid => string.Format("{0}, ", rid))).TrimEnd(new char[] { ' ', ',' });
        int listCount = participants.Count;
        int listChunk = listCount / 1000;
        int count = 0;
        int countLimit = 1000;
        for (int x = 0; x <= listChunk; x++)
        {
            count = 1000 * x;
            countLimit = 1000 * (x + 1);
            List<string> chunkList = new List<string>();
            for (int i = count; i < countLimit; i++)
            {
                if (listCount - count < 1000 && listCount - count != 0)
                {
                    int remainder = listCount - count;
                    for (int r = 0; r < remainder; r++)
                    {
                        chunkList.Add(participants[count].ToString());
                        count++;
                    }
                }
                else if (listCount - count >= 1000)
                {
                    chunkList.Add(participants[i].ToString());
                }
            }
            string ids = string.Concat(chunkList.Select(rid => string.Format("{0}, ", rid))).TrimEnd(new char[] { ' ', ',' });
            string sqlMatch = string.Format("UPDATE [Cars3]..[ReconDetail] SET [MatchID] = {0} WHERE [ID] IN ({1})", dbMatch.ID, ids);
            context.ExecuteCommand(sqlMatch);
        }
        matchID = dbMatch.ID;
        context.udpUpdateSummaryCache(FiscalPeriodID, ReconRuleNameID, false);
        transaction.Complete();
    }
    return matchID;
}
}
I have read some articles suggesting timeout issues, so I added the context.CommandTimeout at the top of this function. The error seems to occur about a minute after the event fires.
Any thoughts will be greatly appreciated.

You need to set the timeout on the transaction as well, not only on the command.
You might also have to raise the maximum transaction timeout. The default is 10 minutes; it can be changed in the machine.config file.
Read this blog post for more details.
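For example, a minimal sketch using TransactionOptions from System.Transactions (the 30-minute value is only illustrative):

using System;
using System.Transactions;

// Sketch: give the scope an explicit timeout instead of relying on the defaults.
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromMinutes(30) // pick whatever your batch actually needs
};

using (var transaction = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... SubmitChanges / chunked UPDATE commands go here ...
    transaction.Complete();
}

// The effective timeout is still capped by the machine-wide maximum
// (system.transactions/machineSettings maxTimeout in machine.config), which defaults to 10 minutes.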

I solved this by using:
using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required, new TimeSpan(0,30,0)))


Does SQL batch query execution involve multiple exchanges of data between server and client?

From what I have read in multiple sources online, batch query execution groups multiple statements together and executes them at once, thereby eliminating multiple back-and-forth round trips between client and server.
Some sources that claim this are:
https://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm#:~:text=Batch%20Processing%20allows%20you%20to,communication%20overhead%2C%20thereby%20improving%20performance.
http://tutorials.jenkov.com/jdbc/batchupdate.html
https://www.baeldung.com/jdbc-batch-processing
All of these talk about a single network trip, etc. However, going through the source code of H2 and SQLite, it looks like they execute the statements one by one, albeit with autocommit disabled.
E.g. SQLite:
final synchronized int[] executeBatch(long stmt, int count, Object[] vals, boolean autoCommit) throws SQLException {
    if (count < 1) {
        throw new SQLException("count (" + count + ") < 1");
    }
    final int params = bind_parameter_count(stmt);
    int rc;
    int[] changes = new int[count];
    try {
        for (int i = 0; i < count; i++) {
            reset(stmt);
            for (int j = 0; j < params; j++) {
                rc = sqlbind(stmt, j, vals[(i * params) + j]);
                if (rc != SQLITE_OK) {
                    throwex(rc);
                }
            }
            rc = step(stmt);
            if (rc != SQLITE_DONE) {
                reset(stmt);
                if (rc == SQLITE_ROW) {
                    throw new BatchUpdateException("batch entry " + i + ": query returns results", changes);
                }
                throwex(rc);
            }
            changes[i] = changes();
        }
    }
    finally {
        ensureAutoCommit(autoCommit);
    }
    reset(stmt);
    return changes;
}
E.g. H2:
public int[] executeBatch() throws SQLException {
    try {
        debugCodeCall("executeBatch");
        if (batchParameters == null) {
            // Empty batch is allowed, see JDK-4639504 and other issues
            batchParameters = Utils.newSmallArrayList();
        }
        batchIdentities = new MergedResult();
        int size = batchParameters.size();
        int[] result = new int[size];
        SQLException first = null;
        SQLException last = null;
        checkClosedForWrite();
        for (int i = 0; i < size; i++) {
            Value[] set = batchParameters.get(i);
            ArrayList<? extends ParameterInterface> parameters =
                    command.getParameters();
            for (int j = 0; j < set.length; j++) {
                Value value = set[j];
                ParameterInterface param = parameters.get(j);
                param.setValue(value, false);
            }
            try {
                result[i] = executeUpdateInternal();
                // Cannot use own implementation, it returns batch identities
                ResultSet rs = super.getGeneratedKeys();
                batchIdentities.add(((JdbcResultSet) rs).result);
            } catch (Exception re) {
                SQLException e = logAndConvert(re);
                if (last == null) {
                    first = last = e;
                } else {
                    last.setNextException(e);
                }
                result[i] = Statement.EXECUTE_FAILED;
            }
        }
        batchParameters = null;
        if (first != null) {
            throw new JdbcBatchUpdateException(first, result);
        }
        return result;
    } catch (Exception e) {
        throw logAndConvert(e);
    }
}
From the above code I see that there are multiple calls to the database, each with its own result set. How does batch execution actually work?

Cannot add row without complete selection of batch/serial numbers

ERROR: (-4014) Cannot add row without complete selection of batch/serial numbers.
The DI API function SaveDraftToDocument() works fine on an MS SQL database but not on SAP HANA.
I am posting a Delivery document with serial numbers.
SAPbobsCOM.Documents oDrafts;
oDrafts = (SAPbobsCOM.Documents)oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.oDrafts);
oDrafts.GetByKey(Convert.ToInt32(EditText27.Value));
var count = oDrafts.Lines.Count;
var linenum = oDrafts.Lines.LineNum;
//Validation
#region
var RsRecordCount = (SAPbobsCOM.Recordset)oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoRecordset);
var sQryRecordCount = String.Format("Select * from \"SANTEXDBADDON\".\"#TEMPITEMDETAILS\" where \"U_DraftNo\" = '{0}'", EditText27.Value);
RsRecordCount.DoQuery(sQryRecordCount);
#endregion
if (count == RsRecordCount.RecordCount)
{
    //LINES
    string ItemCode = "", WhsCode = ""; double Quantity = 0; int index = 0;
    for (int i = 0; i < oDrafts.Lines.Count; i++)
    {
        oDrafts.Lines.SetCurrentLine(index);
        ItemCode = oDrafts.Lines.ItemCode;
        //SERIAL NUMBERS
        var RsSerial = (SAPbobsCOM.Recordset)oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoRecordset);
        string table = "\"#TEMPSERIALS\"";
        var sQrySerial = String.Format(
            "Select \"U_ItemCode\" , \"U_DistNumber\" from \"SANTEXDBADDON\".\"#TEMPSERIALS\" where " +
            "\"U_DraftNo\" = '{0}' and \"U_ItemCode\" = '{1}'", EditText27.Value, ItemCode);
        RsSerial.DoQuery(sQrySerial);
        int serialindex = 1, lineindex = 0;
        #region
        if (RsSerial.RecordCount > 0)
        {
            while (!RsSerial.EoF)
            {
                //OSRN SERIALS
                var RsSerialOSRN = (SAPbobsCOM.Recordset)oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoRecordset);
                var sQrySerialOSRN = String.Format(
                    "Select * from OSRN where \"DistNumber\" = '{0}' and \"ItemCode\" = '{1}'",
                    RsSerial.Fields.Item("U_DistNumber").Value.ToString(), ItemCode);
                RsSerialOSRN.DoQuery(sQrySerialOSRN);
                oDrafts.Lines.SerialNumbers.SetCurrentLine(0);
                oDrafts.Lines.SerialNumbers.BaseLineNumber = oDrafts.Lines.LineNum;
                oDrafts.Lines.SerialNumbers.SystemSerialNumber =
                    Convert.ToInt32(RsSerialOSRN.Fields.Item("SysNumber").Value.ToString());
                oDrafts.Lines.SerialNumbers.ManufacturerSerialNumber =
                    RsSerialOSRN.Fields.Item("DistNumber").Value.ToString();
                oDrafts.Lines.SerialNumbers.InternalSerialNumber =
                    RsSerialOSRN.Fields.Item("DistNumber").Value.ToString();
                oDrafts.Lines.SerialNumbers.Quantity = 1;
                if (RsSerial.RecordCount != serialindex)
                {
                    Application.SBO_Application.StatusBar.SetText("INTERNAL NO " + oDrafts.Lines.SerialNumbers.InternalSerialNumber, SAPbouiCOM.BoMessageTime.bmt_Long, SAPbouiCOM.BoStatusBarMessageType.smt_Success);
                    oDrafts.Lines.SerialNumbers.Add();
                    serialindex++;
                    lineindex++;
                }
                RsSerial.MoveNext();
            }
        }
        #endregion
        index++;
    }
    var status = oDrafts.SaveDraftToDocument();
    if (status == 0)
    {
        oDrafts.Remove();
        Application.SBO_Application.StatusBar.SetText("Delivery Posted Successfully !", SAPbouiCOM.BoMessageTime.bmt_Long, SAPbouiCOM.BoStatusBarMessageType.smt_Success);
    }
    else
    {
        int code = 0; string message = "";
        oCompany.GetLastError(out code, out message);
        Application.SBO_Application.SetStatusBarMessage(message, SAPbouiCOM.BoMessageTime.bmt_Medium, true);
    }
}
The error you have posted explains the problem. You are trying to deliver products but have not included all of the serial/batch numbers.
I don't think there's enough information to be sure where this problem happens, but here are some pointers:
You are reading the serial numbers from a custom table. Are the values you are reading valid? For example, could another user have added them to a different order? Could the values be for a different product?
Are you specifying the correct quantity of serial numbers? Is it possible that the item quantity on the line is more than the number of serial numbers you are adding? (See the sketch after these pointers.)
Believe the error message until you can prove it's wrong. It doesn't seem like this is a HANA issue (we use HANA extensively); it's a logical problem with the data you are providing.
You may want to capture more debugging information to help you if you can't easily identify where the problem is.
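For the second pointer, here is a minimal sketch of a pre-check you could run before calling SaveDraftToDocument(). The variable names (oDrafts, RsSerial, etc.) follow the question's code, and the check itself is an assumption about where the mismatch might come from, not part of the original post:

// Sketch: verify each draft line has as many serial numbers selected as its quantity.
for (int line = 0; line < oDrafts.Lines.Count; line++)
{
    oDrafts.Lines.SetCurrentLine(line);

    // RsSerial is assumed to already hold the serials queried for this line's item,
    // the same way the question builds it from #TEMPSERIALS.
    if (RsSerial.RecordCount != (int)oDrafts.Lines.Quantity)
    {
        Application.SBO_Application.StatusBar.SetText(
            String.Format("Line {0} ({1}): {2} serials selected but quantity is {3}",
                oDrafts.Lines.LineNum, oDrafts.Lines.ItemCode,
                RsSerial.RecordCount, oDrafts.Lines.Quantity),
            SAPbouiCOM.BoMessageTime.bmt_Long,
            SAPbouiCOM.BoStatusBarMessageType.smt_Error);
    }
}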

Topic views do not show up in jTable

I am trying to code a forum using Java Swing. First, a click on the jTable leads to eForumContent.class, which passes the id along as well.
jTable.addMouseListener(new java.awt.event.MouseAdapter() {
    public void mouseClicked(java.awt.event.MouseEvent e) {
        int id = 0;
        eForumTopics topics = new eForumTopics(id);
        topics.retrieveDiscussion();
        eForumThreadContent myWindow = new eForumThreadContent(id);
        myWindow.getJFrame().setVisible(true);
    }
});
I initialize the id first, but the id column in my database table is set to auto number. Then I call the retrieveDiscussion method to get the id from the database and pass it to eForumThreadContent. Here is the code for the retrieveDiscussion method.
public boolean retrieveDiscussion() {
    boolean success = false;
    ResultSet rs = null;
    DBController db = new DBController();
    db.setUp("IT Innovation Project");
    String dbQuery = "SELECT * FROM forumTopics WHERE topic_id = '" + id
            + "'";
    rs = db.readRequest(dbQuery);
    db.terminate();
    return success;
}
}
Then, inside eForumThreadContent, I want to display the content of the chosen thread using a table, so I use the id that was passed in and insert it into my SQL statement. Here is the code.
public eForumThreadContent(int id) {
    this.id = id;
}

if (jTable == null) {
    Vector columnNames = new Vector(); // Vector class allows dynamic
                                       // array of objects
    Vector data = new Vector();
    try {
        DBController db = new DBController();
        db.setUp("IT Innovation Project");
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver").newInstance();
        String dsn = "IT Innovation Project";
        String s = "jdbc:odbc:" + dsn;
        Connection con = DriverManager.getConnection(s, "", "");
        String sql = "Select topic_title,topic_description,topic_by from forumTopics WHERE topic_id = '" + id + "'";
        java.sql.Statement statement = con.createStatement();
        ResultSet resultSet = statement.executeQuery(sql);
        ResultSetMetaData metaData = resultSet.getMetaData();
        int columns = metaData.getColumnCount();
        for (int i = 1; i <= columns; i++) {
            columnNames.addElement(metaData.getColumnName(i));
        }
        while (resultSet.next()) {
            Vector row = new Vector(columns);
            for (int i = 1; i <= columns; i++) {
                row.addElement(resultSet.getObject(i));
            }
            data.addElement(row);
        }
        resultSet.close();
        ((Connection) statement).close();
    } catch (Exception e) {
        System.out.println(e);
    }
    jTable = new JTable(data, columnNames);
    TableColumn column;
    for (int i = 0; i < jTable.getColumnCount(); i++) {
        column = jTable.getColumnModel().getColumn(i);
        if (i == 1) {
            column.setPreferredWidth(400); // second column is bigger
        } else {
            column.setPreferredWidth(200);
        }
    }
    String header[] = { "Title", "Description", "Posted by" };
    for (int i = 0; i < jTable.getColumnCount(); i++) {
        TableColumn column1 = jTable.getTableHeader().getColumnModel()
                .getColumn(i);
        column1.setHeaderValue(header[i]);
    }
    jTable.getTableHeader().setFont(new Font("Dialog", Font.PLAIN, 20));
    jTable.getTableHeader().setForeground(Color.white);
    jTable.getTableHeader().setBackground(new Color(102, 102, 102));
    jTable.setEnabled(false);
    jTable.setRowHeight(100);
    jTable.getRowHeight();
    jTable.setFont(new Font("Dialog", Font.PLAIN, 18));
}
return jTable;
}
However, the table inside eForumThreadContent is empty even when I click on a certain thread. It also gives me an error message:
[Microsoft][ODBC Microsoft Access Driver] Data type mismatch in criteria expression.
at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
at sun.jdbc.odbc.JdbcOdbc.SQLExecDirect(Unknown Source)
at sun.jdbc.odbc.JdbcOdbcStatement.execute(Unknown Source)
at sun.jdbc.odbc.JdbcOdbcStatement.executeQuery(Unknown Source)
at DBController.database.DBController.readRequest(DBController.java:27)
at kioskeForum.entity.eForumTopics.retrieveDiscussion(eForumTopics.java:67)
at kioskeForum.ui.eForumDiscussion$3.mouseClicked(eForumDiscussion.java:296)
I researched online to get an idea of how to display topic views using the id, but my table does not show anything. Can somebody enlighten me on how to fix it, or suggest another way to display topic views in Java Swing? Thanks in advance.

How can I do this all in one query? NHibernate

I have a list of IDs and I want to get all the rows back in one query, as a list of objects (so a List of Products or whatever).
I tried
public List<TableA> MyMethod(List<string> keys)
{
    var query = "SELECT * FROM TableA WHERE Keys IN (:keys)";
    var a = session.CreateQuery(query).SetParameter("keys", keys).List();
    return a; // a is a IList but not of TableA. So what do I do now?
}
but I can't figure out how to return it as a list of objects. Is this the right way?
List<TableA> result = session.CreateQuery(query)
    .SetParameterList("keys", keys)
    .List<TableA>();
However, there could be a limitation with this query if the number of ":keys" values exceeds 1000 (in the case of Oracle; not sure about other databases), so I would recommend using ICriteria instead of CreateQuery / native SQL.
Do something like this:
[TestFixture]
public class ThousandIdsNHibernateQuery
{
    [Test]
    public void TestThousandIdsNHibernateQuery()
    {
        //Keys contains 1000 ids not included here.
        var keys = new List<decimal>();
        using (ISession session = new Session())
        {
            var tableCirt = session.CreateCriteria(typeof(TableA));
            if (keys.Count > 1000)
            {
                var listsList = new List<List<decimal>>();
                //Get first 1000.
                var first1000List = keys.GetRange(0, 1000);
                //Split the remaining keys into chunks of 1000.
                for (int i = 1000; i < keys.Count; i++)
                {
                    if ((i + 1) % 1000 == 0)
                    {
                        var newList = new List<decimal>();
                        newList.AddRange(keys.GetRange(i - 999, 1000));
                        listsList.Add(newList);
                    }
                }
                ICriterion firstExp = Expression.In("Key", first1000List);
                ICriterion postExp = null;
                foreach (var list in listsList)
                {
                    postExp = Expression.In("Key", list);
                    tableCirt.Add(Expression.Or(firstExp, postExp));
                    firstExp = postExp;
                }
                tableCirt.Add(postExp);
            }
            else
            {
                tableCirt.Add(Expression.In("Key", keys));
            }
            var results = tableCirt.List<TableA>();
        }
    }
}
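Note that the chunk-splitting above is easy to get wrong: as written it appears to drop a trailing partial chunk, and adding several Expression.Or criteria through separate Add() calls combines them with AND. A more compact sketch of the same idea, assuming a mapped "Key" property and NHibernate's Restrictions API:

// Sketch: split the keys into chunks of at most 1000 and OR the IN restrictions together.
var criteria = session.CreateCriteria(typeof(TableA));
var disjunction = Restrictions.Disjunction();

for (int offset = 0; offset < keys.Count; offset += 1000)
{
    var chunk = keys.GetRange(offset, Math.Min(1000, keys.Count - offset));
    disjunction.Add(Restrictions.In("Key", chunk));
}

criteria.Add(disjunction);
var results = criteria.List<TableA>();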

NHibernate: Saving different types of objects in the same session breaks batching

newbie here, sorry if this is an obvious question.
It seems that saving different types of objects in the same session breaks batching, causing a significant performance drop.
The ID generator is set to Increment (as Diego Mijelshon advised, I tried hilo("100"), but unfortunately it's the same issue: Test1() is still about 5 times slower than Test2()):
public class CustomIdConvention : IIdConvention
{
    public void Apply(IIdentityInstance instance)
    {
        instance.GeneratedBy.Increment();
    }
}
AdoNetBatchSize is set to 1000:
MsSqlConfiguration.MsSql2008
    .ConnectionString(connectionString)
    .AdoNetBatchSize(1000)
    .Cache(x => x
        .UseQueryCache()
        .ProviderClass<HashtableCacheProvider>())
    .ShowSql();
These are the models:
public class TestClass1
{
    public virtual int Id { get; private set; }
}

public class TestClass2
{
    public virtual int Id { get; private set; }
}
These are the test methods. Test1() takes 62 seconds, Test2() takes only 11 seconds (as Phill advised, I tried stateless sessions, but unfortunately the issue is the same):
[TestMethod]
public void Test1()
{
    int count = 50 * 1000;
    using (var session = SessionFactory.OpenSession())
    {
        using (var transaction = session.BeginTransaction())
        {
            for (int i = 0; i < count; i++)
            {
                var x = new TestClass1();
                var y = new TestClass2();
                session.Save(x);
                session.Save(y);
            }
            transaction.Commit();
        }
    }
}

[TestMethod]
public void Test2()
{
    int count = 50 * 1000;
    using (var session = SessionFactory.OpenSession())
    {
        using (var transaction = session.BeginTransaction())
        {
            for (int i = 0; i < count; i++)
            {
                var x = new TestClass1();
                session.Save(x);
            }
            transaction.Commit();
        }
    }
    using (var session = SessionFactory.OpenSession())
    {
        using (var transaction = session.BeginTransaction())
        {
            for (int i = 0; i < count; i++)
            {
                var y = new TestClass2();
                session.Save(y);
            }
            transaction.Commit();
        }
    }
}
Any ideas?
Thanks!
Update:
The test project can be downloaded from here. You need to change the connectionString in the Main method. I changed all sessions to stateless sessions.
My results: Test1 = 59.11, Test2 = 7.60, Test3 = 7.72. Test1 is 7.7 times slower than Test2 & Test3!
Do not use increment. It's the worst possible generator.
Try changing it to HiLo.
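For example, with the Fluent NHibernate convention from the question, the switch could look like this (a sketch; "100" is the max-lo value the question already mentions trying):

public class CustomIdConvention : IIdConvention
{
    public void Apply(IIdentityInstance instance)
    {
        // hi/lo generates ids in the application from pre-allocated ranges,
        // so no per-row database call is needed to obtain a key.
        instance.GeneratedBy.HiLo("100");
    }
}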
Update:
It looks like the problem occurs when alternating saves of different entities, regardless of whether the sessions/transactions are separated or not.
This produces similar results to the second test method:
[TestMethod]
public void Test3()
{
    int count = 50 * 1000;
    using (var session = SessionFactory.OpenSession())
    {
        using (var transaction = session.BeginTransaction())
        {
            for (int i = 0; i < count; i++)
            {
                var x = new TestClass1();
                session.Save(x);
            }
            for (int i = 0; i < count; i++)
            {
                var y = new TestClass2();
                session.Save(y);
            }
            transaction.Commit();
        }
    }
}
My guess, without looking at NH's sources, is that it preserves the order because of possible relationships between the entities, even when there are none.
When you run test2 and test3, the inserts are batched together.
When you run test1, where you alternate the inserts, the inserts are issued as separate statements and are not batched together.
I found this out by profiling all three tests.
So as per Diego's answer, it must preserve the order that you're inserting, and batch them together.
I wrote a 4th test: I set the batch size to 10, then alternated between TestClass1 and TestClass2 so that I was doing 5 of TestClass1 and then 5 of TestClass2, to hit the batch size.
This pushed out batches of 5 in the order they were processed.
public void Test4()
{
    int count = 10;
    using (var session = SessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        for (int i = 0; i < count; i++)
        {
            if (i % 2 == 0)
            {
                for (int j = 0; j < 5; j++)
                {
                    var x = new TestClass1();
                    session.Save(x);
                }
            }
            else
            {
                for (int j = 0; j < 5; j++)
                {
                    var y = new TestClass2();
                    session.Save(y);
                }
            }
        }
        transaction.Commit();
    }
}
Then I changed it to insert 3 at a time instead of 5. The batches were in multiples of 3, so what must be happening is that the batch size allows a batch of one entity type to grow up to the specified amount, but only consecutive saves of the same type are grouped together, while alternating causes separate insert statements.