LOCK_MODE_WAIT Configuration in Geronimo Datasource for Informix DB

The application we are currently working on, which uses Informix DB and the Geronimo app server, is throwing a lock timeout expired exception because one of the "READ" operations takes a long time to complete its transaction while another UPDATE operation is changing the same record.
The approach was to increase the lock wait timeout value so that transactions can wait for existing transactions to complete.
The following configuration was made in the datasource definition for the Informix database in the Geronimo console:
IfxIFX_LOCK_MODE_WAIT - 3000
However, we are still getting the lock wait timeout exception.
Is there any other solution to increase the lock wait timeout value?
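For what it's worth, the lock wait can also be set per session with Informix's SET LOCK MODE statement, issued on the same JDBC connection the transaction uses. Below is a minimal sketch; the URL and credentials are placeholders, and in Geronimo the connection would normally come from the configured datasource rather than DriverManager:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LockWaitExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; replace with the datasource lookup in a real deployment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:informix-sqli://dbhost:9088/mydb:INFORMIXSERVER=ol_server", "user", "password");
             Statement stmt = conn.createStatement()) {
            // Wait up to 60 seconds for conflicting locks instead of failing immediately.
            stmt.execute("SET LOCK MODE TO WAIT 60");
            // ... run the long-running READ and the UPDATE work on this connection ...
        }
    }
}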

Related

Time Limit When The Server Connection Is Timed Out? ActiveJDBC

In ActiveJDBC, is there functionality that allows us to set a connection timeout limit?
It would work like this: whenever the user deletes (or inserts, updates, etc.) a large number of records and the server's connection is suddenly lost, the transaction is rolled back if the waiting time exceeds the defined timeout limit.
Regards, Vincent
Found this: Base.connection().setNetworkTimeout(); but there is no documentation on it for ActiveJDBC. Does this still work?
This method is not a function of the framework. The code in question:
Base.connection().setNetworkTimeout()
relates to java.sql.Connection, which is part of JDK/JDBC:
https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#setNetworkTimeout-java.util.concurrent.Executor-int-
As such, you can find documentation there. However, I would recommend that you NOT track timeouts but instead run your statements under transactions. That way, any time you hit an error, including a network failure, your data integrity will be preserved. See: http://javalite.io/transactions.
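A minimal sketch of what that looks like with ActiveJDBC's Base class; the driver, URL, and credentials below are placeholders:
import org.javalite.activejdbc.Base;

public class BatchUpdateExample {
    public static void main(String[] args) {
        // Placeholder driver, URL, and credentials.
        Base.open("org.postgresql.Driver", "jdbc:postgresql://localhost/test", "user", "password");
        try {
            Base.openTransaction();
            // ... delete / insert / update the large batch of records here ...
            Base.commitTransaction();
        } catch (Exception e) {
            // Any failure, including a lost connection, rolls the work back so data integrity is preserved.
            Base.rollbackTransaction();
        } finally {
            Base.close();
        }
    }
}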

Making WCF requests in a loop results in a Timeout Exception

I have two DBs, offline and online.
On the system where the offline DB exists, I've created a scheduled application that pulls records from the offline DB and pushes them to the online DB using WCF and Entity Framework.
The application pulls a batch of 10 records at a time and then pushes them.
Most of the time there is a great number of records in the offline DB that need to be put into the online DB.
So a loop executes that:
1. Pulls the records from the offline DB.
2. Pushes them to WCF.
3. WCF calls the DAL layer, and those records are inserted into the online DB.
4. After the request completes, marks that batch of records as uploaded in the offline DB.
It runs fine a couple of times, then it gives this error:
{"Connection Timeout Expired. The timeout period elapsed during the post-login phase. The connection could have timed out while waiting for server to complete the login process and respond; Or it could have timed out while attempting to create multiple active connections. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=3977; handshake=7725; [Login] initialization=0; authentication=0; [Post-Login] complete=3019; "}
Why does this happen and how do I resolve this?
In point 3 you mentioned
WCF calls the DAL layer, and those records are inserted into the online DB.
How are you inserting records into your database?
You must dispose of your context so it is available for the next request you are about to make.
I'd prefer a "using" statement, something like this:
using (var context = new YourContext())
{
    // The context is disposed at the end of the block, freeing its connection for the next request.
    context.methodThatIsUsedToInsertRecords();
}
Try this

Intermittent SqlException timeout expired errors

We have an app with around 200-400 users, and once a day or every other day we get the dreaded SQL exception:
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding".
Once we get this, it happens several times for different users, and then all users are stuck; they can't perform any operations.
I don't have the full specs of the boxes right in front of me but we have:
IIS and SQL Server running on separate boxes
each box has 64gb of memory with multiple cores
We get nothing in the SQL Server logs (as would be expected), and our application catches the SqlException, so we just see the timeout error there, on an UPDATE. In the database we have only a few key tables, and the timeout happens on one of the tables, which has about 30k rows. We ran Profiler on the queries the UI issues against a copy of production to gauge their size and made sure we have all of the right indexes (clustered/non-clustered). In a local environment (smaller box, same size database) everything runs fast, and for most of the day the system runs fast for the users. The exact same query (which hit the timeout error in production) ran in less than a second.
We did change our command timeout from 30 seconds to 300 seconds (I know that 0 is unlimited and I guess we should use that, but it seems like that's just masking the real problem).
We had the profiler running in production, but unfortunately it wasn't fully enabled the last time it happened. We are setting it up correctly now.
Any ideas on what this might be?

How to globally set timeout in NHibernate

I periodically receive the following exception.
System.Data.SqlClient.SqlException: Timeout expired. The timeout
period elapsed prior to completion of the operation or the server is
not responding.
I've added the following to my criteria to increase the timeout time.
.SetTimeout(180)
1) Is there a way to add this to my NHibernate configuration so that 180 is the default time?
2) What are the consequences of increasing the timeout? Will this increase or decrease the likelihood of deadlocks?
command_timeout - Specify the default timeout of IDbCommands generated by NHibernate
Taken from Table 3.1. NHibernate ADO.NET Properties in
http://www.nhforge.org/doc/nh/en/index.html#configuration-hibernatejdbc
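For example, assuming the usual hibernate.cfg.xml layout, that property would look something like this (180 mirrors the SetTimeout value above):
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- Default timeout, in seconds, for IDbCommands created by NHibernate -->
    <property name="command_timeout">180</property>
  </session-factory>
</hibernate-configuration>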
Ad 2: The timeout does not help you with deadlocks. The timeout is the time the client waits for the DB's response; when the time runs out, an error is raised.
Deadlocks, on the other hand, arise when one transaction holds a lock and is waiting for another lock owned by a second transaction, while that second transaction is waiting for the resource locked by the first. Note that when the DB detects this situation it raises an error immediately, not after any timeout.
When you increase the timeout, the only thing you allow is longer waiting while some other transaction holds a lock you are waiting for.
E.g., suppose a client deploys a larger data set into your system and takes locks at the table level, and that deployment operation takes 60 seconds. Another client that reads data from the table is then blocked for 60 seconds before it can read. With a timeout of 30 seconds this always fails; the same situation with a 90-second timeout will work.
It depends on the situation, but you should keep transactions as small as possible to get better latency and throughput.

NHibernate: Transactions are not closing

We have created an application using Silverlight and NHibernate, with an SOA architecture.
When I run the application, it creates NHibernate sessions, which I can see in the SQL Server Activity Monitor. But after the transaction completes, the session is still not closed [I can see the session in sleep mode]; it closes roughly 5-10 minutes later [by default].
We are using the NHibernateDataContext object.
Before the start of the business action, it calls EnlistTransaction, and after completion it calls CompleteTransaction. But I can still see the sleeping session in the SQL Server Activity Monitor.
Does anyone have any idea how to resolve this issue?
You need to use something like NHibernate Profiler or SQL Profiler to see in more detail what statements are executing against your database. Most likely the transaction is being committed as you expect but the connection is being held open because of connection-pooling.