I periodically receive the following exception.
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I've added the following to my criteria to increase the timeout.
.SetTimeout(180)
1) Is there a way to add this to my NHibernate configuration so that 180 is the default timeout?
2) What are the consequences of increasing the timeout? Will this increase or decrease the likelihood of deadlocks?
command_timeout - Specify the default timeout of IDbCommands generated by NHibernate
Taken from Table 3.1. NHibernate ADO.NET Properties in
http://www.nhforge.org/doc/nh/en/index.html#configuration-hibernatejdbc
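For example, a minimal sketch of setting this in hibernate.cfg.xml (the property name and session-factory element follow the standard NHibernate configuration schema; 180 is just the value from the question):

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- default ADO.NET command timeout, in seconds -->
    <property name="command_timeout">180</property>
  </session-factory>
</hibernate-configuration>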
Ad 2. The timeout does not help you with deadlocks. The timeout is simply how long the client waits for a DB response; when it elapses, the client receives an error.
Deadlocks, on the other hand, arise when one transaction holds a lock and waits for another lock owned by a second transaction, while that second transaction is itself waiting for the resource locked by the first. Note that when the DB detects this situation it raises an error immediately, not after any timeout.
When you increase the timeout, the only thing you allow is a longer wait while another transaction holds a lock you are waiting for.
E.g. suppose a client deploys a large amount of data into your system and takes a table-level lock, and the deployment takes 60 seconds. Another client that reads data from that table is blocked for 60 seconds before it can read. With a timeout of 30 seconds this always fails; the same situation with a 90 second timeout works. See the sketch below.
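To make that concrete, a minimal T-SQL sketch of the same situation (the table dbo.ImportedData is hypothetical; SET LOCK_TIMEOUT is the SQL Server session-level equivalent of the client-side command timeout):

-- Session 1: the deployment client holds locks for a long time
BEGIN TRAN;
UPDATE dbo.ImportedData SET Status = 'loading';  -- locks the rows being loaded
WAITFOR DELAY '00:01:00';                         -- simulates a 60-second deployment
COMMIT;

-- Session 2: the reading client
SET LOCK_TIMEOUT 30000;    -- 30 seconds: this SELECT times out while session 1 holds the lock
-- SET LOCK_TIMEOUT 90000; -- 90 seconds: the same SELECT would succeed
SELECT * FROM dbo.ImportedData WHERE Status = 'loading';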
It depends on the situation, but you should keep transactions as small as possible to get better latency and throughput.
Related
In what situation would a simple update statement
UPDATE [BasicUserTable]
SET [DateTimeCol] = '9/6/2022'
WHERE [UniqueIntPKCol] = 123
take 1m 30s to complete, AND THEN all subsequent updates using the same statement and lines of code (except for the id and datetime) execute in < 100 ms?
The table has fewer than 10,000 records and a standard auto-incrementing int primary key.
Background: our app was timing out (standard 30 sec timeout) while it waited for SQL Server to execute the statement above. We manually tried the statement using SSMS on the same server, and it took ~1m 30s to execute.
Immediately afterward, all other attempts to run the same code were blazing fast as expected. We can't walk past this issue without knowing the real reason that it happened, so we can prevent it in the future.
After looking at the logs, there were no apparent blocking locks on the records, nor code that could intervene and cause the issue.
The SQL logs did not contain any errors.
Microsoft.EntityFrameworkCore.DbUpdateException
Inner exception: Microsoft.Data.SqlClient.SqlException: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Has anyone run into this before, or do you have a plausible working theory? (index rebuild, caching, etc.)
A lock wait is the only thing I can imagine that would cause this.
After looking at logs, there were no apparent blocking locks
Lock waits don't cause any logging. You might see logs if you configure the blocked process report, but it's not on by default.
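If you want that logging, a hedged sketch of enabling the blocked process report (a server-wide setting; 20 seconds is an arbitrary example threshold, and you still need an Extended Events or trace session to actually capture the reports):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- raise a blocked process report for any block lasting longer than 20 seconds
EXEC sp_configure 'blocked process threshold (s)', 20;
RECONFIGURE;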
Turning on the Query Store can help by tracking query resource utilization and waits.
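Query Store is enabled per database; a minimal sketch (the database name is a placeholder):

ALTER DATABASE [YourDb] SET QUERY_STORE = ON;
-- aggregated wait statistics are exposed in sys.query_store_wait_stats (SQL Server 2017+)
SELECT TOP (20) * FROM sys.query_store_wait_stats ORDER BY total_query_wait_time_ms DESC;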
Although extremely unlikely here, file growth can also cause sporadic delays, as the statement that needs the additional log file or data file space has to wait for the file to be resized.
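A quick, hedged way to see whether autogrowth could even be in play is to check each file's growth settings for the database in question:

SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1 THEN CONCAT(growth, ' %')
            ELSE CONCAT(growth * 8 / 1024, ' MB') END AS growth_increment
FROM sys.database_files;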
In ActiveJDBC, is there functionality that allows us to set a connection timeout limit?
It would work like this: whenever the user deletes (or inserts, updates, etc.) a large amount of data and the server's connection is suddenly lost, the transaction is rolled back if the waiting time exceeds the defined timeout limit.
Regards, Vincent
Found this: Base.connection().setNetworkTimeout(); but there is no documentation on it in ActiveJDBC. Does this still work?
This method is not a function of the framework. The code in question:
Base.connection().setNetworkTimeout()
relates to java.sql.Connection, which is part of JDK/JDBC:
https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#setNetworkTimeout-java.util.concurrent.Executor-int-
As such, you can find the documentation there. However, I would recommend that you NOT track timeouts but instead run your statements inside transactions. That way, any time you hit an error, including a network failure, your data integrity is preserved. See: http://javalite.io/transactions.
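A minimal sketch of that transaction pattern (the connection parameters and the work inside the transaction are hypothetical; Base.openTransaction / commitTransaction / rollbackTransaction are the ActiveJDBC calls documented at the link above):

import org.javalite.activejdbc.Base;

public class TransferExample {
    public static void main(String[] args) {
        Base.open("com.mysql.cj.jdbc.Driver", "jdbc:mysql://localhost/demo", "user", "pass");
        try {
            Base.openTransaction();
            // ... large delete/insert/update work on your models goes here ...
            Base.commitTransaction();
        } catch (Exception e) {
            // any failure before the commit, including a dropped connection
            // surfacing as an exception, means the work is not committed
            Base.rollbackTransaction();
        } finally {
            Base.close();
        }
    }
}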
I am a newbie to Cassandra - I have been searching for information related to commits and crash recovery in Cassandra on a single node, and I am hoping someone can clarify the details.
I am testing Cassandra, so I set it up on a single node. I am using the DataStax stress tool to insert millions of rows. What happens if there is an electrical failure or system shutdown? Will all the data that was in Cassandra's memory get written to disk when Cassandra restarts (I guess the commit log acts as an intermediary)? How long does this process take?
Thanks!
Cassandra's commit log gives Cassandra durable writes. When you write to Cassandra, the write is appended to the commit log before the write is acknowledged to the client. This means every write that the client receives a successful response for is guaranteed to be written to the commit log. The write is also made to the current memtable, which will eventually be written to disk as an SSTable when large enough. This could be a long time after the write is made.
However, the commit log is not immediately synced to disk for performance reasons. The default is periodic mode (set by the commitlog_sync param in cassandra.yaml) with a period of 10 seconds (set by commitlog_sync_period_in_ms in cassandra.yaml). This means the commit log is synced to disk every 10 seconds. With this behaviour you could lose up to 10 seconds of writes if the server loses power. If you had multiple nodes in your cluster and used a replication factor of greater than one you would need to lose power to multiple nodes within 10 seconds to lose any data.
If this risk window isn't acceptable, you can use batch mode for the commit log. This mode won't acknowledge writes to the client until the commit log has been synced to disk. The time window is set by commitlog_sync_batch_window_in_ms, default is 50 ms. This will significantly increase your write latency and probably decrease the throughput as well so only use this if the cost of losing a few acknowledged writes is high. It is especially important to store your commit log on a separate drive when using this mode.
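The relevant cassandra.yaml settings, shown here with the defaults described above (a sketch; only one of the two modes is active at a time):

# default: sync the commit log to disk every 10 seconds
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000

# alternative: do not acknowledge writes until the commit log is synced
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 50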
In the event that your server loses power, on startup Cassandra replays the commit log to rebuild its memtable. This process will take seconds (possibly minutes) on very write heavy servers.
If you want to ensure that the data in the memtables is written to disk you can run 'nodetool flush' (this operates per node). This will create a new SSTable and delete the commit logs referring to data in the memtables flushed.
You are asking something like
What happens if there is a network failure while data is being loaded into Oracle using SQL*Loader?
Or what happens if Sqoop stops processing due to some condition while transferring data?
Simply put, whatever data was transferred before the electrical failure or system shutdown will remain as it is.
Coming to the second question: whenever the memtable runs out of space, i.e. when the number of keys exceeds a certain limit (128 is the default) or when the time duration is reached (cluster clock), it is written to an SSTable, which is immutable.
The application we are currently working on, with an Informix DB and the Geronimo app server, is throwing a lock timeout expired exception because one of the "READ" operations is taking a long time to complete its transaction while another UPDATE operation is changing the record.
The approach was to increase the lock wait timeout value so that transactions can wait for existing transactions to complete.
The following configuration was made in the datasource definition for the Informix database under the Geronimo console.
IfxIFX_LOCK_MODE_WAIT - 3000
However, we are still getting the lock wait timeout exception.
Is there any other solution to increase the lock wait timeout value?
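For context, that datasource property corresponds to Informix's lock wait behaviour, which can also be set per session in SQL. A hedged sketch (the 3000 mirrors the configured value; whether the driver property uses the same units depends on the driver):

-- wait up to 3000 seconds for a conflicting lock before raising an error
SET LOCK MODE TO WAIT 3000;
-- or wait indefinitely:
-- SET LOCK MODE TO WAIT;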
What are the possible causes of premature redo log switching in Oracle other than reaching the specified file size and executing ALTER SYSTEM SWITCH LOGFILE?
We have a situation where some (but not all) of our nodes are prematurely switching redo log files before filling up. This happens every 5 - 15 minutes, and the size of the logs in each case varies wildly (from 15% to 100% of the specified size).
This article says that it behaves differently in RAC.
In a parallel server environment, the LGWR process in each instance holds a KK instance lock on its own thread. The id2 field identifies the thread number. This lock is used to trigger forced log switches from remote instances. A log switch is forced whenever the current SCN for a thread falls behind the force SCN recorded in the database entry section of the controlfile. The force SCN is one more than the highest high SCN of any log file reused in any thread.
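A hedged starting point for investigating this on each instance is to compare switch frequency and configured log sizes per thread using the standard dynamic performance views v$log_history and v$log:

-- how often each thread is switching, per hour
SELECT thread#, TRUNC(first_time, 'HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
GROUP  BY thread#, TRUNC(first_time, 'HH24')
ORDER  BY thread#, hour;

-- configured size and status of each online redo log group
SELECT thread#, group#, bytes / 1024 / 1024 AS size_mb, status
FROM   v$log;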