Why does NHibernate session.Query&lt;T&gt;().Delete() time out?

I discovered that since NHibernate 5.0 I can call the following code to delete all records of a table:
session.Query<T>().Delete();
It executes the delete on the database server without fetching the entities across the network, which should improve performance a lot.
But this query times out.
I get the following error:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I've tried setting the Connection Timeout setting in my connection string to 0 and to 3600, but it makes no difference.
My table only has about 200 records, but one column holds a base64-encoded PDF, so the rows are quite big and the query takes a few seconds.
What else can I try?

It sounds like you need to increase the command timeout.
Note that there is a difference between connection timeout and command timeout.
You can set the default command timeout (in seconds) for ADO.NET commands created by NHibernate using the command_timeout NHibernate session configuration property. There are multiple ways to do that depending on what method you use to configure NHibernate. Here are the basics: http://nhibernate.info/doc/nhibernate-reference/session-configuration.html
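For example, with XML-based configuration the property goes in the session-factory section of hibernate.cfg.xml like this (the 300-second value here is just an illustration, not a recommendation):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- ... connection.driver_class, connection.connection_string, etc. ... -->
    <!-- Default ADO.NET command timeout, in seconds -->
    <property name="command_timeout">300</property>
  </session-factory>
</hibernate-configuration>
```

If you configure NHibernate in code instead, the same key can be set via `Configuration.SetProperty("command_timeout", "300")` before building the session factory.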

Related

Passing large amount of rows as a User defined Table value Parameter to SP in Sql Azure taking long time

We need to pass around 80K rows to an SP in SQL Azure. Previously we were able to do so without any glitches, but currently, once we call the SP from C#, it sometimes takes 10-15 minutes to start executing in the DB, and many times the SP does not get executed at all.
One thing I have noticed: once we make the call from C#, some operation starts in the DB, and if I try to alter the SP, that operation blocks it. Info about the blocking session ID is not available in sp_who2 or in sys.dm_exec_requests.
Any help to resolve this issue is highly appreciated.
I was able to fix the issue by setting the MaxLength of the columns of the DataTable that I send to the DB.
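As a minimal sketch of that fix (the column names and sizes here are hypothetical): a string DataColumn defaults to MaxLength -1, which the SqlClient TVP path treats as nvarchar(max), and that can slow the transfer down dramatically for wide row sets. Setting an explicit MaxLength that matches the table type avoids it:

```csharp
using System.Data;

var table = new DataTable();

// Without an explicit MaxLength, string columns default to -1 (nvarchar(max)).
// Matching the sizes declared in the user-defined table type keeps the
// TVP streaming path fast.
DataColumn name = table.Columns.Add("Name", typeof(string));
name.MaxLength = 100;   // hypothetical; match the table type's column size

DataColumn code = table.Columns.Add("Code", typeof(string));
code.MaxLength = 20;
```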

SQL Workbench/J and BigQuery

I am currently using SQL Workbench/J as my GUI to interface and run queries for BigQuery, however I'm running into a 10 second timeout issue when running more complicated queries. Any advice on how to increase the default timeout limit? Is it accessible/changeable from within the GUI? The error message I'm receiving is this:
[Simba]BigQueryJDBCDriver The job has timed out on the server. Try increasing the timeout value. [SQL State=HY000, DB Errorcode=100034]
(PS: I set up SQL Workbench/J using these steps and this driver)
When you define the connection string you can add driver properties; Timeout is one of them.
Simply add:
jdbc:bigquery://...;Timeout=3600;

Max out Azure SQL DTU with SQL inside code, but not from SQL Server Management Studio

I have a bit of a funny situation. Our Azure SQL instance maxes out at 100% DTU for a certain query, and the query returns a timeout:
SqlException (0x80131904): Timeout expired. The timeout period
elapsed prior to completion of the operation or the server is not
responding. This failure occurred while attempting to connect to the
routing destination.
If I run exactly the same query (with the parameters hardcoded) in SQL Server Management Studio, it still takes the DTU up to 25%, but that's far away from 100%. Nothing else runs on that server. There are a few other queries that run before/after, but if we run just those, nothing spikes.
Any ideas?
My analysis of the issue goes like this:
First, when DTUs are maxed out and a query fails because of that, you will not get a timeout. You will get the following error message instead:
Resource ID: %d. The %s limit for the database is %d and has been reached. For more information
You can test that by running multiple resource-intensive queries at once.
Secondly, when you get timeouts like the one in your question, it is mostly because the query is waiting on resources, say database IO or memory.
We faced similar timeouts; most of them were fixed by updating statistics and rebuilding indexes, and we optimized the rest.
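As a sketch, the kind of maintenance described above looks like this in T-SQL (the table and index names are placeholders):

```sql
-- Refresh optimizer statistics for a table (FULLSCAN is thorough but slower)
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Rebuild one fragmented index
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REBUILD;

-- Or rebuild every index on the table
ALTER INDEX ALL ON dbo.MyTable REBUILD;
```

Stale statistics are a common cause of a plan that works in one context and times out in another, so UPDATE STATISTICS is usually the cheaper first step.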

Why does Timeout.timeout(sec) not work with activerecord

I have the following code running to catch any SQL statements that might get hung. While trying to test this, I wrote a horribly optimized SQL statement that takes over a minute to run. I put a 20-second timeout wrapper around an ActiveRecord execute call, but it doesn't seem to interrupt the SQL call for taking too long. This is running against Oracle databases.
start_time = Time.now
Timeout.timeout(20) do # 20-second timeout for long-running sql
  @connection.connection.execute(sql_string)
end
total_sql_time = Time.now - start_time
puts "Sql statement took #{total_sql_time} seconds"
output
Sql statement took 64 seconds
(Just a guess…)
Ruby's Timeout.timeout() works by starting a watchdog thread that raises an exception in the calling thread when the time is up. That exception can only be delivered at a point where the Ruby interpreter is in control; it cannot interrupt a call that is blocked inside a C extension — such as a database driver waiting for a query to return. So the timer fires, but your `execute` call only sees the exception after the query has already finished.
Admittedly, I don't have much experience with Rails backed by Oracle, but there's the timeout option in the database config file.
# config/database.yml
production:
  ...
  timeout: 20000
It's in milliseconds.
If I can assume you are in the development environment, you might want to explicitly set your explain threshold and review the console log:
# config/environments/development.rb
config.active_record.auto_explain_threshold_in_seconds = 20
Remember that ActiveRecord is an ORM, which is to say it's a layer of abstraction that sits on top of your database.
This is important in the context of what happens when Timeout.timeout executes. Database adapters are responsible for taking a command in ruby and converting them into commands that are executed in your database, an entirely different process.
When a timeout happens, the call to @connection.connection.execute, as a Ruby command, is interrupted in the adapter, but not the SQL running on the database itself, as you've noted.
You should be able to make use of the optional exception-class parameter klass in Timeout.timeout: raise a custom exception class, rescue it, and then send your database a command to cancel the query itself. Exactly how depends on your database adapter; KILL QUERY is the command I would issue if I were using MySQL.
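A minimal sketch of that idea, using a plain `sleep` to stand in for the slow query (note that `sleep`, unlike many driver calls stuck in a C extension, *is* interruptible) and a comment marking where the hypothetical adapter-specific cancel command would go:

```ruby
require 'timeout'

# Custom exception so our rescue doesn't swallow unrelated Timeout::Errors.
class QueryTimeout < StandardError; end

def execute_with_timeout(seconds)
  Timeout.timeout(seconds, QueryTimeout) { yield }
rescue QueryTimeout
  # Hypothetical: issue the adapter-specific cancel here, e.g. on MySQL:
  #   connection.execute("KILL QUERY #{connection.raw_connection.thread_id}")
  :cancelled
end

# Demo: sleep stands in for a slow query; the block is interrupted after 1s.
result = execute_with_timeout(1) { sleep 5; :finished }
# result == :cancelled
```

The custom exception class matters: rescuing bare `Timeout::Error` risks catching timeouts raised by nested libraries, while a dedicated class keeps the rescue scoped to this wrapper.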

Suspended status in SQL Activity Monitor

What would cause a query being done in Management Studio to get suspended?
I perform a simple select top 60000 from a table (which has 11 million rows) and the results come back within a sec or two.
I change the query to top 70000 and the results take up to 40 min.
From doing a bit of searching on another but related issue I came across someone using DBCC FREEPROCCACHE to fix it.
I ran DBCC FREEPROCCACHE and then redid the query for 70000, and it seemed to work.
However, the issue still occurs with a different query.
I increase to say 90000 or if I try to open the table using [Right->Open Table], it pulls about 8000 records and stops.
Checking Activity Monitor when I do the Open Table shows the session has been suspended with a wait type of "Async_Network_IO". For the session running the select of 90000, the status is "Sleeping"; this is the same status as for the select-70000 query above, which did return, but in 45 min. It is strange to me that the status shows "Sleeping" and does not appear to change to "Runnable" (I have Activity Monitor refreshing every 30 sec).
Additional notes:
I am not running both the Open Table and select 90000 at the same time. All queries are done one at a time.
I am running 32-bit SQL Server 2005 SP2 CU9. I tried upgrading to SP3 but ran into install failures. The issue was occurring prior to my trying this upgrade.
Server setup is an Active/Active cluster the issue occurs on either node, and the other instance does not have this issue.
I have ~20 other databases on this same server instance, but only this one DB is seeing the issue.
This database is fairly large. It is currently at 76,756.19 MB; the data file is 11,513 MB.
I am logged in locally on the Server box using Remote Desktop.
The wait type "Async_Network_IO" means the session is waiting for the client to retrieve the result set because SQL Server's network buffer is full. Why your client isn't picking up the data in a timely manner I can't say.
The other case where it can happen is with linked servers: when SQL Server is querying a remote table, it waits on this type for the remote server to respond.
Something worth looking at is virus scanners; if they are monitoring network connections they can sometimes lag, and it's often apparent from them hogging all the CPU.
Suspended means the session is waiting on a resource and will resume when it gets that resource. Judging from the sizes you are pulling back, it seems you are running an OLAP-type query.
Try the following things:
Use NOLOCK or set the TRANSACTION ISOLATION LEVEL at the top of the query
Check your execution plan and tune the query to be more efficient
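As a sketch of the first suggestion (the table name is a placeholder; note that both forms allow dirty reads, so they are only appropriate for reporting-style queries):

```sql
-- Option 1: per-table hint
SELECT TOP 90000 *
FROM dbo.MyTable WITH (NOLOCK);

-- Option 2: set it once for the whole batch
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT TOP 90000 *
FROM dbo.MyTable;
```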