I have a simple table with 6 columns. Most of the time, insert statements against it work just fine, but once in a while I get a DB timeout exception:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.
Timeout is set to 10 seconds.
I should mention that I'm using NHibernate and that the statement also includes a "SELECT SCOPE_IDENTITY()" right after the insert itself.
My thought was that the table was locked or something, but there were no other statements running on that table at that time.
All the inserts are very simple, everything looks normal in SQL Profiler, and the table has no indexes other than the PK (page fullness: 98.57%).
Any ideas on what I should look for?
Thanks.
I think your most likely culprit is a blocking lock from another transaction (or maybe from a trigger or something else behind the scenes).
The easiest way to tell is to kick off the INSERT, and while it's hung, run EXEC SP_WHO2 in another window on the same server. This will list all of the current database activity, and has a column called BLK that will show you if any processes are currently blocked. Check the SPID of your hung connection to see if it has anything in the BLK column, and if it does, that's the process that's blocking you.
Even if you don't think there are any other statements running, the only way to know for sure is to list the current transactions using an SP like that one.
This question seems like a good place for a code snippet which I used to see the actual SQL text of the blocked and blocking queries.
The snippet below employs the convention that SP_WHO2 returns " ." text for BlockedBy for the non-blocked queries, and so it filters them out and returns the SQL text of the remaining queries (both "victim" and "culprit" ones):
--prepare a table so that we can filter out sp_who2 results
DECLARE @who TABLE(BlockedId INT,
                   Status VARCHAR(MAX),
                   LOGIN VARCHAR(MAX),
                   HostName VARCHAR(MAX),
                   BlockedById VARCHAR(MAX),
                   DBName VARCHAR(MAX),
                   Command VARCHAR(MAX),
                   CPUTime INT,
                   DiskIO INT,
                   LastBatch VARCHAR(MAX),
                   ProgramName VARCHAR(MAX),
                   SPID_1 INT,
                   REQUESTID INT)

INSERT INTO @who EXEC sp_who2
--select the blocked and blocking queries (if any) as SQL text
SELECT
    (
        SELECT TEXT
        FROM sys.dm_exec_sql_text(
            (SELECT handle
             FROM (
                 SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
                 FROM sys.sysprocesses WHERE spid = BlockedId
             ) query)
        )
    ) AS 'Blocked Query (Victim)',
    (
        SELECT TEXT
        FROM sys.dm_exec_sql_text(
            (SELECT handle
             FROM (
                 SELECT CAST(sql_handle AS VARBINARY(128)) AS handle
                 FROM sys.sysprocesses WHERE spid = BlockedById
             ) query)
        )
    ) AS 'Blocking Query (Culprit)'
FROM @who
WHERE BlockedById != ' .'
It could be that the table is taking a long time to grow.
If you have the table set to grow by a large amount, and don't have instant file initialization enabled, then the query could certainly timeout every once in a while.
Check out the MSDN documentation on instant file initialization for the details of this mess.
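To see whether autogrow settings could be the culprit, a quick sketch like this (run in the affected database, SQL 2005 and later) shows each file's growth increment:

SELECT name, type_desc, size, growth, is_percent_growth
FROM sys.database_files
-- is_percent_growth = 1 means percentage-based growth; a large fixed
-- growth increment without instant file initialization can stall inserts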
"no other statements running on that table at that time."
What about statements running against other tables as part of a transaction? That could leave locks on the problem table.
Also check for log file or data file growth happening at the time; if you're running SQL 2005, it would show in the SQL error logs.
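As a quick check for a lingering transaction holding locks, something like this reports the oldest active transaction:

-- run in the database in question
DBCC OPENTRAN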
Our QA environment had some Excel connections that returned big result sets; those queries got suspended with a wait type of ASYNC_NETWORK_IO for some time. During this time all other queries timed out, so that specific insert had nothing to do with it.
Look at the fragmentation of the table; you could be getting page splits because of that.
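A sketch of how to check that (the table name is a placeholder; SQL 2005 and later):

-- per-index fragmentation for one table; 'LIMITED' keeps the scan cheap
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED')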
I'm probably asking a silly question, but is there a way to execute SQL scripts that depend on each other?
I have a set of 20 scripts, and each one depends on the table that the previous script creates. Currently it's a case of waiting for each one to finish without error before setting off the next one. This was fine for a while, but now the total run time is around 15 hours, so it would be really good if I could just set this off over a weekend and leave it without having to keep an eye on things.
You can create a stored proc like this:
CREATE PROC SPWaitforTable
    @tableName VARCHAR(255)
AS
WHILE 1 = 1
BEGIN
    IF EXISTS (
        SELECT name FROM sys.tables WHERE name = @tableName)
        RETURN
    ELSE
        WAITFOR DELAY '00:00:01'
END
You can run all your scripts at once; each one will wait until the table it depends on has been created before proceeding.
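For example, each dependent script could start with a call like this (the table name is hypothetical), which blocks until the prerequisite table exists:

-- blocks until the table created by the previous script exists
EXEC SPWaitforTable 'TableFromPreviousScript'
-- ...rest of the script, which can now safely reference that table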
I have several linked servers and I want to insert a value into each of them. On the first execution, the INSERT using a CURSOR took far too long; it ran for about 17 hours. I was curious about those INSERT queries, so I checked a single line of my INSERT query using Display Estimated Execution Plan, and it showed a cost of 46% for the Remote Insert and 54% for the Constant Scan.
Below are the code snippets I worked with:
DECLARE @Linked_Servers VARCHAR(100)

DECLARE CSR_STAGGING CURSOR FOR
    SELECT [Linked_Servers]
    FROM MyTable_Contain_Lists_of_Linked_Server

OPEN CSR_STAGGING
FETCH NEXT FROM CSR_STAGGING INTO @Linked_Servers

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC('
            INSERT INTO ['+@Linked_Servers+'].[DB].[Schema].[Table] VALUES (''bla'',''bla'',''bla'')
        ')
    END TRY
    BEGIN CATCH
        DECLARE @ERRORMSG AS VARCHAR(8000)
        SET @ERRORMSG = ERROR_MESSAGE()
    END CATCH

    FETCH NEXT FROM CSR_STAGGING INTO @Linked_Servers
END

CLOSE CSR_STAGGING
DEALLOCATE CSR_STAGGING
I checked the estimated execution plan for the INSERT query only, not for all the queries.
What is the best practice for getting the best performance from a remote INSERT?
You can try this, but I think the difference will be only negligibly better. I recall from reading about the different approaches to doing inserts across linked servers that most of the standard approaches were basically on par with each other, though it's been a while since I looked that up, so don't quote me.
It will also require some light rewriting due to the obvious differences in approach (assuming you are able to do so anyway). The dynamic SQL required to do this might be tricky, though, as I am not entirely sure you can call OPENQUERY within dynamic SQL (I should know this, but I've never needed to).
However, if you can use this approach, the main benefit is that the where clause gets the destination schema without having to select any data (because 1 will never equal 0).
INSERT OPENQUERY (
    [your-server-name],
    'SELECT
        somecolumn
        , anothercolumn
    FROM destinationTable
    WHERE 1=0'
    -- this will help reduce the scan as it will
    -- get schema details without having to select data
)
SELECT
    somecolumn
    , anothercolumn
FROM sourceTable
Another approach you could take is to build an insert proc on the destination server/DB. Then you just call the proc, passing the params over. While this is a little more work and introduces more objects to maintain, it adds simplicity to your process, potentially reduces I/O when sending things across the linked servers, and might save on the CPU cost of your Constant Scans as well. I think it's probably a cleaner-cut approach than trying to optimize linked-server behavior.
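A minimal sketch of that proc-based approach (the server, database, proc, column names, and types below are all placeholders):

-- on the destination server/DB:
CREATE PROC dbo.InsertIntoTable
    @col1 VARCHAR(50),
    @col2 VARCHAR(50),
    @col3 VARCHAR(50)
AS
BEGIN
    INSERT INTO [Schema].[Table] (col1, col2, col3)
    VALUES (@col1, @col2, @col3)
END

-- from the source server, call it by its four-part name:
EXEC [your-server-name].[DB].[dbo].InsertIntoTable 'bla', 'bla', 'bla'

Note that remote procedure calls require the 'rpc out' option to be enabled on the linked server (see sp_serveroption), so check that before adopting this approach.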
I have a table named inventory_transaction in a SQL Server 2008 R2 production environment. The table and indexes, as well as the maintenance package to maintain them, were developed by a professional vendor. We have had no problems for almost 10 years since implementing the program that uses this table, and there is no fragmentation above 8% on any of the indexes for the table.
However, I am having some trouble now. When I try to filter this table by one specific value, the query will not complete. Any other value will make the query execute in < 1 sec.
Here is the specific query that will not complete:
DECLARE @BASE_ID VARCHAR(30) = 'W46873'

SELECT *
FROM PROD.DBO.INVENTORY_TRANS
WHERE WORKORDER_BASE_ID = @BASE_ID
I let it run for 35 minutes before cancelling it; I never receive an error message. If I put literally any other value in @BASE_ID, it executes in < 1 sec. If I change the equality operator on WORKORDER_BASE_ID to LIKE and change @BASE_ID to 'W4687%', it works. All values in this column have the same format (W + 5 numbers).
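For reference, this is the LIKE variant that does complete, reconstructed from the description above:

DECLARE @BASE_ID VARCHAR(30) = 'W4687%'

SELECT *
FROM PROD.DBO.INVENTORY_TRANS
WHERE WORKORDER_BASE_ID LIKE @BASE_ID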
The data type of @BASE_ID is the same as that of the column WORKORDER_BASE_ID on the table. There is a nonclustered index that includes this column, and it is being used in other query plans that do complete for other values. Since this one won't complete, I'm not sure what execution plan it is actually using.
Any ideas what could be causing this or what I could do to fix it? I can provide more information as needed. This issue is causing timeouts within the program, which creates a lot of problems.
EDIT1:
Running the query with OPTION(RECOMPILE) doesn't help.
DECLARE @BASE_ID VARCHAR(30) = 'W46873'

SELECT *
FROM VMFGTEST.DBO.INVENTORY_TRANS
WHERE WORKORDER_BASE_ID = @BASE_ID
OPTION (RECOMPILE)
Your query may have been blocked by another process. While running your query in SQL Server Management Studio (SSMS), check the tab title; the parenthesized number at the end is the Server Process ID (SPID).
While your query is still executing, execute the following command in another window to check if your query is being blocked:
/* Replace 70 with actual SPID */
SELECT * FROM sys.sysprocesses WHERE spid = 70
If the blocked column contains a positive number, this is the SPID of the process blocking your query. For more details about this process, execute the following command:
/* Replace 62 with the actual SPID of the process in question */
DBCC INPUTBUFFER(62)
SELECT * FROM sys.sysprocesses WHERE spid = 62
The EventInfo column may give you a hint about what is being executed on that connection, while login_time, hostname, program_name, and other columns can pinpoint the connected user. For example, another transaction may still be active.
I'd like to write a procedure that returns the name of each table that has a row with a specific id. In other words, the tables have a column 'id' which is of type varchar and contains a UUID. After doing some research, I chose the following approach (simplified, focusing on the problem that I can't solve/understand):
-- get a cursor for all foo table names that have an id column
DECLARE table_name_cursor CURSOR FOR
    SELECT a.name
    FROM sysobjects a, syscolumns b
    WHERE a.id = b.id
    AND a.name like 'Foo%'
    AND b.name = 'id'
GO

-- define some variables
DECLARE @current_table_name VARCHAR(100)
DECLARE @id_found VARCHAR(100)

OPEN table_name_cursor
FETCH table_name_cursor INTO @current_table_name

WHILE @@SQLSTATUS = 0
BEGIN
    EXEC ('SELECT @id_found = id FROM ' + @current_table_name + ' WHERE id = ''' + @id_param + '''') -- @id_param is passed in with the procedure call
    SELECT @current_table_name
    FETCH table_name_cursor INTO @current_table_name
END

-- clean up resources
CLOSE table_name_cursor
DEALLOCATE table_name_cursor
It works as expected when the cursor result set is fairly small (~20 tables in my case), but if the cursor size grows, the procedure never terminates.
It smells like a resource problem but my white belt in Sybase-Fu doesn't help finding the answer.
Question: why does it stop working with 'too many' cursor rows, and is there a way to get it working with this approach?
Is there an alternative (better) way to solve the real problem (running queries against all tables)? This is not intended for production; it's just a dev/maintenance script.
It might help to have some context around your comment that "it stops working", eg: does the proc return unexpectedly, does the proc generate a stack trace, is it really 'stopped', or is it 'running longer than expected'?
Some basic monitoring should help figure out what's going on (see the sketch after this list):
does sp_who show the cursor process as being blocked (eg, by other processes that have an exclusive lock on data you're querying)
do periodic queries of master..monProcessWaits where SPID = <spid_of_cursor_process> show any events with largish amounts of wait time (eg, high wait times for disk reads; high wait times for network writes)
do periodic queries of master..{monProcessStatement|monProcessObject} where SPID = <spid_of_cursor_process> show cpu/wait/logicalreads/physicalreads increasing?
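As a rough sketch of the monProcessWaits check above (123 stands in for the cursor process's spid; run it periodically and watch whether WaitTime keeps climbing for any event):

-- waits accumulated by the cursor process, by wait event
SELECT WaitEventID, Waits, WaitTime
FROM master..monProcessWaits
WHERE SPID = 123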
I'm guessing some of your SELECTs are running against largish tables with no usable index on the id column, with the net result being that some SELECTs are running expensive (and slow) table and/or index scans, possibly having to wait while large volumes of data are pulled from disk.
If my guess is correct, the MDA tables should show ever increasing numbers for disk waits, logical/physical reads, and to a lesser extent cpu.
Also, if you are seeing large volumes of logical/physical reads (indicative of table/index scans), the query plan for the currently running SELECT should confirm the use of a table/index scan (and thus the inability to find/use an index on the id column for the current table).
For your smaller/faster test runs I'm guessing you're hitting either a) smaller tables where table/index scans are relatively fast and/or b) tables with usable indexes on the id column (and thus relatively fast index lookups).
Something else to consider ... what application are you using to make your proc call?
I've lost track of the number of times where a user has run into some ... shall I say 'funky' issues ... when accessing ASE; with the issue normally being tracked back to a configuration or coding issue with the front-end/client application.
In these circumstances I suggest the user run their query(s) and/or procs via the isql command line tool to see if they get the same 'funky' results; more often than not the isql command line session does not show the 'funky' behavior, thus pointing to an issue with whatever application/tool the user has been using to access ASE.
NOTE: By isql command line tool I mean exactly that ... the command line tool ... not to be confused with wisql or dbisql or any other point-n-click GUI tool (many of which do cause some 'funky' behavior under certain scenarios).
NOTE: Even if this turns out to be a client-side issue (as opposed to an ASE issue), the MDA tables can often pinpoint this, eg, monProcessWaits might show a large amount of wait time while waiting for output (to the client) to complete; in this scenario sp_who would also show the spid with a status of send sleep (ie, ASE is waiting for client to process the last result set sent by ASE to the client).
Executing the following statement with SQL Server 2005 (My tests are through SSMS) results in success upon first execution and failure upon subsequent executions.
IF OBJECT_ID('tempdb..#test') IS NULL
    CREATE TABLE #test ( GoodColumn INT )

IF 1 = 0
    SELECT BadColumn
    FROM #test
What this means is that something is comparing the columns I am accessing in my SELECT statement against the columns that exist on the table when the script is "compiled". For my purposes this is undesirable functionality. My question is whether there is anything that can be done so that this code executes successfully on every run, or, if that is not possible, perhaps someone could explain why the demonstrated functionality is desirable. The only solutions I currently have are to wrap the SELECT with EXEC or to SELECT *, but I don't like either of those solutions.
Thanks
If you put:
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
GO
At the start, then the problem will go away, as the batch will get parsed before the #test table exists.
What you're asking is for the system to recognise that "1=0" will always evaluate to false. If it were ever true (which could potentially be the case for most real-life conditions), then you'd probably want to know that you were about to run something that would cause failure.
If you drop the temporary table and then create a stored procedure that does the same:
CREATE PROC dbo.test
AS
BEGIN
    IF OBJECT_ID('tempdb..#test') IS NULL
        CREATE TABLE #test ( GoodColumn INT )

    IF 1 = 0
        SELECT BadColumn
        FROM #test
END
Then this will happily be created, and you can run it as many times as you like.
Rob
Whether or not this behaviour is "desirable" from a programmer's point of view is debatable of course -- it basically comes down to the difference between statically typed and dynamically typed languages. From a performance point of view, it's desirable because SQL Server needs complete information in order to compile and optimize the execution plan (and also cache execution plans).
In a word, T-SQL is not an interpreted or dynamically typed language, and so you cannot write code like this. Your options are either to use EXEC, or to use another language and embed the SQL queries within it.
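For example, a minimal sketch of the EXEC workaround applied to the original batch (the inner statement is only compiled when that branch actually executes):

IF OBJECT_ID('tempdb..#test') IS NULL
    CREATE TABLE #test ( GoodColumn INT )

IF 1 = 0
    -- compiled at execution time, so the batch parses even though
    -- BadColumn does not exist on #test
    EXEC('SELECT BadColumn FROM #test')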
This problem is also visible in these situations:
IF 1 = 1
    SELECT dummy = GETDATE() INTO #tmp
ELSE
    SELECT dummy = GETDATE() INTO #tmp
Although the second statement is never executed, the same error occurs.
It seems the query engine's first-level validation ignores all conditional statements.
You say you have problems with subsequent requests, and that is because the object already exists. It is recommended that you drop your temporary tables as soon as you are done with them.
Read more about temporary table performance at SQL-Server-Performance.com.