I have a SQL Server 2000 database with a stored procedure that deletes a row from a specific table, given its id. When I call the stored procedure from VB.NET, it does not delete the row, but running the same script directly against the database in SSMS works.
Here's my chain of events:
1. Start SQL Server Profiler to watch all calls to the database. I have it set up to track when the stored procedure starts and completes, and even when SQL statements start/complete within that stored procedure.
2. Call the stored procedure via the VB.NET dll.
3. Stop the profiler trace to avoid excessive data to dig through.
4. Select from the table, and see that the row still exists.
5. View the profiler trace, which only shows RPC:Starting, SP:Starting, RPC:Completed. No inner statements are traced, which explains why the row wasn't deleted: the DELETE statement never fired.
6. Copy/paste the EXEC call directly from the RPC:Starting trace entry from when it was called via VB.NET into a SQL Server Management Studio query window pointed at the same database with the same credentials.
7. Start the profiler again.
8. Execute the EXEC statement from step 6 in SSMS.
9. Stop the profiler.
10. Select from the table, and see that the row GOT DELETED like it should.
11. View the profiler trace, which shows SP:Starting, all statements starting/completing including the DELETE statement, and SP:Completed.
Why would running it via RPC make it not execute any of the statements in the proc, while running it directly behaves as it should?
EDIT: Below is my VB.NET code. This is the same code we use in over 100 other places:
Dim paramRowID As New SqlParameter("@RowID", sRowID)
Microsoft.ApplicationBlocks.Data.SqlHelper.ExecuteNonQuery(oConn, "spDeleteRow", paramRowID)
See SqlHelper source here.
EDIT: I hate myself right now. :) SQL Server threw an exception, "nvarchar is incompatible with image", about another parameter that I was passing NULL to. SSMS didn't worry about the type, but VB.NET did, since I didn't explicitly tell it that the parameter was of type image. Once I defined that param's type, it worked. I wish Profiler had told me there was an error, though.
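For what it's worth, the type clash can be reproduced in plain T-SQL (a sketch; the demo proc and its @Attachment parameter are invented here). A bare NULL literal is untyped and converts to image, while a NULL typed as nvarchar, roughly what ADO.NET sends for a SqlParameter with no explicit SqlDbType, cannot be:

-- Hypothetical proc with an image parameter, mimicking the one in question
CREATE PROCEDURE dbo.spDeleteRowDemo
    @RowID int,
    @Attachment image
AS
    SELECT @RowID  -- placeholder body
GO
-- Works: the bare NULL literal is untyped and converts to image
EXEC dbo.spDeleteRowDemo @RowID = 1, @Attachment = NULL
GO
-- Fails: an nvarchar-typed NULL cannot be converted to image,
-- which is roughly what an untyped ADO.NET parameter produces
DECLARE @v nvarchar(10)
EXEC dbo.spDeleteRowDemo @RowID = 1, @Attachment = @v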
Any help would be appreciated,
Greg
That would be because SSMS does not make an RPC call but sends a batch. In fact there is no way to issue an RPC from SSMS, since you cannot declare parameters, which is what differentiates an RPC call from a batch call in TDS:
2.2.1.3 SQL Batch
To send a SQL statement or a batch of SQL statements, the SQL batch, represented by a Unicode string, is copied into the data section of a TDS packet and then sent to the database server that supports SQL. A SQL batch may span more than one TDS packet. See section 2.2.6.6 for additional detail.
2.2.1.5 Remote Procedure Call
To execute a remote procedure call (RPC) on the server, the client sends an RPC message data stream to the server. This is a binary stream that contains the RPC name or numeric identifier, options, and parameters. RPCs MUST be in a separate TDS message and not intermixed with SQL statements. There can be several RPCs in one message. See section 2.2.6.5 for additional details.
So monitor instead for the SQL:BatchCompleted event and you'll see your SSMS statement(s).
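If you want one trace that catches both kinds of calls, you can add both event classes. A rough server-side sketch (the file path is an assumption; in SQL Trace, event 10 is RPC:Completed and event 12 is SQL:BatchCompleted):

-- Sketch: server-side trace capturing RPC:Completed (10) and
-- SQL:BatchCompleted (12), so both ADO.NET and SSMS calls show up.
DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 5
SET @on = 1
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\rpc_vs_batch', @maxfilesize
-- Column 1 = TextData, column 12 = SPID
EXEC sp_trace_setevent @TraceID, 10, 1, @on
EXEC sp_trace_setevent @TraceID, 10, 12, @on
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 12, @on
EXEC sp_trace_setstatus @TraceID, 1  -- start the trace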
Does the user the application is using to connect to SQL Server have permission to execute stored procedures? That is the first thing I would verify.
I created a CLR procedure to download email files. It works perfectly; the problem is that while it is running, it is not listed when I query the server processes.
Does anyone know a way to get it to show up in the SQL Server processes?
I'm using the query below
exec dbo.sp_download_files_mail
If you are expecting to see exec dbo.sp_download_files_mail in sys.dm_exec_sql_text via the sys.dm_exec_requests.plan_handle, then that's probably not going to happen. When you use EXEC it creates a sub-process that is probably a different execution plan. In the case of SQLCLR, SQL Server has no insight into what is happening unless you are executing T-SQL using SqlConnection, and then you will get the plan for the SQL being executed within the SQLCLR object and not the plan for the SQLCLR object itself. When you are executing a SQLCLR object and it is not executing any T-SQL statements, then both the sql_handle and plan_handle values are empty: 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.
However, you can see the SQLCLR object showing up in the plan_handle value coming back from sys.dm_exec_cached_plans. The SQLCLR object does seem to appear in this DMV upon being executed, but as this DMV reports cached objects, it does not necessarily get removed once the SQLCLR object completes. Hence, you can't use this DMV to indicate current running status of the object. Nor does the plan_handle value reported in sys.dm_exec_cached_plans show up in sys.dm_exec_requests while it is running.
You can test this behavior yourself by creating a SQLCLR Stored Procedure or SQLCLR scalar User-Defined Function that does nothing more than call System.Threading.Thread.Sleep() for at least 30 seconds. If you do not want to deal with creating this, a pre-made SQLCLR UDF – DB_WaitForDelay – exists in the Free version of the SQL# SQLCLR library (that I created) and is what I used in the example code below.
In SQL Server Management Studio (SSMS), open up two query tabs and paste in the following:
TAB 1
EXEC [SQL#].[DB_WaitForDelay] 30000, 1;
TAB 2
SELECT txt.*, req.*
FROM sys.dm_exec_requests req
OUTER APPLY sys.dm_exec_sql_text(req.[plan_handle]) txt
WHERE req.[session_id] = <session_id_of_Tab1>;
DBCC INPUTBUFFER(<session_id_of_Tab1>);
SELECT txt.*, cp.*
FROM sys.dm_exec_cached_plans cp
OUTER APPLY sys.dm_exec_sql_text(cp.[plan_handle]) txt
WHERE cp.[cacheobjtype] LIKE N'CLR%';
Once you have replaced the two instances of "<session_id_of_Tab1>" in the Tab 2 query, execute the Tab 1 query, then go back to Tab 2 and execute that batch of queries.
If you really need to know whether this SQLCLR object is executing while it is executing, then you will have to do something along the lines of using SqlConnection with a ConnectionString of Context Connection = true; and then, at the beginning of the SQLCLR object, executing something like SET CONTEXT_INFO 0x1234; (assuming that you are not already using CONTEXT_INFO for something else). At the end of the SQLCLR object, execute a second SqlCommand for SET CONTEXT_INFO 0x00; to clear it out.
This approach allows you to use the following query to confirm that it is currently running:
SELECT req.*
FROM sys.dm_exec_requests req
WHERE req.[context_info] = 0x1234;
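If modifying the SQLCLR code is not an option, a cruder variant (a sketch; the wrapper name and marker value are invented here) is to set the marker in a T-SQL wrapper around the call, since CONTEXT_INFO is per-session:

-- Hypothetical wrapper: marks the session before the SQLCLR call and
-- clears the marker afterwards.
CREATE PROCEDURE dbo.DownloadEmailFilesWrapper
AS
BEGIN
    SET CONTEXT_INFO 0x1234;          -- arbitrary marker value
    EXEC dbo.sp_download_files_mail;  -- the SQLCLR proc from the question
    SET CONTEXT_INFO 0x00;            -- clear the marker
END;

Setting it inside the SQLCLR object itself, as described above, is more precise, since the wrapper marks the whole call rather than just the CLR portion.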
Also, it is a rather bad practice to prefix Stored Procedure names with sp_, as that causes SQL Server to first check in [master] for the object and only then the current Database. Using something like spDownloadEmailFiles is better, though there is still no really good reason to prefix Stored Procedure / Function / Table / View names with anything.
I'm running a stored procedure on server1 from my application. The stored procedure does a bunch of stuff and populates a table on server2 with the results from the procedure.
I'm using a linked server to accomplish this.
When the stored procedure is done running, the application continues and tries to do some manipulation of the results from the stored procedure.
My problem is that the results from the stored procedure have not been completely inserted into the tables yet, so the manipulation of the tables fails.
So my question is: is it possible to ensure the insert on the linked server is done synchronously? I would like the stored procedure not to return until the tables on the linked server are actually done.
You can use an output parameter on the first procedure. When the table on the second server has been populated, the output parameter value will be returned to your application, indicating that the operation is complete.
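A minimal sketch of the idea (the procedure, linked server, and table names here are invented):

-- The output parameter is set only after the linked-server INSERT has
-- completed, so the caller can rely on it as a "done" signal.
CREATE PROCEDURE dbo.PopulateRemoteTable
    @Done bit OUTPUT
AS
BEGIN
    SET @Done = 0;
    INSERT INTO [Server2].[TargetDb].dbo.ResultTable (Col1, Col2)
    SELECT Col1, Col2 FROM dbo.SourceTable;
    SET @Done = 1;  -- reached only after the INSERT has finished
END;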
If that turns out to be difficult, then you can try setting a different isolation level in your stored procedure:
http://msdn.microsoft.com/en-us/library/ms173763.aspx
I found the reason for this strange behavior. There was a line of code in my stored procedure, added during debugging, that did a SELECT on a temporary in-memory table before the data in that same table was written to the linked server.
When that SELECT statement ran, control was given back to my application while the stored procedure continued running. I guess the stored procedure was running synchronously from the start.
I have a stored procedure that updates records in a very large table (over 100 million records).
The steps are as follows:
Store record IDs to be updated in a recordset (not all records will be updated - only about 20000)
Loop through the recordset and call the stored procedure for each record ID in the recordset
Each time the stored procedure has finished (for each record ID in the recordset mentioned in step 1), update a flag in a table to say that the update completed.
I am finding some strange behaviour. It appears that the stored procedure is passing control back to VB6 before it has completed its updates, and the loop is continuing on to the next record. The stored procedure is then timing out later on (on another record ID). Therefore there are flags that say updated (step 3) even though the stored procedure has not actually run (because it timed out). Is this normal behaviour, i.e. for the stored procedure to pass control back to VB6 before it has finished the work?
I have Googled this and I have discovered that it could be because of the way the stored procedure is optimised by SQL Server. I would expect control only to be passed back to VB6 after the updates have completed. Is this not the case?
Please note that I realise there may be better ways of approaching this. My question specifically relates to SQL Server passing control back to VB6 before it has finished the work (update).
The following article proved to be the solution to this problem: http://weblogs.sqlteam.com/dang/archive/2007/10/20/Use-Caution-with-Explicit-Transactions-in-Stored-Procedures.aspx. It appears that the following behaviour was happening:
1) Record 1. Run stored procedure and create transaction. Timeout on SQL Command object occurs.
2) Record 2. Run stored procedure successfully. Return control to VB6 to update flag in database.
3) Record 3. Run stored procedure successfully. Return control to VB6 to update flag in database.
4) Record 4. Run stored procedure successfully. Return control to VB6 to update flag in database.
5) Program ends. The open transaction is rolled back (it now encompasses records 1-4). Therefore records 1-4 are not updated.
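If I recall that article correctly, its core recommendation is SET XACT_ABORT ON, so that a client timeout (an attention event) rolls back the open transaction instead of leaving it dangling on the pooled connection. A rough sketch of the pattern (proc, table, and column names are invented):

-- Sketch: with XACT_ABORT ON, a client-side timeout rolls back the open
-- transaction rather than leaving it open to swallow later work on the
-- same pooled connection.
CREATE PROCEDURE dbo.UpdateRecord
    @RecordID int
AS
BEGIN
    SET XACT_ABORT ON
    BEGIN TRAN
    UPDATE dbo.BigTable SET Processed = 1 WHERE RecordID = @RecordID
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRAN
        RETURN 1
    END
    COMMIT TRAN
END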
Can you run the code in SQL Server Management Studio, see what happens, and report back? If so, I will update this answer, as that will help us understand whether it's the code, the connection, or SQL Server.
Other things to investigate, given we don't know what cases you have tested for:
Use the same code path in your VB application and change only the SQL in the stored procedure to something very simple that has the same signature as far as what it's doing (i.e. basic reading if there is reading, basic deleting if there is deleting, and the same for updating and adding) to see what happens.
Also, some other thoughts...
If you are using MSSQL, it can be as simple as someone leaving a query window open that ties up the database. This is easily tested, and I've had the same trouble before: I've run stored procedures that had no timeout and would normally run immediately, but they would sit overnight and not run, only for me to realize another person had left their query window open. Close their window and poof, it finally runs. Check this out; it could be a table lock, whether held by the application or by another user querying the DB. Also check to make sure your application is closing its connections to the DB each time they are used.
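A quick way to check for that kind of blocking (these system objects exist in older versions of SQL Server too):

-- Any row here is a SPID waiting on the SPID in the "blocked" column
SELECT spid, blocked, status, hostname, program_name, cmd
FROM master..sysprocesses
WHERE blocked <> 0

-- sp_who2 shows a similar picture, with the blocker in the BlkBy column
EXEC sp_who2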
What is the best way to troubleshoot a stored procedure in SQL Server? I mean, where do you start, etc.?
Test each SELECT statement (if any) outside of your stored procedure to see whether it returns the expected results;
Make INSERT and UPDATE statements as simple as possible;
Try to test Inserts and Updates outside of your SP so that you can check they give the expected results;
Use the debugger provided with SSMS Express 2008.
Visual Studio 2008 / 2010 has a debug facility. Simply connect to your SQL Server instance in 'Server Explorer' and browse to your stored procedure.
Visual Studio 'Test Edition' also can generate Unit Tests around your stored procedures.
Troubleshooting a complex stored proc is far more than just determining whether you can get it to run and finding the step which won't run. What is most critical is whether it actually returns the correct results or performs the correct actions.
There are two kinds of stored procs that need extensive abilities to troubleshoot. First is the proc which creates dynamic SQL. I never create one of these without an input parameter of @debug. When this parameter is set, I have the proc print the SQL statement as it would have run, rather than running it. Almost every time, this leads you right to the problem, as you can then see the syntax error in the generated SQL code. You can also run this SQL code to see if it returns the records you expect.
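A minimal sketch of that @debug pattern (the proc, table, and column names are invented for illustration):

-- When @debug = 1, print the generated SQL instead of executing it.
CREATE PROCEDURE dbo.SearchOrders
    @SortColumn sysname,
    @debug bit = 0
AS
BEGIN
    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders ORDER BY '
             + QUOTENAME(@SortColumn)
    IF @debug = 1
        PRINT @sql   -- show the statement as it would have run
    ELSE
        EXEC (@sql)  -- normal execution path
END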
Now with complex procs that have many steps that affect data, I always use an @test input parameter. There are two things I do with the @test parameter: first, I make it roll back the actions so that a mistake in development won't mess up the data. Second, I have it display the data before it rolls back, to see what the results would have been. (These actually appear in the reverse order in the proc; I just think of them in this order.)
Now I can see what would have gone into the table or been deleted from the tables without affecting the data permanently. Sometimes I might start with a select of the data as it was before any actions and then compare it to a select run afterwards.
Finally, I often want to log the actions of a complex proc and see exactly what steps happened. I don't want those logs to get rolled back if the proc hits an error, so I set up a table variable for the logging information at the start of the proc. After each step (or after an error, depending on what I want to log), I insert into this table variable. After the rollback or commit statement, I select the results of the table variable or use those results to log to a permanent logging table. This can be especially nice if you are using dynamic SQL, because you can log the SQL that was run; then, when something strange fails on prod, you have a record of which statement was run when it failed. You use a table variable because table variables are not affected by a rollback.
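Putting the @test flag and the table-variable log together, a sketch (all names invented) might look like:

-- Sketch of the @test / logging pattern described above
CREATE PROCEDURE dbo.ArchiveOldOrders
    @test bit = 0
AS
BEGIN
    DECLARE @log TABLE (Step varchar(100), LoggedAt datetime DEFAULT GETDATE())

    BEGIN TRAN
    INSERT INTO @log (Step) VALUES ('starting archive')

    DELETE FROM dbo.Orders WHERE OrderDate < '20000101'
    INSERT INTO @log (Step) VALUES ('deleted old orders')

    IF @test = 1
    BEGIN
        SELECT * FROM dbo.Orders  -- show what the data would look like
        ROLLBACK TRAN             -- undo everything in test mode
    END
    ELSE
        COMMIT TRAN

    -- survives the rollback, since table variables are not rolled back
    SELECT * FROM @log
END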
In SSMS, you can simply start by opening the proc and clicking on the check mark button (Parse) next to the Execute button on the menu bar. It reports any errors it finds.
If there are no errors there and your stored procedure is harmless to run (you're not inserting into tables, just creating a temp table, for example), then comment out the CREATE PROCEDURE x (or ALTER PROCEDURE x) line, declare all the parameters by copying that part, and define them with valid values. Then run it to see what happens.
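For example, a sketch of that technique (the parameter name and test value are placeholders):

-- Comment out the header, turn the parameters into DECLAREs with test
-- values, and run the body as a plain batch:
-- ALTER PROCEDURE dbo.spDeleteRow
--     @RowID int
-- AS
DECLARE @RowID int
SET @RowID = 42  -- a test value
-- ...the procedure body follows unchanged and can now be run directly...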
Maybe this is simple, but it's a place to start.
Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures".
What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set.
The execution of the Natural proc from SQL works fine when we're just selecting; however, when we insert the result into a table variable, SQL seems to start a distributed transaction that in turn seems to wreak havoc with our connections.
Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior?
Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?
Check out SET REMOTE_PROC_TRANSACTIONS OFF, which should disable it.
Or use sp_serveroption to configure the linked server generally, rather than per batch.
Because you are writing on the MS SQL side, you start a transaction. By default it escalates whether it needs to or not, even though the table variable does not participate in the transaction.
I've had similar issues before, where the MS SQL side behaves differently depending on whether MS SQL writes, whether it is in a stored proc, and other factors. The most reliable way I found was to use dynamic SQL calls to my Sybase linked server...
The following code sets the "Enable Promotion of Distributed Transactions" option for linked servers:
USE [master]
GO
EXEC master.dbo.sp_serveroption @server=N'REMOTE_SERVER', @optname=N'remote proc transaction promotion', @optvalue=N'false'
GO
This will allow you to insert the results of a linked server stored procedure call into a table variable.
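For instance (a sketch; the remote database, proc, and column names are placeholders):

-- With promotion disabled, this no longer tries to escalate to a DTC
-- transaction:
DECLARE @results TABLE (Col1 int, Col2 varchar(50))
INSERT INTO @results (Col1, Col2)
EXEC [REMOTE_SERVER].[RemoteDb].dbo.usp_GetData
SELECT * FROM @results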
I'm not sure about DTC, but DTSX (Integration Services) may be useful for moving the data. However, if you can simply query the data, you may want to look at adding a linked server for direct access. You could then just write a simple query to populate your table based on a select from the linked server's table.
That's true. As you might guess, the Natural procedures we want to call do lookups and calculations that we'd like to keep at that level if possible.