How do I debug a complex query with multiple nested sub-queries in SQL Server 2005?
I'm debugging a stored procedure and trigger in Visual Studio 2005. I'd like to be able to see what the results of these sub-queries are, as I feel that this is where the bug is coming from. An example query (slightly redacted) is below:
UPDATE foo
SET DateUpdated = ( SELECT TOP 1 inserted.DateUpdated FROM inserted )
    ...
FROM tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID
WHERE ProgramPhaseID = ( SELECT ... )
Visual Studio doesn't seem to offer a way for me to watch the result of the sub-query. Also, if I use a temporary table to store the results (temporary tables are used elsewhere too), I can't view the values stored in that table.
Is there any way that I can add a watch or otherwise view these sub-queries? I would love it if there were some way to "Step Into" the query itself, but I imagine that isn't possible.
OK, first, I would be leery of using subqueries in a trigger. Triggers should be as fast as possible, so get rid of any correlated subqueries, which might run row by row instead of in a set-based fashion; rewrite them as joins. If you only want to update records based on what is in the inserted table, then join to it, and also join to the table you are updating. Exactly what are you trying to accomplish with this trigger? It might be easier to give advice if we understood the business rule you are trying to implement.
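For instance, the TOP 1 subquery in the question could become a join. A minimal sketch, assuming foo also carries an EnrollmentID column (the join column is a guess, since the question's snippet is redacted):

UPDATE f
SET f.DateUpdated = i.DateUpdated
FROM foo f
JOIN inserted i ON i.EnrollmentID = f.EnrollmentID

This updates each row from its matching inserted row in one set-based pass instead of running a subquery per row.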
To debug a trigger, this is what I do.
I write a script to:
1. Do the actual insert to the table without the trigger on it.
2. Create a temp table named #inserted (and/or one named #deleted).
3. Populate #inserted as I would expect the inserted table in the trigger to be populated by the insert I do.
4. Add the trigger code (minus the CREATE TRIGGER or ALTER TRIGGER parts), substituting #inserted every time I reference inserted. (If you plan to run it multiple times before you are ready to use it in a trigger, wrap it in an explicit transaction and roll back after checking your results.)
5. Add a query to check the table(s) you are changing with the trigger for the values you wanted to change.
6. Now if I need to add debug statements to see what is happening between steps, I can do so.
7. Run it, making changes, until I get the results I want.
8. Once I have the query working as I expect it to, it is easy to take the # signs off inserted and use it as the body of the trigger.
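A minimal sketch of such a debug script, assuming the trigger lives on tblED and updates foo as in the question (the column names and test values are made up):

BEGIN TRANSACTION;

-- Step 1: the actual insert, with the trigger disabled or not yet created.
INSERT INTO tblED (EnrollmentID, DateUpdated)
VALUES (42, GETDATE());

-- Steps 2 and 3: fake the inserted pseudo-table.
CREATE TABLE #inserted (EnrollmentID INT, DateUpdated DATETIME);
INSERT INTO #inserted (EnrollmentID, DateUpdated)
VALUES (42, GETDATE());

-- Step 4: the trigger body, with #inserted substituted for inserted
-- (using the join rewrite suggested above).
UPDATE f
SET f.DateUpdated = i.DateUpdated
FROM foo f
JOIN #inserted i ON i.EnrollmentID = f.EnrollmentID;

-- Step 5: check the results, plus any debug SELECTs you need.
SELECT * FROM foo WHERE EnrollmentID = 42;

ROLLBACK TRANSACTION; -- leave the database untouched between runs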
This is what I usually do in this type of scenario:
Print out the exact SQL generated by each subquery.
Then run each of them in Management Studio as suggested above.
Check whether each part is giving you the data you expect.
I saw the block of SQL below in an article on this page:
select @empname = d.Emp_Name
from deleted d
How do you put something like @empname = d.Emp_Name in a SELECT statement?
What does it do? Is it another way of filtering, instead of using a WHERE clause?
Plus, it looks like SQL Server syntax, but just to be sure: is this something that can also be done in MySQL or some other DBMS?
You can assign to a variable this way...
...but this is probably a mistake. This assignment convention is typically used only when you expect exactly one row. However, the deleted table name indicates a likely trigger, and SQL Server will sometimes batch up individual deletions in a single call to the trigger, such that the table has multiple rows. Therefore this trigger is likely not correctly processing some deletions.
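A short illustration of both the assignment and the pitfall, reusing the names from the snippet (the VARCHAR length is arbitrary):

DECLARE @empname VARCHAR(100);

-- Assigns Emp_Name to the variable; fine when deleted holds exactly one row.
SELECT @empname = d.Emp_Name
FROM deleted d;

-- If the trigger fires for a multi-row DELETE, @empname ends up holding the
-- value from whichever row happened to be processed last; the rest are
-- silently ignored.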
I'd like to find out how to update a temporary table and then display the result, so as to avoid making permanent changes to the database.
So far I got the following:
WITH new_salary AS (
    SELECT ID, NAME, DEPT_NAME, SALARY
    FROM INSTRUCTOR
    WHERE DEPT_NAME = 'Comp. Sci.'
)
SELECT *
FROM new_salary
WHERE DEPT_NAME = 'Comp. Sci.';
Now here is where it ends. I want to update this temporary table and show the updated version of it, so as to avoid changing the actual database. All my attempts at using the UPDATE clause have failed, so I am kind of dumbfounded :/
This part I am currently trying to do is not part of homework. It's just me who doesn't want to have to re-do the database over and over.
How would I go about doing this?
I guess you have two options:
You write a procedure which first checks whether it needs to update the table; after calling it, you execute the query.
You create a pipelined function which does the checking and returns the data. You could integrate this into the select like this (assuming the pipelined function is called pipelined_function_name):
select *
from table(pipelined_function_name)
;
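A rough sketch of that second option in Oracle, using the INSTRUCTOR table from the question (the object type, the 10% raise, and the column sizes are all made-up assumptions):

CREATE TYPE instructor_row AS OBJECT (
    id        NUMBER,
    name      VARCHAR2(50),
    dept_name VARCHAR2(50),
    salary    NUMBER
);
/
CREATE TYPE instructor_tab AS TABLE OF instructor_row;
/
CREATE OR REPLACE FUNCTION adjusted_salary RETURN instructor_tab PIPELINED IS
BEGIN
    FOR r IN (SELECT id, name, dept_name, salary
              FROM instructor
              WHERE dept_name = 'Comp. Sci.') LOOP
        -- Apply the "update" in memory only; the INSTRUCTOR table is untouched.
        PIPE ROW (instructor_row(r.id, r.name, r.dept_name, r.salary * 1.10));
    END LOOP;
    RETURN;
END;
/

SELECT * FROM TABLE(adjusted_salary);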
I need to merge two tables in the following way:
The target has one extra column, Id. This Id comes from another single-column master table.
While inserting a record in the MERGE statement, I need to INSERT a new row into the master table and use its id to insert into the TARGET table.
I have created a stored procedure that inserts and returns the newly inserted Id. The problem is that we can't call a stored procedure inside a SQL MERGE.
What could be the solution to this issue? I can't use scalar functions, as an INSERT can't be performed inside a function.
DECLARE @temp INT

MERGE dbo.mytabletarget T
USING dbo.mytableSource S
ON T.refId = S.RefId
WHEN MATCHED THEN
    UPDATE
    SET T.col1 = S.col1,
        T.Col2 = S.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, col1, col2)
    VALUES ({here I need the value from the SP; the SP simply inserts a new Id into the master table and returns it}, S.col1, S.col2);
GO
What could be the solution to this issue?
Do not use a stored procedure. Obvious, isn't it?
For a merge statement, you pretty much are stuck with doing the commands right there in the statement. Merge focuses on ETL loads and has advantages as well as limitations.
Basically, put the logic into the merge statement.
While inserting the record in the MERGE statement I need to INSERT a new row into the master table and use its id to insert into the TARGET table.
Hm, lookup table maintenance?
The regular approach for that is to make sure the lookup table is filled first (in a separate statement). ETL (and that is where merge comes from) often works in stages for that particular reason.
Sorry, I do not have a better solution either ;(
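For what it's worth, a hedged sketch of that staged approach, reusing the names from the question (the master table is assumed to be dbo.MasterTable with a single IDENTITY column Id, and the target is assumed to carry a refId column):

-- 1) Number the source rows that are new to the target.
SELECT s.RefId, ROW_NUMBER() OVER (ORDER BY s.RefId) AS rn
INTO #new
FROM dbo.mytableSource s
WHERE NOT EXISTS (SELECT 1 FROM dbo.mytabletarget t WHERE t.refId = s.RefId);

-- 2) Mint one master id per new row (row by row, since the master table
--    has no other columns to drive a set-based insert).
DECLARE @ids TABLE (rn INT IDENTITY(1,1), Id INT);
DECLARE @i INT = 1, @n INT = (SELECT COUNT(*) FROM #new);
WHILE @i <= @n
BEGIN
    INSERT INTO dbo.MasterTable DEFAULT VALUES;
    INSERT INTO @ids (Id) VALUES (SCOPE_IDENTITY());
    SET @i = @i + 1;
END;

-- 3) The MERGE can then look the id up instead of calling a procedure.
--    (refId is added to the insert list so future merges can match;
--    the question's snippet omitted it.)
MERGE dbo.mytabletarget T
USING (
    SELECT s.RefId, s.col1, s.col2, i.Id AS NewId
    FROM dbo.mytableSource s
    LEFT JOIN #new n ON n.RefId = s.RefId
    LEFT JOIN @ids i ON i.rn = n.rn
) S
ON T.refId = S.RefId
WHEN MATCHED THEN
    UPDATE
    SET T.col1 = S.col1,
        T.Col2 = S.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, refId, col1, col2)
    VALUES (S.NewId, S.RefId, S.col1, S.col2);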
I have a dozen tables for which I want to keep a history of changes. For each one I created a second table with the suffix _HISTO and added the fields modtime, action, and user.
At the moment, before I insert, modify or delete a record in these tables, I call (from my Delphi app) an Oracle procedure that copies the current values to the histo table and then performs the operation.
My procedure generates dynamic SQL via DBA_TAB_COLUMNS and then executes the generated statement ( insert into tablename_histo ( fields ) select fields, sysdate, 'action', userid from table_name ).
I was told that I cannot call this procedure from a trigger because it has to select from the table the trigger is defined on. Is this true? Is it possible to implement what I need?
Assuming you want to maintain history using triggers (rather than any of the other methods of tracking history data in Oracle: Workspace Manager, Total Recall, Streams, Fine-Grained Auditing, etc.), you can use dynamic SQL in the trigger. But dynamic SQL is subject to the same rules that static SQL is subject to, and even static SQL in a row-level trigger cannot, in general, query the table that the trigger is defined on without generating a mutating table exception.
Rather than calling dynamic SQL from your trigger, however, you can potentially write some dynamic SQL that generates the trigger in the first place using the same data dictionary tables. The triggers themselves would statically refer to :new.column_name and :old.column_name. Of course, you would have to either edit the trigger or re-run the procedure that dynamically creates the trigger when a new column gets added. Since you, presumably, need to add the column to both the main table and the history table, however, this generally isn't too big of a deal.
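A rough sketch of that generation step, assuming a table MY_TABLE, a history table MY_TABLE_HISTO, and audit columns modtime, action, and user_id (the names beyond the question's are assumptions; for brevity this version logs only :old values on update and delete):

DECLARE
    l_cols VARCHAR2(4000) := '';
    l_vals VARCHAR2(4000) := '';
BEGIN
    FOR c IN (SELECT column_name
              FROM user_tab_columns
              WHERE table_name = 'MY_TABLE'
              ORDER BY column_id)
    LOOP
        l_cols := l_cols || c.column_name || ', ';
        l_vals := l_vals || ':old.' || c.column_name || ', ';
    END LOOP;

    -- The generated trigger refers to :old statically; no dynamic SQL
    -- remains once it has been created.
    EXECUTE IMMEDIATE
        'CREATE OR REPLACE TRIGGER my_table_histo_trg'
        || ' AFTER UPDATE OR DELETE ON my_table'
        || ' FOR EACH ROW'
        || ' BEGIN'
        || '   INSERT INTO my_table_histo (' || l_cols || 'modtime, action, user_id)'
        || '   VALUES (' || l_vals || 'SYSDATE,'
        || '           CASE WHEN UPDATING THEN ''UPDATE'' ELSE ''DELETE'' END,'
        || '           USER);'
        || ' END;';
END;
/

Re-run the block whenever a column is added to both tables.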
Oracle does not allow a trigger to execute a SELECT against the table on which the trigger is defined. If you try, you'll get the dreaded "mutating table" error (ORA-04091), and while there are ways to get around that error, they add a lot of complexity for little value. If you really want to build a dynamic query every time your table is updated (IMO this is a bad idea from the standpoint of performance; I find that metadata queries are often slow, but YMMV), it should end up looking something like
strAction := CASE
                 WHEN INSERTING THEN 'INSERT'
                 WHEN UPDATING THEN 'UPDATE'
                 WHEN DELETING THEN 'DELETE'
             END;

INSERT INTO TABLENAME_HISTO
    (ACTIVITY_DATE, ACTION, MTC_USER,
     old_field1, new_field1, old_field2, new_field2)
VALUES
    (SYSDATE, strAction, USERID,
     :OLD.field1, :NEW.field1, :OLD.field2, :NEW.field2);
Share and enjoy.
I am using a function in a stored procedure. The procedure contains a transaction that updates and inserts rows into a table, while the function called from the procedure also reads from that same table.
The procedure hangs when it calls the function.
Is there any solution for this?
If I'm hearing you right, you're talking about an insert BLOCKING ITSELF, not two separate queries blocking each other.
We had a similar problem, an SSIS package was trying to insert a bunch of data into a table, but was trying to make sure those rows didn't already exist. The existing code was something like (vastly simplified):
INSERT INTO bigtable
SELECT r.customerid, r.productid, ...
FROM rawtable r
WHERE NOT EXISTS (SELECT 1
                  FROM bigtable b
                  WHERE b.customerid = r.customerid
                    AND b.productid = r.productid)
AND ... (other conditions)
This ended up blocking itself because the select on the WHERE NOT EXISTS was preventing the INSERT from occurring.
We considered a few different options, I'll let you decide which approach works for you:
Change the transaction isolation level (see this MSDN article). Our SSIS package defaulted to SERIALIZABLE, which is the most restrictive. (Note: be aware of the issues with READ UNCOMMITTED or NOLOCK before you choose this option.)
Create a UNIQUE index with IGNORE_DUP_KEY = ON (sketched below). This means we can insert ALL rows (and remove the "WHERE NOT EXISTS" clause altogether). Duplicates will be rejected, but the batch won't fail completely, and all other valid rows will still insert.
Change your query logic to do something like put all candidate rows into a temp table, then delete all rows that are already in the destination, then insert the rest.
In our case, we already had the data in a temp table, so we simply deleted the rows we didn't want inserted, and did a simple insert on the rest.
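A quick sketch of options 1 and 2 (the index name is made up, and the key columns are guesses based on the example above):

-- Option 1: loosen the isolation level for the load.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Option 2: let the engine discard duplicate keys instead of failing the batch.
CREATE UNIQUE INDEX IX_bigtable_customer_product
ON bigtable (customerid, productid)
WITH (IGNORE_DUP_KEY = ON);

-- With the index in place, the insert no longer needs the NOT EXISTS check:
INSERT INTO bigtable
SELECT r.customerid, r.productid -- ... other columns
FROM rawtable r;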
This can be difficult to diagnose. Microsoft has provided some information here:
INF: Understanding and resolving SQL Server blocking problems
A brute force way to kill the connection(s) causing the lock is documented here:
http://shujaatsiddiqi.blogspot.com/2009/01/killing-sql-server-process-with-x-lock.html
Some more Microsoft info here: http://support.microsoft.com/kb/323630
How big is the table? Do you have the same problem if you call the procedure from a separate window? Maybe the problem is related to the amount of data the procedure is working with and a lack of indexes.