I'm working on improving part of an existing ETL layer in Oracle. The process is:

1. A file is loaded into a temporary table.
2. Several MERGE statements are executed to resolve surrogate keys.
3. Some other business logic is applied (which requires those surrogate keys).
4. The results are MERGEd into a target table (with both the surrogate keys and the business logic results).

It's step 2 that I want to improve; it seems less than ideal to do this as several separate statements:
MERGE INTO temp t
USING dimension_1 d1 ON (d1.natural_key = t.d1_natural_key)
WHEN MATCHED THEN UPDATE SET t.d1_id = d1.id;

MERGE INTO temp t
USING dimension_2 d2 ON (d2.natural_key = t.d2_natural_key)
WHEN MATCHED THEN UPDATE SET t.d2_id = d2.id;

MERGE INTO temp t
USING dimension_3 d3 ON (d3.natural_key = t.d3_natural_key)
WHEN MATCHED THEN UPDATE SET t.d3_id = d3.id;
If I was writing this in SQL Server I'd do something like the following:
UPDATE
t
SET
d1_id = COALESCE(d1.id, -1),
d2_id = COALESCE(d2.id, -1),
d3_id = COALESCE(d3.id, -1)
FROM
temp t
LEFT JOIN
dimension_1 d1
ON d1.natural_key = t.d1_natural_key
LEFT JOIN
dimension_2 d2
ON d2.natural_key = t.d2_natural_key
LEFT JOIN
dimension_3 d3
ON d3.natural_key = t.d3_natural_key
For the life of me I can't find what seems like a sensible option in Oracle. The best I have been able to work out is to use UPDATE (while everyone around me is screaming that I 'must' use MERGE) and correlated sub-queries; something like...
UPDATE
temp t
SET
d1_id = COALESCE((SELECT id FROM dimension_1 d1 WHERE d1.natural_key = t.d1_natural_key), -1),
d2_id = COALESCE((SELECT id FROM dimension_2 d2 WHERE d2.natural_key = t.d2_natural_key), -1),
d3_id = COALESCE((SELECT id FROM dimension_3 d3 WHERE d3.natural_key = t.d3_natural_key), -1)
Are there any better alternatives? Or is the correlated sub-query approach actually performant in Oracle?
I think the equivalent of your SQL Server update would be:
UPDATE
temp t1
SET
(d1_id, d2_id, d3_id) = (
SELECT
COALESCE(d1.id, -1),
COALESCE(d2.id, -1),
COALESCE(d3.id, -1)
FROM
temp t2
LEFT JOIN
dimension_1 d1
ON d1.natural_key = t2.d1_natural_key
LEFT JOIN
dimension_2 d2
ON d2.natural_key = t2.d2_natural_key
LEFT JOIN
dimension_3 d3
ON d3.natural_key = t2.d3_natural_key
WHERE
t2.id = t1.id
)
It's still a correlated update; the joining takes place in the subquery, since Oracle doesn't let you join as part of the update itself. Normally you wouldn't need (or want) to refer to the target outer table again in the subquery, but you need something to outer-join against here.
You can also combine the left-join approach with a merge, putting essentially the same subquery into the using clause:
MERGE INTO temp t
USING (
SELECT t.id,
COALESCE(d1.id, -1) AS d1_id,
COALESCE(d2.id, -1) AS d2_id,
COALESCE(d3.id, -1) AS d3_id
FROM
temp t
LEFT JOIN
dimension_1 d1
ON d1.natural_key = t.d1_natural_key
LEFT JOIN
dimension_2 d2
ON d2.natural_key = t.d2_natural_key
LEFT JOIN
dimension_3 d3
ON d3.natural_key = t.d3_natural_key
) d
ON (d.id = t.id)
WHEN MATCHED THEN UPDATE SET
t.d1_id = d.d1_id,
t.d2_id = d.d2_id,
t.d3_id = d.d3_id
I don't see any real benefit of using merge over update in this case though.
Both will overwrite any existing values in your three ID columns, but it sounds like you are not expecting there to be any.
I believe this may be more efficient than Alex's answer -- requiring only one access of the temp table, instead of two. On my quick test of a million rows, performance was about the same, but the plan is better since there is no second access of the temp table. It may be worth trying on your data set.
UPDATE
( SELECT d1.id s_d1_id,
d2.id s_d2_id,
d3.id s_d3_id,
mt.d1_id,
mt.d2_id,
mt.d3_id
FROM temp mt
LEFT JOIN dimension_1 d1 ON d1.natural_key = mt.d1_natural_key
LEFT JOIN dimension_2 d2 ON d2.natural_key = mt.d2_natural_key
LEFT JOIN dimension_3 d3 ON d3.natural_key = mt.d3_natural_key )
SET d1_id = COALESCE (s_d1_id, -1), d2_id = COALESCE (s_d2_id, -1), d3_id = COALESCE (s_d3_id, -1);
The caveat is, you need UNIQUE constraints on the natural_key columns in each dimension table. With these constraints, Oracle knows that temp is key-preserved in the view you are updating, which is what makes the above syntax OK.
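For reference, those constraints might be created like this (the constraint names are hypothetical; the natural_key columns come from the question):

```sql
-- Hypothetical constraint names; one unique natural key per dimension
ALTER TABLE dimension_1 ADD CONSTRAINT uq_dim1_nk UNIQUE (natural_key);
ALTER TABLE dimension_2 ADD CONSTRAINT uq_dim2_nk UNIQUE (natural_key);
ALTER TABLE dimension_3 ADD CONSTRAINT uq_dim3_nk UNIQUE (natural_key);
```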
One other caveat: I once encountered a situation where the rows from the SELECT view were not in the same order as the table. The result was that performance tanked, as the update had to revisit each block several times. An ORDER BY temp.rowid in the SELECT view would fix that.
Related
I have a flag in a table whose value (1 for US, 2 for Global) indicates whether the data will be in Table A or Table B.
A solution that works is to left join both tables; however, this slows the script down significantly (from less than a second to over 15 seconds).
Is there any other clever way to do this? An equivalent of:
join TableA only if TableCore.CountryFlag = "US"
join TableB only if TableCore.CountryFlag = "global"
Thanks a lot for the help.
You can try using this approach:
-- US data
SELECT
YourColumns
FROM
TableCore
INNER JOIN TableA AS T ON TableCore.JoinColumn = T.JoinColumn
WHERE
TableCore.CountryFlag = 'US'
UNION ALL
-- Non-US Data
SELECT
YourColumns -- These columns must match in number and datatype with previous SELECT
FROM
TableCore
INNER JOIN TableB AS T ON TableCore.JoinColumn = T.JoinColumn
WHERE
TableCore.CountryFlag = 'global'
However, if the result is still slow, you might want to check whether TableCore has an index on CountryFlag and JoinColumn, and whether TableA and TableB have an index on JoinColumn.
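As a sketch, assuming CountryFlag and JoinColumn are the only columns involved (the index names are made up):

```sql
-- Hypothetical index names; composite index on the filter + join columns
CREATE INDEX IX_TableCore_Flag_Join ON TableCore (CountryFlag, JoinColumn);
CREATE INDEX IX_TableA_Join ON TableA (JoinColumn);
CREATE INDEX IX_TableB_Join ON TableB (JoinColumn);
```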
The basic structure is:
select . . ., coalesce(a.?, b.?) as ?
from tablecore c left join
tablea a
on c.? = a.? and c.countryflag = 'US' left join
tableb b
on c.? = b.? and c.countryflag = 'global';
This version of the query can take advantage of indexes on tablea(?) and tableb(?).
If you have a complex query, this portion is probably not responsible for the performance problem.
I am looking for some tips/tricks to improve performance of a stored procedure with multiple SELECT statements inserting into a table. All objects I am joining on are already indexed.
I believe the reason this stored procedure takes almost an hour to run is that there are multiple SELECT statements using the following two views: rvw_FinancialLineItemValues and rvw_FinancialLineItems.
Also, each SELECT statement uses specific hard-coded values for AccountNumber, LineItemTypeID, and a few other fields coming from the two views mentioned above.
Would it improve performance if I create a temporary table, which gets ALL data needed for these SELECT statements at once and then using this temporary table in my join instead?
Are there any other ways to improve performance and manageability?
SELECT
@scenarioid,
@portfolioid,
pa.Id,
pa.ExternalID,
(select value from fn_split(i.AccountNumber,'.') where id = 1),
ac.[Description],
cl.Name,
NullIf((select value from fn_split(i.AccountNumber,'.') where id = 2),''),
NullIf((select value from fn_split(i.AccountNumber,'.') where id = 3),''),
ty.Name,
v.[Date],
cast(SUM(v.Amount) as decimal(13,2)),
GETDATE()
FROM rvw_FinancialLineItems i
INNER JOIN rvw_Scenarios sc
ON i.ScenarioId = sc.Id
AND sc.Id = @scenarioid
AND sc.PortfolioId = @portfolioid
INNER JOIN #pa AS pa
ON i.PropertyAssetID = pa.Id
INNER JOIN rvw_FinancialLineItemValues v
ON i.ScenarioId = v.ScenarioId
AND i.PropertyAssetID = v.PropertyAssetID
AND i.Id = v.FinancialLineItemId
AND ((i.BusinessEntityTypeId = 11
AND i.LineItemTypeId = 3002)
OR (i.LineItemTypeId IN (2005, 2010, 2003, 2125, 2209, 5012, 6001)
AND i.ModeledEntityKey = 1))
AND i.AccountNumber not in ('401ZZ','403ZZ')
AND i.AccountNumber not in ('401XX')
AND i.AccountNumber not in ('40310','41110','42010','41510','40190','40110') -- exclude lease-level revenues selected below
AND v.[Date] BETWEEN @fromdate AND
CASE
WHEN pa.AnalysisEnd < @todate THEN pa.AnalysisEnd
ELSE @todate
END
AND v.ResultSet IN (0, 4)
INNER JOIN rvw_Portfolios po
ON po.Id = @portfolioid
INNER JOIN Accounts ac
ON po.ChartOfAccountId = ac.ChartOfAccountId
AND i.AccountNumber = ac.AccountNumber
AND ac.HasSubAccounts = 0
INNER JOIN fn_LookupClassTypes() cl
ON ac.ClassTypeId = cl.Id
INNER JOIN LineItemTypes ty
ON ac.LineItemTypeId = ty.Id
LEFT JOIN OtherRevenues r
ON i.PropertyAssetID = r.PropertyAssetID
AND i.AccountNumber = r.AccountID
AND v.[Date] BETWEEN r.[Begin] AND r.[End]
WHERE (r.IsMemo IS NULL
OR r.IsMemo = 0)
GROUP BY pa.AnalysisBegin
,pa.Id
,pa.ExternalID
,i.AccountNumber
,ac.[Description]
,cl.Name
,ty.Name
,v.[Date]
HAVING SUM(v.amount) <> 0
You should run your query with SET SHOWPLAN_ALL ON or use Management Studio's Save Execution Plan, and look for inefficiencies.
There are some resources online that help in analyzing the results, such as:
http://www.sqlservercentral.com/articles/books/65831/
See also How do I obtain a Query Execution Plan?
First thing, which fn_split() UDF are you using? If you are not using an inline table-valued UDF, this is notoriously slow.
Second, is the UDF fn_LookupClassTypes() an inline table-valued UDF? If not, convert it to one.
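For illustration only, an inline table-valued UDF is a single RETURN (SELECT ...); the body below is a guess at what fn_LookupClassTypes might contain:

```sql
-- Sketch of an inline TVF; the optimizer can expand it into the
-- calling query like a view, unlike a multi-statement TVF.
CREATE FUNCTION dbo.fn_LookupClassTypes()
RETURNS TABLE
AS
RETURN
(
    SELECT Id, Name
    FROM dbo.ClassTypes  -- assumed source table
);
```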
Last, your SQL query had some redundancies. Try this and see what it does.
SELECT @scenarioid, @portfolioid, pa.Id, pa.ExternalID,
(select value from fn_split(i.AccountNumber,'.')
where id = 1), ac.[Description], cl.Name,
NullIf((select value from fn_split(i.AccountNumber,'.')
where id = 2),''),
NullIf((select value from fn_split(i.AccountNumber,'.')
where id = 3),''), ty.Name, v.[Date],
cast(SUM(v.Amount) as decimal(13,2)), GETDATE()
FROM rvw_FinancialLineItems i
JOIN rvw_Scenarios sc ON sc.Id = i.ScenarioId
JOIN #pa AS pa ON pa.Id = i.PropertyAssetID
JOIN rvw_FinancialLineItemValues v
ON v.ScenarioId = i.ScenarioId
AND v.PropertyAssetID = i.PropertyAssetID
AND v.FinancialLineItemId = i.Id
JOIN rvw_Portfolios po ON po.Id = sc.portfolioid
JOIN Accounts ac
ON ac.ChartOfAccountId = po.ChartOfAccountId
AND ac.AccountNumber = i.AccountNumber
JOIN fn_LookupClassTypes() cl On cl.Id = ac.ClassTypeId
JOIN LineItemTypes ty On ty.Id = ac.LineItemTypeId
Left JOIN OtherRevenues r
ON r.PropertyAssetID = i.PropertyAssetID
AND r.AccountID = i.AccountNumber
AND v.[Date] BETWEEN r.[Begin] AND r.[End]
WHERE i.ScenarioId = @scenarioid
and ac.HasSubAccounts = 0
and sc.PortfolioId = @portfolioid
and IsNull(r.IsMemo, 0) = 0
and v.ResultSet In (0, 4)
and i.AccountNumber not in
('401XX', '401ZZ','403ZZ','40310','41110',
'42010','41510','40190','40110')
and v.[Date] BETWEEN @fromdate AND
CASE WHEN pa.AnalysisEnd < @todate
THEN pa.AnalysisEnd ELSE @todate END
and ((i.LineItemTypeId = 3002 and i.BusinessEntityTypeId = 11) OR
(i.ModeledEntityKey = 1 and i.LineItemTypeId IN
(2005, 2010, 2003, 2125, 2209, 5012, 6001)))
GROUP BY pa.AnalysisBegin,pa.Id, pa.ExternalID, i.AccountNumber,
ac.[Description],cl.Name,ty.Name,v.[Date]
HAVING SUM(v.amount) <> 0
I would look to the following first. What are the wait types relevant to your stored procedure here? Are you seeing a lot of disk io time? Are things being done in memory? Maybe there's network latency pulling that much information.
Next, what does the plan look like for the procedure, where does it show all the work is being done?
The views definitely could be an issue as you mentioned. You could maybe have pre-processed tables so you don't have to do as many joins. Specifically the joins where you are seeing the most amount of CPU spent.
Correlated subqueries are generally slow and should be avoided when you are aiming for performance. Use fn_split to create a temp table, index it if need be, and then join to it to get the value you need. You might need to join multiple times for different values; without actually knowing the data, I am having a hard time visualizing it.
It is also not good for performance to use OR. Use UNION ALL in a derived table instead.
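Applied to the OR'd predicates in your query, the derived table might look roughly like this (a sketch only; the column list is abbreviated):

```sql
-- Sketch: split the OR into two UNION ALL branches so each branch
-- can use its own index seek
SELECT i.Id, i.AccountNumber
FROM (
    SELECT Id, ScenarioId, PropertyAssetID, AccountNumber
    FROM rvw_FinancialLineItems
    WHERE BusinessEntityTypeId = 11
      AND LineItemTypeId = 3002
    UNION ALL
    SELECT Id, ScenarioId, PropertyAssetID, AccountNumber
    FROM rvw_FinancialLineItems
    WHERE ModeledEntityKey = 1
      AND LineItemTypeId IN (2005, 2010, 2003, 2125, 2209, 5012, 6001)
) AS i;
```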
Since you have all those conditions on the view rvw_FinancialLineItems, yes, it might work to pull those rows out to a temp table and then index the temp table.
You might also consider whether using the views is even a good idea. Often views join to many tables that you aren't getting data from, and are thus less performant than querying only the tables you actually need. This is especially true if your organization was dumb enough to make views that call views.
Tables and requested output
I'm using the National Instruments TestStand default database setup. I've tried to simplify the DB layout in the picture above.
I can manage to get what I want through some rather "complicated" SQL, but it's very slow.
I think there is a better way, and then I stumbled over SELF JOIN. Basically what I want is to get data values from several different rows for one serial number.
My problem is combining the self join with the "general" join of my tables.
I'm using an Access database at the moment.
This will give you the output you're aiming for with the sample data:
with x as (
select
row_number() over (partition by t1.Serial order by t1.Serial) as [RN],
t1.Serial,
case when t3.Sub_Test_Name = 'AAA' then t3.Value end as [AAA],
case when t3.Sub_Test_Name = 'BBB' then t3.Value end as [BBB],
case when t3.Sub_Test_Name = 'CCC' then t3.Value end as [CCC],
case when t3.Sub_Test_Name = 'DDD' then t3.Value end as [DDD]
from Table_1 t1
inner join Table_2 t2 on t2.Table_1_Id = t1.Id
inner join Table_3 t3 on t3.Table_2_Id = t2.Id
)
select
x.Serial,
AAA.AAA,
BBB.BBB,
CCC.CCC,
DDD.DDD
from x
left outer join x AAA on AAA.Serial = x.Serial and AAA.RN = x.rn + 0
left outer join x BBB on BBB.Serial = x.Serial and BBB.RN = x.rn + 1
left outer join x CCC on CCC.Serial = x.Serial and CCC.RN = x.rn + 2
left outer join x DDD on DDD.Serial = x.Serial and DDD.RN = x.rn + 3
where x.rn = 1
This uses self joins as you mentioned (where you see x being left joined to itself multiple times in the final select statement).
I've deliberately added extra columns CCC and DDD so it is easier to see how you would build this out for a larger data set, incrementing the row_number offset for each join.
I've tested this in SQL Fiddle and you're welcome to play around with it. If you need to apply additional filters, your where clause should be placed inside the CTE.
Note, you're effectively pivoting the data with this sort of query (except we're not aggregating anything, so we can't use the built-in PIVOT option). The downside of both this method and real pivots is that you have to manually specify every column header with its own CASE statement in the CTE, plus a left join in the final select statement. This can get unwieldy in medium to large data sets, so it is best suited to cases where you have a small number of known column headers in your results.
SQL Gurus,
I have a query that uses the "old" style of join syntax across 7 tables (table and column names changed to protect the innocent):
SELECT v1_col, p1_col
FROM p1_tbl, p_tbl, p2_tbl, p3_tbl, v1_tbl, v2_tbl, v3_tbl
WHERE p1_code = 1
AND v1_code = 1
AND p1_date >= v1_date
AND p_uid = p1_uid
AND p2_uid = p1_uid AND p2_id = v2_id
AND p3_uid = p1_uid AND p3_id = v3_id
AND v2_uid = v1_uid
AND v3_uid = v1_uid
The query works just fine and produces the results it is supposed to, but as an academic exercise, I tried to rewrite the query using the more standard JOIN syntax, for example, below is one version I tried:
SELECT V1.v1_col, P1.p1_col
FROM p1_tbl P1, v1_tbl V1
JOIN p_tbl P ON ( P.p_uid = P1.p1_uid )
JOIN p2_tbl P2 ON ( P2.p2_uid = P1.p1_uid AND P2.p2_id = V2.v2_id )
JOIN p3_tbl P3 ON ( P3.p3_uid = P1.p1_uid AND P3.p3_id = V3.v3_id )
JOIN v2_tbl V2 ON ( V2.v2_uid = V1.v1_uid )
JOIN v3_tbl V3 ON ( V3.v3_uid = V1.v1_uid )
WHERE P1.p1_code = 1
AND V1.v1_code = 1
AND P1.p1_date >= V1.v1_date
But, no matter how I arrange the JOINs (using MS SQL 2008 R2), I keep running into the error:
The Multi-part identifier "col-name" could not be bound,
where "col-name" varies depending on the order of the JOINs I am attempting...
Does anyone have any good examples on how use the JOIN syntax with this number of tables??
Thanks in advance!
When you use JOIN syntax you can only access columns from tables in the current join or previous joins. The old syntax is actually easier to write, but it's more error-prone; e.g. you can easily forget a join condition.
This should be what you want.
SELECT v1_col, p1_col
FROM p1_tbl
JOIN v1_tbl ON p1_date >= v1_date
JOIN v2_tbl ON v2_uid = v1_uid
JOIN v3_tbl ON v3_uid = v1_uid
JOIN p_tbl ON p_uid = p1_uid
JOIN p2_tbl ON p2_uid = p1_uid AND p2_id = v2_id
JOIN p3_tbl ON p3_uid = p1_uid AND p3_id = v3_id
WHERE p1_code = 1
AND v1_code = 1
You are not naming the tables in your join such that it doesn't know which column is from which table. Try something like:
SELECT a.v1_col, b.p1_col
FROM p1_tbl b
JOIN p_tbl a ON b.p_uid = a.p1_uid
WHERE b.p1_code = 1
From your query above, I am assuming a naming convention where p2_uid comes from p2_tbl. Below is my best interpretation of converting the WHERE joins to INNER joins.
SELECT
v1_col, p1_col
FROM
p1_tbl
INNER JOIN p_tbl
ON p_tbl.p_uid = p1_tbl.p1_uid
INNER JOIN p2_tbl
ON p2_tbl.p2_uid = p1_tbl.p1_uid
INNER JOIN v2_tbl
ON p2_tbl.p2_id = v2_tbl.v2_id
INNER JOIN p3_tbl
ON p3_tbl.p3_uid = p1_tbl.p1_uid
INNER JOIN v3_tbl
ON p3_tbl.p3_id = v3_tbl.v3_id
INNER JOIN v1_tbl
ON v1_tbl.v1_uid = v2_tbl.v2_uid
AND v1_tbl.v1_uid = v3_tbl.v3_uid
AND p1_tbl.p1_date >= v1_tbl.v1_date
WHERE
p1_code = 1
AND v1_code = 1
Some general points I have found useful in SQL statements with many joins:
Always fully qualify the names, i.e. don't use ID; rather use TableName.ID.
Don't use aliases unless there is meaning (i.e. joining a table to itself, where aliasing is needed).
I have a deadlock when I execute this stored procedure:
-- Delete transactions
delete from ADVICESEQUENCETRANSACTION
where ADVICESEQUENCETRANSACTION.id in (
select TR.id from ADVICESEQUENCETRANSACTION TR
inner join ACCOUNTDESCRIPTIONITEM IT on TR.ACCOUNTDESCRIPTIONITEMID = IT.id
inner join ACCOUNTDESCRIPTION ACC on IT.ACCOUNTDESCRIPTIONID = ACC.id
inner join RECOMMENDATIONDESCRIPTION RD on ACC.RECOMMENDATIONDESCRIPTIONID = RD.id
inner join RECOMMENDATION REC on REC.id = RD.RECOMMENDATIONID
inner join ADVICESEQUENCE ADV on ADV.id = REC.ADVICESEQUENCEID
where adv.Id = @AdviceSequenceId AND (@RecommendationState is NULL OR @RecommendationState = REC.[State])
);
Here is the schema of the table:
Here is the deadlock graph:
You can see the detail of the deadlock graph here.
So, when I retrieve the associatedobjid of the resource node, I identify that it's the primary key and an index of the table AdviceSequenceTransaction:
SELECT OBJECT_SCHEMA_NAME([object_id]), * ,
OBJECT_NAME([object_id])
FROM sys.partitions
WHERE partition_id = 72057595553120256 OR partition_id = 72057595553316864;
SELECT name FROM sys.indexes WHERE object_id = 31339176 and (index_id = 1 or index_id = 4)
PK_AdviceSequenceTransaction
IX_ADVICESEQUENCEID_ADVICE
As there is a relationship on the table AdviceSequenceTransaction between the ParentTransactionId key and the primary key, I have created an index on the column ParentTransactionId.
And I have no more deadlock. But the problem is I don't know exactly why there is no more deadlock :-/
Moreover, in the data set I use to test it, there is no data in ParentTransactionId; all the values are NULL.
So, even if there is no data (NULL) in ParentTransactionId, does SQL Server still access the primary key?
Another thing is that I want to remove a join in the delete statement:
delete from ADVICESEQUENCETRANSACTION
where ADVICESEQUENCETRANSACTION.id in (
select TR.id from ADVICESEQUENCETRANSACTION TR
inner join ACCOUNTDESCRIPTIONITEM IT on TR.ACCOUNTDESCRIPTIONITEMID = IT.id
inner join ACCOUNTDESCRIPTION ACC on IT.ACCOUNTDESCRIPTIONID = ACC.id
inner join RECOMMENDATIONDESCRIPTION RD on ACC.RECOMMENDATIONDESCRIPTIONID = RD.id
inner join RECOMMENDATION REC on REC.id = RD.RECOMMENDATIONID
inner join ADVICESEQUENCE ADV on ADV.id = REC.ADVICESEQUENCEID
where adv.Id = @AdviceSequenceId AND (@RecommendationState is NULL OR @RecommendationState = REC.[State])
);
into :
delete from ADVICESEQUENCETRANSACTION
where ADVICESEQUENCETRANSACTION.id in (
select TR.id from ADVICESEQUENCETRANSACTION TR
inner join ACCOUNTDESCRIPTIONITEM IT on TR.ACCOUNTDESCRIPTIONITEMID = IT.id
inner join ACCOUNTDESCRIPTION ACC on IT.ACCOUNTDESCRIPTIONID = ACC.id
inner join RECOMMENDATIONDESCRIPTION RD on ACC.RECOMMENDATIONDESCRIPTIONID = RD.id
inner join RECOMMENDATION REC on REC.id = RD.RECOMMENDATIONID
where TR.AdviceSequenceId = @AdviceSequenceId AND (@RecommendationState is NULL OR @RecommendationState = REC.[State])
);
I removed the last join. But if I do this, I have again the deadlock ! And here again, I don't know why...
Thank you for your enlightenment :)
Using a complex, compound join in your WHERE clause can often cause problems. SQL Server processes the clauses using the following Logical Processing Order (view here):
FROM
ON
JOIN
WHERE
GROUP BY
WITH CUBE or WITH ROLLUP
HAVING
SELECT
DISTINCT
ORDER BY
TOP
Using derived tables or views greatly reduces the number of iterative (table) scans required to obtain the desired result, because the emphasis in your query is better aligned with the logical execution order. The FROM clause for each of the derived tables (or views) is executed first, limiting the result set passed to the ON clause, then the JOIN clause and so on, because you're passing your parameters to an "inside FROM" rather than the "outermost WHERE".
So your code could look something like this:
delete from (SELECT ADVICESEQUENCETRANSACTION
FROM (SELECT tr.id
FROM ADVICESEQUENCETRANSACTION WITH (NOLOCK)
WHERE AdviceSequenceId = @AdviceSequenceId
)TR
INNER JOIN (SELECT [NEXT COLUMN]
FROM [NEXT TABLE] WITH (NOLOCK)
WHERE COLUMN = [PARAM]
)B
ON TR.COL = B.COL
)ALIAS
WHERE [COLUMN] = COL.PARAM
);
and so on...
(I know the code isn't cut and paste usable, but it should serve to convey the general idea)
In this way, you're passing your parameters to "inner queries" first, pre-loading your limited result set (particularly if you should use views), and then working outward. Using Locking Hints everywhere appropriate will also help to prevent some of the problems you might otherwise encounter. This technique can also help make the execution plan more effective in helping you diagnose where your blocks are coming from, should you still have any.
One approach is to use WITH (NOLOCK) on the table ADVICESEQUENCETRANSACTION TR, if you don't mind dirty reads.
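In the subquery from the question, the hint placement would look like this (a sketch; the hint goes after each table alias):

```sql
SELECT TR.id
FROM ADVICESEQUENCETRANSACTION TR WITH (NOLOCK)
INNER JOIN ACCOUNTDESCRIPTIONITEM IT WITH (NOLOCK)
    ON TR.ACCOUNTDESCRIPTIONITEMID = IT.id
WHERE TR.AdviceSequenceId = @AdviceSequenceId;
```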