Improve query performance: adding a WHERE clause grinds the query to a halt - SQL

Running the following SQL gives a query that completes in around 0.338s; adding a condition to the WHERE clause makes the query time out. All I want to achieve is a list of test results for a particular Test_Code.
Result_Set will have many Test_Results on the index Result_Set_Row_ID
Date_Received_Index will have many Result_Sets on the index Result_Set_Row_ID
I have tried altering the order of the JOINs and adding clauses to the join statements.
SELECT
    Date_Received_Index.Registration_Number,
    Date_Received_Index.Specimen_Number,
    Result,
    Result_Comment,
    Result_Comment_Exp,
    Result_Exp,
    Short_Exp,
    Test_Code,
    Test_Exp,
    Test_Row_ID,
    Units,
    Result_Set.Set_Code,
    Result_Set.Date_Time_Authorised,
    Result_Set.Date_Booked_In,
    Date_Received_Index.Discipline,
    Date_Received_Index.Namespace
FROM
    Result_Set
    INNER JOIN Test_Result ON Result_Set.Result_Set_Row_ID = Test_Result.Result_Set_Row_ID
    INNER JOIN Date_Received_Index ON Date_Received_Index.Request_Row_ID = Result_Set.Request_Row_ID
WHERE
    DATEDIFF('D', Date_Received_Index.Date_Received, current_timestamp) < 1
    AND Date_Received_Index.Namespace = 'CHM'
Adding a condition to the WHERE clause, e.g.
DATEDIFF('D', Date_Received_Index.Date_Received, current_timestamp) < 1
AND Date_Received_Index.Namespace = 'CHM'
AND Test_Code = 'K'
results in the query timing out.
I would like to be able to construct an SQL statement that is performant and selects only the Test_Code specified in the WHERE clause.

This comes down to the Query Plan. Can you share the query plan?
My suspicion is that the column Test_Code is not indexed and the addition to the WHERE clause is causing the optimizer to select the wrong query plan.
I think the SQL optimizer is not able to optimize the portion
DATEDIFF('D', Date_Received_Index.Date_Received, current_timestamp) < 1
Without knowing your schema, my question would be: of the three columns used in the WHERE clause (Date_Received_Index.Date_Received, Date_Received_Index.Namespace, and Test_Code), are any of them indexed? You already indicated Test_Code is not.
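If Date_Received is (or could be) indexed, it may also be worth making that predicate sargable. A sketch, assuming the DATEDIFF condition is meant to capture "received today or later" and that your dialect supports CURRENT_DATE:
-- Hypothetical rewrite: a bare range comparison on the column can use an
-- index on Date_Received, while wrapping the column in DATEDIFF cannot.
WHERE
    Date_Received_Index.Date_Received >= CURRENT_DATE
    AND Date_Received_Index.Namespace = 'CHM'
    AND Test_Code = 'K'
Combined with indexes on Date_Received (and ideally Test_Code), this gives the optimizer a chance to seek rather than scan.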
Depending on what version of Caché you are using, you might try:
SELECT
    Date_Received_Index.Registration_Number,
    Date_Received_Index.Specimen_Number,
    Result,
    Result_Comment,
    Result_Comment_Exp,
    Result_Exp,
    Short_Exp,
    Test_Code,
    Test_Exp,
    Test_Row_ID,
    Units,
    Result_Set.Set_Code,
    Result_Set.Date_Time_Authorised,
    Result_Set.Date_Booked_In,
    Date_Received_Index.Discipline,
    Date_Received_Index.Namespace
FROM %PARALLEL
    Result_Set
    INNER JOIN Test_Result ON Result_Set.Result_Set_Row_ID = Test_Result.Result_Set_Row_ID
    INNER JOIN Date_Received_Index ON Date_Received_Index.Request_Row_ID = Result_Set.Request_Row_ID
WHERE
    DATEDIFF('D', Date_Received_Index.Date_Received, current_timestamp) < 1
    AND Date_Received_Index.Namespace = 'CHM'
    AND Test_Code = 'K'
Use of %PARALLEL can cause the query to be run using multiple threads. If the server has a large number of CPUs, it may run faster even if the query is not otherwise optimized.

Related

Slow query with many joins - expanding the magic pill?

I have a query that takes 20 minutes to run, even though I have an index for every column in the where clauses, and every column being joined:
SELECT DISTINCT skt.VCDRAWING_REG_NO, skb.NDRAWING_ORG_NO, skb.NDRAWING_ORG_REV_NO, skb.CAPPLY_START_DATE, skb.CAPPLY_END_DATE, skto.*
FROM SPM_ABS_TRANBASE skt
JOIN SPM_ABS_BASE skb
ON skt.NDRAWING_ORG_REV_NO = skb.NDRAWING_ORG_REV_NO
AND skt.NDRAWING_ORG_NO = skb.NDRAWING_ORG_NO
JOIN SPM_ABS_MODEL skm
ON skb.NDRAWING_ORG_REV_NO = skm.NDRAWING_ORG_REV_NO
AND skb.NDRAWING_ORG_NO = skm.NDRAWING_ORG_NO
JOIN SPM_ABS_TRANOPT skto
ON skt.NDRAWING_SYSTEM_NO = skto.NDRAWING_SYSTEM_NO
JOIN ModelImport mi
ON skm.CMODEL = mi.ModelCode
WHERE (skb.CAPPLY_START_DATE <= DATEADD(day, 2, GETDATE()) OR skb.CAPPLY_START_DATE IS NULL)
AND (skb.CAPPLY_END_DATE >= DATEADD(day, -2, GETDATE()) OR skb.CAPPLY_END_DATE IS NULL)
Here is my query plan.
One thing that puzzles me is this: If I add the following WHERE clause, the query returns in about 0.5 seconds:
AND mi.ModelCode = '3FBK5'
Now you're saying, well, duh, of course it gets much faster with that - the thing is, the ModelImport table contains only 351 records. That means if I were to split the query above into 351 queries, each with its own WHERE clause for a distinct ModelCode, I could get 100% of my query results in about 175 seconds, or 2.9 minutes. That is dramatically faster, which tells me that something in the wide-open query is grossly inefficient and the query plan is bad.
Here is my query plan with AND mi.ModelCode = '3FBK5' added.
After viewing my query plan, any ideas how I can speed this up?
Is it possible to eliminate some of the joins, since you are not selecting anything from those tables or applying any WHERE conditions to them, e.g., skm and mi?
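One way to act on that suggestion, since nothing is selected from skm or mi: move them into an EXISTS so they still filter rows but can no longer multiply them (a sketch against the tables from the question, not a tested rewrite):
SELECT DISTINCT skt.VCDRAWING_REG_NO, skb.NDRAWING_ORG_NO, skb.NDRAWING_ORG_REV_NO,
    skb.CAPPLY_START_DATE, skb.CAPPLY_END_DATE, skto.*
FROM SPM_ABS_TRANBASE skt
JOIN SPM_ABS_BASE skb
ON skt.NDRAWING_ORG_REV_NO = skb.NDRAWING_ORG_REV_NO
AND skt.NDRAWING_ORG_NO = skb.NDRAWING_ORG_NO
JOIN SPM_ABS_TRANOPT skto
ON skt.NDRAWING_SYSTEM_NO = skto.NDRAWING_SYSTEM_NO
WHERE (skb.CAPPLY_START_DATE <= DATEADD(day, 2, GETDATE()) OR skb.CAPPLY_START_DATE IS NULL)
AND (skb.CAPPLY_END_DATE >= DATEADD(day, -2, GETDATE()) OR skb.CAPPLY_END_DATE IS NULL)
-- skm and mi now only gate rows; they cannot fan out the row count
AND EXISTS (SELECT 1
    FROM SPM_ABS_MODEL skm
    JOIN ModelImport mi ON skm.CMODEL = mi.ModelCode
    WHERE skm.NDRAWING_ORG_REV_NO = skb.NDRAWING_ORG_REV_NO
    AND skm.NDRAWING_ORG_NO = skb.NDRAWING_ORG_NO)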
Without having the table schema and sizes it's a little hard to give an exact answer, but here are some updates to try.
Use GROUP BY instead of DISTINCT.
Don't use * in the select list (particularly with DISTINCT); give a specific list of columns to return instead.
Avoid "OR" in the WHERE clause (maybe use ISNULL instead).
Here is what the query might look like with these updates (though there are probably some other columns from skto you would want to add):
SELECT skt.VCDRAWING_REG_NO,
skb.NDRAWING_ORG_NO,
skb.NDRAWING_ORG_REV_NO,
skb.CAPPLY_START_DATE,
skb.CAPPLY_END_DATE,
skto.NDRAWING_SYSTEM_NO
FROM SPM_ABS_TRANBASE skt
JOIN SPM_ABS_BASE skb ON
skt.NDRAWING_ORG_REV_NO = skb.NDRAWING_ORG_REV_NO
AND skt.NDRAWING_ORG_NO = skb.NDRAWING_ORG_NO
JOIN SPM_ABS_MODEL skm ON
skb.NDRAWING_ORG_REV_NO = skm.NDRAWING_ORG_REV_NO
AND skb.NDRAWING_ORG_NO = skm.NDRAWING_ORG_NO
JOIN SPM_ABS_TRANOPT skto ON
skt.NDRAWING_SYSTEM_NO = skto.NDRAWING_SYSTEM_NO
JOIN ModelImport mi ON
skm.CMODEL = mi.ModelCode
WHERE ISNULL(skb.CAPPLY_START_DATE, DATEADD(day, 2, GETDATE())) <= DATEADD(day, 2, GETDATE())
AND ISNULL(skb.CAPPLY_END_DATE,DATEADD(day, -2, GETDATE())) >= DATEADD(day, -2, GETDATE())
GROUP BY skt.VCDRAWING_REG_NO,
skb.NDRAWING_ORG_NO,
skb.NDRAWING_ORG_REV_NO,
skb.CAPPLY_START_DATE,
skb.CAPPLY_END_DATE,
skto.NDRAWING_SYSTEM_NO

Position of ON and WHERE clauses and their effect on performance

I have two tables, one called Health_User and the other called Diary. They have users' demographic information and their recorded values, respectively. What I want to do is retrieve the recorded values, but:
Excluding testers (not real users) with the "is_tester" column (boolean values) in Health_User, and
Excluding unreasonable values with too high or too low measurements in Diary.
So I have several queries which should get the same results:
# Query 1
SELECT d.user_id, d.id AS diary_id, d.glucose_value, d.unit
FROM Diary AS d
JOIN (
SELECT id
FROM Health_User
WHERE is_tester = false
) AS u
ON d.user_id = u.id
WHERE ((d.glucose_value >= 20 AND d.glucose_value <= 600 AND d.unit = 'mg/dL')
OR (d.glucose_value >= 20/18.02 AND d.glucose_value <= 600/18.02 AND d.unit = 'mmol/L'));
# Query 2
SELECT d.user_id, d.id AS diary_id, d.glucose_value, d.unit
FROM Diary AS d
JOIN Health_User AS u
ON d.user_id = u.id
WHERE u.is_tester = false
AND ((d.glucose_value >= 20 AND d.glucose_value <= 600 AND d.unit = 'mg/dL')
OR (d.glucose_value >= 20/18.02 AND d.glucose_value <= 600/18.02 AND d.unit = 'mmol/L'));
# Query 3
SELECT d.user_id, d.id AS diary_id, d.glucose_value, d.unit
FROM Health_User AS u
JOIN (
SELECT id, user_id, glucose_value, unit
FROM Diary
WHERE ((glucose_value >= 20 AND glucose_value <= 600 AND unit = 'mg/dL')
OR (glucose_value >= 20/18.02 AND glucose_value <= 600/18.02 AND unit = 'mmol/L'))
) AS d
ON d.user_id = u.id
WHERE u.is_tester = false;
Here I have three questions:
Question 1: I would speculate that Query 1 would have better performance than Query 2, because a) it joins only one column instead of the whole table of Health_User and b) it filters out testers before joining the tables. Am I correct?
Question 2: The conditional limitation is more complex for Diary (See the last WHERE clause in Query 1). Is it better to switch Diary inside the JOIN and make Health_User outside like Query 3, or it makes no difference?
Question 3: Is there any even better solution in terms of performance?
There would be a difference if the database executed the queries in the order your queries suggest (first filter, then join or vice versa).
As it is, PostgreSQL has a query optimizer that rearranges the query to find the most efficient execution order, and all your queries will end up with the same execution plan, which you can verify using the SQL statement EXPLAIN.
For inner joins, it does not influence the result if you filter before or after the join; you could also write all the conditions into the join condition without changing the result. The optimizer knows that.
You can speed up execution by creating appropriate indexes. It depends on the distribution of the data to know if a certain index is useful. The rule of thumb is that indexes on selective conditions (that filter out many data) are more useful. Work with EXPLAIN to find the best indexes.
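For example, in PostgreSQL (hypothetical index choices; whether they actually help depends on your data, which is what EXPLAIN will show):
-- Index the join key on Diary, and index the Health_User rows the query
-- can use (a partial index that stores only non-tester users):
CREATE INDEX diary_user_id_idx ON Diary (user_id);
CREATE INDEX health_user_real_idx ON Health_User (id) WHERE is_tester = false;
EXPLAIN
SELECT d.user_id, d.id AS diary_id, d.glucose_value, d.unit
FROM Diary AS d
JOIN Health_User AS u ON d.user_id = u.id
WHERE u.is_tester = false
AND ((d.glucose_value >= 20 AND d.glucose_value <= 600 AND d.unit = 'mg/dL')
OR (d.glucose_value >= 20/18.02 AND d.glucose_value <= 600/18.02 AND d.unit = 'mmol/L'));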

Missing or incorrect data in Oracle EBS backend query (SQL)

I am learning Oracle E-Business Suite. Using an SQL query, I want to fetch all items (ASSEMBLY_ITEM) which are BOM-enabled and whose job creation date is within the last 3 years. The query I have written gives me the desired results, but somehow the component item list (COMPONENT_ITEM) has incomplete data or (null) for some items. For example, some items which I know have 5 components show up in the query output as having a single component. I am verifying the data from the Oracle Apps frontend.
Here is my query so far:
SELECT DISTINCT BOM.BILL_SEQUENCE_ID,
MSI.ORGANIZATION_ID,
MSI.SEGMENT1 ASSEMBLY_ITEM,
(SELECT SEGMENT1
FROM MTL_SYSTEM_ITEMS_B
WHERE INVENTORY_ITEM_ID=BIC.COMPONENT_ITEM_ID
AND rownum =1
) "COMPONENT_ITEM"
FROM BOM_OPERATIONAL_ROUTINGS BOR,
BOM_BILL_OF_MATERIALS BOM,
BOM_INVENTORY_COMPONENTS BIC,
MTL_SYSTEM_ITEMS_B MSI
WHERE BOR.ASSEMBLY_ITEM_ID = BOM.ASSEMBLY_ITEM_ID
AND BOR.ORGANIZATION_ID = BOM.ORGANIZATION_ID
AND BOM.BILL_SEQUENCE_ID = BIC.BILL_SEQUENCE_ID(+)
AND BOM.ASSEMBLY_ITEM_ID = MSI.INVENTORY_ITEM_ID(+)
AND BOM.ORGANIZATION_ID = MSI.ORGANIZATION_ID
AND MSI.ORGANIZATION_ID IN (203, 204, 328)
AND MSI.BOM_ENABLED_FLAG = 'Y'
AND NVL (MSI.ENABLED_FLAG, 'X') = 'Y'
AND BOR.ASSEMBLY_ITEM_ID IN
(SELECT DISTINCT PRIMARY_ITEM_ID
FROM WIP_DISCRETE_JOBS WDJ
WHERE BOR.ASSEMBLY_ITEM_ID = WDJ.PRIMARY_ITEM_ID
AND WDJ.CREATION_DATE >= ADD_MONTHS(SYSDATE, -12*3)
)
My query should return unique assembly items, each of which may have one or more components.
Ideally, I should be getting all unique assembly items along with their component items. I have achieved this by removing the join on BOM.BILL_SEQUENCE_ID = BIC.BILL_SEQUENCE_ID(+), and also removing the semi-join at the end of the query (the "AND BOR.ASSEMBLY_ITEM_ID IN" part), which essentially filters the results to the last 3 years.
The join part isn't the real issue. Is there any way to filter the result based on items created in the last 3 years, using something other than this method?
What am I missing here?
There are several things you could do better here.
First, use ANSI JOINs. I see at least some of your problems are due to incorrect attempts at LEFT OUTER joins. Using the ANSI syntax will help you avoid such problems.
Second, don't use DISTINCT unless you understand WHY it is necessary. It should not be needed for the query you say you want.
Third, you don't need to join BOM_OPERATIONAL_ROUTINGS -- you're not using it for anything (unless you are trying to use it as a filter, to limit to BOMs that have a routing?).
Fourth, use EXISTS instead of IN. Oracle's optimizer is mostly smart enough to protect you from the performance penalty that IN would have hit you with in the past. Nevertheless, EXISTS better represents your intent (imo) and places less reliance on the optimizer being smart.
Finally, the ROWNUM=1 to get the component item isn't necessary. You just need to limit by ORGANIZATION_ID as well.
Putting it all together, the following query should be pretty close to what you are looking for:
SELECT bom.bill_sequence_id,
bom.organization_id,
msi.segment1 assembly_item_id,
bic.operation_seq_num,
msic.segment1 component_item
FROM bom_bill_of_materials bom
LEFT JOIN mtl_system_items msi
ON msi.organization_id = bom.organization_id
AND msi.inventory_item_id = bom.assembly_item_id
AND msi.enabled_flag = 'Y'
AND msi.bom_enabled_flag = 'Y'
LEFT JOIN bom_inventory_components bic ON bic.bill_sequence_id = bom.common_bill_sequence_id
LEFT JOIN mtl_system_items msic
ON msic.organization_id = bom.organization_id
AND msic.inventory_item_id = bic.component_item_id
WHERE bom.organization_id IN (203, 204, 328)
AND EXISTS
(SELECT 'job for item in last 3 years'
FROM wip_discrete_jobs wdj
WHERE wdj.primary_item_id = bom.assembly_item_id
AND wdj.creation_date >= ADD_MONTHS (SYSDATE, -36))
-- Unless you are loading a table with this, you may want to ORDER BY...
;

Query including subquery and group by slower than expected

The whole query below runs incredibly slowly.
The subquery [alias stage_1] takes only 1.37 minutes, returning 9514 records; however, the whole query takes over 20 minutes, returning 2606 records.
I could use a #temp table to hold the subquery to improve the performance; however, I would prefer not to.
An overview of the query: table WeeklySpace inner joins to the Spaceblock_Name_to_PG table on SpaceblockName_SID; this cuts down the results in WeeklySpace and adds PG_Code to them. WeeklySpace is then full outer joined to Sales_PG_Wk across 3 fields. The WHERE clause focuses the results and may be changed. The results from the subquery are then summed. You cannot do the final summing in the subquery because of the GROUP BY and SUM OVER used.
I believe the issue is that the subquery is recalculated repeatedly during the GROUP BY in the final summing. The field SpaceblockName_SID also appears to be involved in causing the issue, as without it the run time with a GROUP BY in the subquery isn't affected.
I have read through loads of suggestions and tried them all to resolve the issue.
These include:
Adding TOP 2147483647 with ORDER BY to force intermediate materialization, both in the subquery and using a CTE.
Adding a join after stage_1.
Casting SpaceblockName_SID from an int to a varchar and back again.
The execution plans (cut in two parts, shown below the code) for the subquery and the whole query appear similar. The cost is around the full outer join (hash match), which I expected.
The query is running on SQL Server 2005 (T-SQL).
Any help greatly appreciated!
select
Cost_centre
, Fin_week
, SpaceblockName_SID
, sum(Propor_rep_SRV) as Total_SpaceblockName_SID_SRV
from
(
select
coalesce(space_side.fin_week , sales_side.fin_week) as Fin_week
,coalesce(space_side.cost_centre , sales_side.cost_Centre) as Cost_centre
,space_side.SpaceblockName_SID
,case
when space_side.SpaceblockName_SID is null
then sales_side.SalesExVAT
else sum(space_side.TLM)
/ nullif(sum(sum(space_side.TLM)) over (partition by coalesce(space_side.fin_week, sales_side.fin_week)
, coalesce(space_side.cost_centre, sales_side.cost_Centre)
, coalesce(Spaceblock_Name_to_PG.PG_Code, sales_side.PG_Code)), 0) * sales_side.SalesExVAT
end as Propor_rep_SRV
from
WeeklySpace as space_side
INNER JOIN
Spaceblock_Name_to_PG
ON space_side.SpaceblockName_SID = Spaceblock_Name_to_PG.SpaceblockName_SID
and Spaceblock_Name_to_PG.PG_Code < 10000
full outer join
sales_pg_wk as sales_side
on space_side.fin_week = sales_side.fin_week
and space_side.Cost_Centre = sales_side.Cost_Centre
and Spaceblock_Name_to_PG.PG_code = sales_side.pg_code
where
coalesce(space_side.fin_week, sales_side.fin_week) between 201538 and 201550
and
coalesce(space_side.cost_centre, sales_side.cost_Centre) in (3, 2800)
group by
coalesce(space_side.fin_week, sales_side.fin_week)
,coalesce(space_side.cost_centre, sales_side.cost_Centre)
,coalesce( Spaceblock_Name_to_PG.PG_Code, sales_side.PG_Code)
,sales_side.SalesExVAT
,space_side.SpaceblockName_SID
) as stage_1
group by
Cost_centre
, Fin_week
, SpaceblockName_SID
Execution plan left hand side
Execution plan right hand side
You didn't mention whether indexes exist on the columns you used in your query. If not, create them and check the performance of the query.
Looking at your logic, I think you could split this in two with a UNION:
one half with Spaceblock_Name_to_PG.PG_Code < 10000 and the other with Spaceblock_Name_to_PG.PG_Code >= 10000.
Also consider this change. You may be doing a bunch of joins that you are going to throw out anyway:
full outer join sales_pg_wk as sales_side
on space_side.fin_week = sales_side.fin_week
and space_side.Cost_Centre = sales_side.Cost_Centre
and Spaceblock_Name_to_PG.PG_code = sales_side.pg_code
and space_side.fin_week between 201538 and 201550
and sales_side.fin_week between 201538 and 201550
and space_side.cost_centre in (3, 2800)
and sales_side.cost_Centre in (3, 2800)

Query taking too long - Optimization

I am having an issue with the following query returning results a bit too slowly, and I suspect I am missing something basic. My initial guess is that the CASE statement is taking too long to process its result on the underlying data, but it could be something in the derived tables as well.
The question is, how can I speed this up? Are there any glaring errors in the way I am pulling the data? Am I running into a sorting or looping issue somewhere? The query runs for about 40 seconds, which seems quite long. C# is my primary expertise; SQL is a work in progress.
Note I am not asking "write my code" or "fix my code", just for a pointer in the right direction; I can't seem to figure out where the slowdown occurs. Each derived table runs very quickly (less than a second) by itself, the joins seem correct, and the result set returns exactly what I need. It's just too slow, and I'm sure there are better SQL scripters out there ;) Any tips would be greatly appreciated!
SELECT
hdr.taker
, hdr.order_no
, hdr.po_no as display_po
, cust.customer_name
, hdr.customer_id
, 'INCORRECT-LARGE ORDER' + CASE
WHEN (ext_price_calc >= 600.01 and ext_price_calc <= 800) and fee_price.unit_price <> round(ext_price_calc * -.01,2)
THEN '-1%: $' + cast(cast(ext_price_calc * -.01 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc >= 800.01 and ext_price_calc <= 1000 and fee_price.unit_price <> round(ext_price_calc * -.02,2)
THEN '-2%: $' + cast(cast(ext_price_calc * -.02 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc > 1000 and fee_price.unit_price <> round(ext_price_calc * -.03,2)
THEN '-3%: $' + cast(cast(ext_price_calc * -.03 as decimal(18,2)) as varchar(255))
ELSE
'OK'
END AS Status
FROM
(myDb_view_oe_hdr hdr
LEFT OUTER JOIN myDb_view_customer cust
ON hdr.customer_id = cust.customer_id)
LEFT OUTER JOIN wpd_view_sales_territory_by_customer territory
ON cust.customer_id = territory.customer_id
LEFT OUTER JOIN
(select
order_no,
SUM(ext_price_calc) as ext_price_calc
from
(select
hdr.order_no,
line.item_id,
(line.qty_ordered - isnull(qty_canceled,0)) * unit_price as ext_price_calc
from myDb_view_oe_hdr hdr
left outer join myDb_view_oe_line line
on hdr.order_no = line.order_no
where
line.delete_flag = 'N'
AND line.cancel_flag = 'N'
AND hdr.projected_order = 'N'
AND hdr.delete_flag = 'N'
AND hdr.cancel_flag = 'N'
AND line.item_id not in ('LARGE-ORDER-1%','LARGE-ORDER-2%', 'LARGE-ORDER-3%', 'FUEL','NET-FUEL', 'CONVENIENCE-FEE')) as line
group by order_no) as order_total
on hdr.order_no = order_total.order_no
LEFT OUTER JOIN
(select
order_no,
count(order_no) as convenience_count
from oe_line with (nolock)
left outer join inv_mast inv with (nolock)
on oe_line.inv_mast_uid = inv.inv_mast_uid
where inv.item_id in ('LARGE-ORDER-1%','LARGE-ORDER-2%', 'LARGE-ORDER-3%')
and oe_line.delete_flag <> 'Y'
group by order_no) as fee_count
on hdr.order_no = fee_count.order_no
INNER JOIN
(select
order_no,
unit_price
from oe_line line with (nolock)
where line.inv_mast_uid in (select inv_mast_uid from inv_mast with (nolock) where item_id in ('LARGE-ORDER-1%','LARGE-ORDER-2%', 'LARGE-ORDER-3%'))) as fee_price
ON fee_count.order_no = fee_price.order_no
WHERE
hdr.projected_order = 'N'
AND hdr.cancel_flag = 'N'
AND hdr.delete_flag = 'N'
AND hdr.completed = 'N'
AND territory.territory_id = 'CUSTOMERTERRITORY'
AND ext_price_calc > 600.00
AND hdr.carrier_id <> '100004'
AND fee_count.convenience_count is not null
AND CASE
WHEN (ext_price_calc >= 600.01 and ext_price_calc <= 800) and fee_price.unit_price <> round(ext_price_calc * -.01,2)
THEN '-1%: $' + cast(cast(ext_price_calc * -.01 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc >= 800.01 and ext_price_calc <= 1000 and fee_price.unit_price <> round(ext_price_calc * -.02,2)
THEN '-2%: $' + cast(cast(ext_price_calc * -.02 as decimal(18,2)) as varchar(255))
WHEN ext_price_calc > 1000 and fee_price.unit_price <> round(ext_price_calc * -.03,2)
THEN '-3%: $' + cast(cast(ext_price_calc * -.03 as decimal(18,2)) as varchar(255))
ELSE
'OK' END <> 'OK'
Just as a clue to the right direction for optimization:
When you do an OUTER JOIN to a query with calculated columns, you are guaranteeing not only a full table scan, but also that those calculations must be performed against every row in the joined table. It appears that you can actually do your join to oe_line without the column calculations (i.e., by filtering ext_price_calc to a specific range).
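For instance (a sketch only, reusing the derived table from the question and dropping its inner join to oe_hdr, whose flags the outer query already filters): pushing the price filter into the subquery with HAVING discards rows before the outer join ever sees them.
LEFT OUTER JOIN
(select line.order_no,
    SUM((line.qty_ordered - isnull(line.qty_canceled, 0)) * line.unit_price) as ext_price_calc
from myDb_view_oe_line line
where line.delete_flag = 'N'
AND line.cancel_flag = 'N'
AND line.item_id not in ('LARGE-ORDER-1%','LARGE-ORDER-2%','LARGE-ORDER-3%','FUEL','NET-FUEL','CONVENIENCE-FEE')
group by line.order_no
-- rows outside the price range are dropped here, before the outer join
having SUM((line.qty_ordered - isnull(line.qty_canceled, 0)) * line.unit_price) > 600.00) as order_total
on hdr.order_no = order_total.order_no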
You don't need to do most of the subqueries that are in your query - the master query can be recrafted to use regular table join syntax. Joins to subqueries containing subqueries present a challenge to the SQL optimizer that it may not be able to meet. By using regular joins, the optimizer has a much better chance of identifying more efficient query strategies.
You don't tag which SQL engine you're using. Every database has proprietary extensions that may allow for speedier or more efficient queries. It would be easier to provide useful feedback if you indicated whether you were using MySQL, SQL Server, Oracle, etc.
Regardless of the database you're using, reviewing the query plan is always a good place to start. This will tell you where most of the I/O and time in your query is being spent.
Just on general principle, make sure your statistics are up-to-date.
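For instance, if this is SQL Server (the DATEADD/ISNULL/NOLOCK syntax suggests it, though the question doesn't say):
-- Refresh statistics database-wide; other engines have their own
-- equivalents (ANALYZE in PostgreSQL, ANALYZE TABLE in MySQL,
-- DBMS_STATS in Oracle).
EXEC sp_updatestats;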
It may not be solvable by any of us without the real stuff to test with.
If that's the case and nobody else posts the answer, I can still help. Here is how to troubleshoot it:
(1) Take joins and pieces out one by one.
(2) This will cause errors. Remove or fake the references to get rid of them.
(3) See how that works.
(4) Put items back before you try taking something else out.
(5) Keep track...
(6) Also be aware of where a removal of something might drastically reduce the result set.
You might find you're missing an index or some other smoking gun.
I was having the same problem and I was able to solve it by indexing one of the tables and setting a primary key.
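For illustration only (the table and column names here are hypothetical, not taken from this question):
-- Give the table a primary key and index the column used in the join.
ALTER TABLE my_table ADD CONSTRAINT pk_my_table PRIMARY KEY (id);
CREATE INDEX ix_my_table_join_col ON my_table (join_col);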
I strongly suspect that the problem lies in the number of joins you're doing. A lot of databases do joins basically by systematically checking all possible combinations of the various tables as being valid - so if you're joining tables A and B on column C, and A looks like:
Name:C
Fred:1
Alice:2
Betty:3
While B looks like:
C:Pet
1:Alligator
2:Lion
3:T-Rex
When you do the join, it checks all 9 possibilities:
Fred:1:1:Alligator
Fred:1:2:Lion
Fred:1:3:T-Rex
Alice:2:1:Alligator
Alice:2:2:Lion
Alice:2:3:T-Rex
Betty:3:1:Alligator
Betty:3:2:Lion
Betty:3:3:T-Rex
And goes through and deletes the non-matching ones:
Fred:1:1:Alligator
Alice:2:2:Lion
Betty:3:3:T-Rex
... which means with three entries in each table, it creates nine temporary records, sorts through them all, and deletes six of them ... all before it actually sorts through the results for what you're after (so if you are looking for Betty's Pet, you only want one row on that final result).
... and you're doing how many joins and sub-queries?
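A runnable version of that example (multi-row VALUES works in most engines):
CREATE TABLE A (Name VARCHAR(10), C INT);
CREATE TABLE B (C INT, Pet VARCHAR(20));
INSERT INTO A VALUES ('Fred', 1), ('Alice', 2), ('Betty', 3);
INSERT INTO B VALUES (1, 'Alligator'), (2, 'Lion'), (3, 'T-Rex');
-- Without an index on B.C, a naive nested-loop plan examines all
-- 3 x 3 = 9 combinations and keeps the 3 matches:
SELECT A.Name, A.C, B.Pet
FROM A JOIN B ON A.C = B.C;
-- An index lets each row of A find its partner directly instead:
CREATE INDEX ix_b_c ON B (C);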