SQL Server 2012 join performance features? - sql

Consider a simple 3-table database in SQL Server 2012.
Table A: AId, Name, Other1, Other2
Table B: BId, Name
Table A_B: BId, AId
Simple example query:
SELECT TOP(20) A.Aid, A.Name, B.Bid, B.Name
FROM A
INNER JOIN A_B ON A.AId = A_B.Aid
INNER JOIN A as AA ON AA.Aid = A_B.Aid
INNER JOIN B ON B.BId = A_B.Bid
WHERE AA.Aid = #aid
AND A.Other1 = #other1
There are millions of rows in table A.
There are thousands of rows in table B.
There are ten times more rows in table A_B than A.
The Other1 and Other2 fields can be used to filter the queries.
Join queries using TOP(20) may be issued at a rate of 100 requests per second or more (the specs are unclear).
The queries will almost always be using different parameters so result caching would not help that much.
What features in SQL Server 2012 can help to improve join query performance given the example above?
My initial thought is that since it's all PK int joins there isn't much that I could do. However I don't know if partitioned views could help.
I'm thinking that probably it's just about adding memory.

Well, the first thing to understand (well, maybe not the first) is that a performance model is built into all current versions which depends on head seek times vs. continuous reads; this may well change with solid-state drives. Your choice of clustered indexes will be important for keeping frequently queried data together. Also, having a covering index for each part of the query means the data can be accessed without reading the table itself. Partitioning may help (but it's probably a long way down the list). Keeping statistics up to date is essential; too often, poor performance comes from under-maintained indexes and stats. Actually, all of these things have been true right back to SQL 7 (except I don't think SQL 7 had partitioned views). Having the right RAID structure can alter performance by a factor of 4. The number of tempdb files should be equal to the number of processors (up to about 16), and the tempdb load-balancing option should be enabled. Keep tempdb, logs and data distributed across different I/O paths. No auto-shrink - it's evil. These are the more obvious ones. If you really want to get to grips with a large database, then "Inside SQL Server" by Kalen Delaney is almost mandatory reading, though it probably costs more than a few GB of RAM. And, as you said - more RAM.
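To make the covering-index point concrete for the example query, the kind of thing meant is roughly this (index names and key order are my own guesses - check the actual plan):
-- Lets the filter and join on A be satisfied from the index alone:
CREATE NONCLUSTERED INDEX IX_A_AId_Other1 ON A (AId, Other1) INCLUDE (Name);
-- Covers the A_B side of the join (only needed if the clustered PK isn't already keyed this way):
CREATE NONCLUSTERED INDEX IX_A_B_AId_BId ON A_B (AId, BId);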

First, yes, have a clustered index for the PK.
If the key values in Table B fit in an Int16 (smallint), use Int16 - not for disk space, but to fit more rows in the same amount of memory.
The interesting part is table A_B.
The column order of that PK will probably affect performance.
Against just a single PK index, whichever column is second will make for a slower join.
Try the order each way.
Check the query plan.
Check the tuning advisor.
My thought is:
PK (AId, BId)
Non-clustered index on BId, since that index is smaller.
Then flip them around and compare.
If they perform the same, go with (AId, BId) for smaller index size and insert speed.
Then you can go into hints on the joins.
Defragment on a regular basis.
Insert in the order of the PK.
If the data comes in a natural order and insert speed is an issue, then use that order for the PK.
If insert speed is a problem, it may help to disable the non-clustered index, insert, and then rebuild the non-clustered index - see the sketch below.
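A minimal sketch of that layout and the insert pattern (index names, the smallint choice, and the bulk-insert step are my own assumptions):
CREATE TABLE A_B (
    AId int NOT NULL,
    BId smallint NOT NULL,  -- smallint only if Table B's key range fits
    CONSTRAINT PK_A_B PRIMARY KEY CLUSTERED (AId, BId)  -- also try (BId, AId) and compare plans
);
CREATE NONCLUSTERED INDEX IX_A_B_BId ON A_B (BId);
-- If insert speed becomes a problem: disable, insert in PK order, then rebuild.
ALTER INDEX IX_A_B_BId ON A_B DISABLE;
-- ... bulk insert here ...
ALTER INDEX IX_A_B_BId ON A_B REBUILD;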
Millions and thousands is still not enormous.
And I would not write the query like that.
Keep the number of joins down:
SELECT TOP(20) A.Aid, A.Name, B.Bid, B.Name
FROM A_B
JOIN A
ON A.Aid = A_B.Aid
JOIN B
ON B.BId = A_B.Bid
WHERE A.Aid = #aid
AND A.Other1 = #other1
That query is still very wasteful.
Why join on all A.Aid = A_B.Aid only to filter down to a single AId in the WHERE?
Get the filter to execute early.
This may perform better
SELECT TOP(20) A.Aid, A.Name, B.Bid, B.Name
FROM A_B
JOIN A
ON A.Aid = A_B.Aid
AND A.Aid = #aid
AND A.Other1 = #other1
JOIN B
ON B.BId = A_B.Bid
If you can get it to filter before it joins, there is less work.
Check the query plan
A CTE on A with the conditions may coerce it to perform the filter first.
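For example, a minimal sketch of the CTE form (keeping the question's #aid / #other1 placeholders; there is no guarantee the optimizer filters first, so check the plan):
WITH FilteredA AS (
    SELECT AId, Name
    FROM A
    WHERE AId = #aid
      AND Other1 = #other1
)
SELECT TOP(20) fa.AId, fa.Name, B.BId, B.Name
FROM FilteredA fa
JOIN A_B ON A_B.AId = fa.AId
JOIN B ON B.BId = A_B.BId;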
If you cannot get the filter to happen first with a single statement, then create a #tempA with the AId as a declared PK
(not a CTE - the purpose is to materialize)
INSERT INTO #tempA
SELECT AId, Name
FROM A
WHERE A.AId = #aid
AND A.Other1 = #other1
If AId is the PK on table A then that query returns 0 or 1 records.
The join to #tempA is trivial
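Spelled out, the materialized version might look like this (a sketch; the column type is a guess, and the declared PK on #tempA is what keeps the final join trivial):
CREATE TABLE #tempA (
    AId  int NOT NULL PRIMARY KEY,
    Name varchar(50) NOT NULL  -- type assumed
);
INSERT INTO #tempA (AId, Name)
SELECT AId, Name
FROM A
WHERE AId = #aid
  AND Other1 = #other1;

SELECT TOP(20) t.AId, t.Name, B.BId, B.Name
FROM #tempA t
JOIN A_B ON A_B.AId = t.AId
JOIN B ON B.BId = A_B.BId;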

Related

Improving SQL Query Join

I have a SQL query that is taking hours to run. My join is on the descriptions of products. Would it be more efficient to create a unique numerical id and join on this instead since the product description is a few sentences long?
Example:
SELECT A.*, B.something
FROM tableA A JOIN tableB B
ON A.product_details = B.product_details
For this query:
SELECT A.*, B.something
FROM tableA A JOIN
     tableB B
     ON A.product_details = B.product_details
The best index is on B(product_details, something) -- however product_details is most important as the first key.
I generally recommend a numeric index. They are a bit more efficient. And they reduce the number of things to worry about, such as spaces at the ends of keys and collation conflicts.
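A sketch of that index, using the names from the question as written (if product_details really is a few sentences long, index key-length limits may push you toward the numeric id anyway):
CREATE INDEX idx_b_details_something ON tableB (product_details, something);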

Is there any performance difference between using inner join vs left join? [duplicate]

I've created a SQL command that uses INNER JOIN on 9 tables, but this command takes a very long time (more than five minutes). So a colleague suggested that I change the INNER JOINs to LEFT JOINs because the performance of LEFT JOIN is better, despite what I knew. After I changed it, the speed of the query improved significantly.
I would like to know why LEFT JOIN is faster than INNER JOIN?
My SQL command looks like the one below:
SELECT * FROM A INNER JOIN B ON ... INNER JOIN C ON ... INNER JOIN D and so on
Update:
This is a brief outline of my schema.
FROM sidisaleshdrmly a -- NOT HAVE PK AND FK
INNER JOIN sidisalesdetmly b -- THIS TABLE ALSO HAVE NO PK AND FK
ON a.CompanyCd = b.CompanyCd
AND a.SPRNo = b.SPRNo
AND a.SuffixNo = b.SuffixNo
AND a.dnno = b.dnno
INNER JOIN exFSlipDet h -- PK = CompanyCd, FSlipNo, FSlipSuffix, FSlipLine
ON a.CompanyCd = h.CompanyCd
AND a.sprno = h.AcctSPRNo
INNER JOIN exFSlipHdr c -- PK = CompanyCd, FSlipNo, FSlipSuffix
ON c.CompanyCd = h.CompanyCd
AND c.FSlipNo = h.FSlipNo
AND c.FSlipSuffix = h.FSlipSuffix
INNER JOIN coMappingExpParty d -- NO PK AND FK
ON c.CompanyCd = d.CompanyCd
AND c.CountryCd = d.CountryCd
INNER JOIN coProduct e -- PK = CompanyCd, ProductSalesCd
ON b.CompanyCd = e.CompanyCd
AND b.ProductSalesCd = e.ProductSalesCd
LEFT JOIN coUOM i -- PK = UOMId
ON h.UOMId = i.UOMId
INNER JOIN coProductOldInformation j -- PK = CompanyCd, BFStatus, SpecCd
ON a.CompanyCd = j.CompanyCd
AND b.BFStatus = j.BFStatus
AND b.ProductSalesCd = j.ProductSalesCd
INNER JOIN coProductGroup1 g1 -- PK = CompanyCd, ProductCategoryCd, UsedDepartment, ProductGroup1Cd
ON e.ProductGroup1Cd = g1.ProductGroup1Cd
INNER JOIN coProductGroup2 g2 -- PK = CompanyCd, ProductCategoryCd, UsedDepartment, ProductGroup2Cd
ON e.ProductGroup1Cd = g2.ProductGroup1Cd
A LEFT JOIN is absolutely not faster than an INNER JOIN. In fact, it's slower; by definition, an outer join (LEFT JOIN or RIGHT JOIN) has to do all the work of an INNER JOIN plus the extra work of null-extending the results. It would also be expected to return more rows, further increasing the total execution time simply due to the larger size of the result set.
(And even if a LEFT JOIN were faster in specific situations due to some difficult-to-imagine confluence of factors, it is not functionally equivalent to an INNER JOIN, so you cannot simply go replacing all instances of one with the other!)
Most likely your performance problems lie elsewhere, such as not having a candidate key or foreign key indexed properly. 9 tables is quite a lot to be joining so the slowdown could literally be almost anywhere. If you post your schema, we might be able to provide more details.
Edit:
Reflecting further on this, I could think of one circumstance under which a LEFT JOIN might be faster than an INNER JOIN, and that is when:
Some of the tables are very small (say, under 10 rows);
The tables do not have sufficient indexes to cover the query.
Consider this example:
CREATE TABLE #Test1
(
ID int NOT NULL PRIMARY KEY,
Name varchar(50) NOT NULL
)
INSERT #Test1 (ID, Name) VALUES (1, 'One')
INSERT #Test1 (ID, Name) VALUES (2, 'Two')
INSERT #Test1 (ID, Name) VALUES (3, 'Three')
INSERT #Test1 (ID, Name) VALUES (4, 'Four')
INSERT #Test1 (ID, Name) VALUES (5, 'Five')
CREATE TABLE #Test2
(
ID int NOT NULL PRIMARY KEY,
Name varchar(50) NOT NULL
)
INSERT #Test2 (ID, Name) VALUES (1, 'One')
INSERT #Test2 (ID, Name) VALUES (2, 'Two')
INSERT #Test2 (ID, Name) VALUES (3, 'Three')
INSERT #Test2 (ID, Name) VALUES (4, 'Four')
INSERT #Test2 (ID, Name) VALUES (5, 'Five')
SELECT *
FROM #Test1 t1
INNER JOIN #Test2 t2
ON t2.Name = t1.Name
SELECT *
FROM #Test1 t1
LEFT JOIN #Test2 t2
ON t2.Name = t1.Name
DROP TABLE #Test1
DROP TABLE #Test2
If you run this and view the execution plan, you'll see that the INNER JOIN query does indeed cost more than the LEFT JOIN, because it satisfies the two criteria above. It's because SQL Server wants to do a hash match for the INNER JOIN, but does nested loops for the LEFT JOIN; the former is normally much faster, but since the number of rows is so tiny and there's no index to use, the hashing operation turns out to be the most expensive part of the query.
You can see the same effect by writing a program in your favourite programming language to perform a large number of lookups on a list with 5 elements, vs. a hash table with 5 elements. Because of the size, the hash table version is actually slower. But increase it to 50 elements, or 5000 elements, and the list version slows to a crawl, because it's O(N) vs. O(1) for the hashtable.
But change this query to be on the ID column instead of Name and you'll see a very different story. In that case, it does nested loops for both queries, but the INNER JOIN version is able to replace one of the clustered index scans with a seek - meaning that this will literally be an order of magnitude faster with a large number of rows.
So the conclusion is more or less what I mentioned several paragraphs above; this is almost certainly an indexing or index coverage problem, possibly combined with one or more very small tables. Those are the only circumstances under which SQL Server might sometimes choose a worse execution plan for an INNER JOIN than a LEFT JOIN.
There is one important scenario that can lead to an outer join being faster than an inner join that has not been discussed yet.
When using an outer join, the optimizer is always free to drop the outer joined table from the execution plan if the join columns are the PK of the outer table, and none of the outer table columns are referenced outside of the outer join itself. For example SELECT A.* FROM A LEFT OUTER JOIN B ON A.KEY=B.KEY and B.KEY is the PK for B. Both Oracle (I believe I was using release 10) and Sql Server (I used 2008 R2) prune table B from the execution plan.
The same is not necessarily true for an inner join: SELECT A.* FROM A INNER JOIN B ON A.KEY=B.KEY may or may not require B in the execution plan depending on what constraints exist.
If A.KEY is a nullable foreign key referencing B.KEY, then the optimizer cannot drop B from the plan because it must confirm that a B row exists for every A row.
If A.KEY is a mandatory foreign key referencing B.KEY, then the optimizer is free to drop B from the plan because the constraints guarantee the existence of the row. But just because the optimizer can drop the table from the plan, doesn't mean it will. SQL Server 2008 R2 does NOT drop B from the plan. Oracle 10 DOES drop B from the plan. It is easy to see how the outer join will out-perform the inner join on SQL Server in this case.
This is a trivial example, and not practical for a stand-alone query. Why join to a table if you don't need to?
But this could be a very important design consideration when designing views. Frequently a "do-everything" view is built that joins everything a user might need related to a central table (especially if there are naive users doing ad-hoc queries who do not understand the relational model). The view may include all the relevant columns from many tables. But the end users might only access columns from a subset of the tables within the view. If the tables are joined with outer joins, then the optimizer can (and does) drop the unneeded tables from the plan.
It is critical to make sure that the view using outer joins gives the correct results. As Aaronaught has said - you cannot blindly substitute OUTER JOIN for INNER JOIN and expect the same results. But there are times when it can be useful for performance reasons when using views.
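As a rough illustration of the view scenario, with all names made up:
-- A hypothetical "do-everything" view built with outer joins:
CREATE VIEW dbo.vOrderEverything AS
SELECT o.OrderId, o.OrderDate,
       c.CustomerName,
       p.ProductName
FROM dbo.Orders o
LEFT JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
LEFT JOIN dbo.Products p ON p.ProductId = o.ProductId;
GO
-- A query that only touches Orders columns can have both joins eliminated,
-- provided CustomerId and ProductId are unique (PK) in their tables:
SELECT OrderId, OrderDate FROM dbo.vOrderEverything;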
One last note - I haven't tested the impact on performance in light of the above, but in theory it seems you should be able to safely replace an INNER JOIN with an OUTER JOIN if you also add the condition <FOREIGN_KEY> IS NOT NULL to the where clause.
If everything works as it should, it shouldn't be faster. BUT we all know that everything doesn't work the way it should, especially when it comes to the query optimizer, query plan caching and statistics.
First I would suggest rebuilding the indexes and statistics, then clearing the query plan cache, just to make sure that's not screwing things up. However, I've experienced problems even when that's done.
I've experienced some cases where a left join has been faster than an inner join.
The underlying reason is this:
If you have two tables and you join on a column with an index on both tables, the inner join produces the same result whether you loop over the entries in the index on table one and match against the index on table two, or do the reverse: loop over the entries in the index on table two and match against the index on table one.
The problem is when you have misleading statistics: the query optimizer uses the statistics of the index to find the table with the fewest matching entries (based on your other criteria).
Say you have two tables with 1 million rows each; in table one 10 rows match and in table two 100,000 rows match. The best way is to do an index scan on table one and match 10 times against table two. The reverse is an index scan that loops over 100,000 rows and tries to match 100,000 times, with only 10 succeeding. So if the statistics aren't correct, the optimizer might choose the wrong table and index to loop over.
If the optimizer chooses to optimize the left join in the order it is written it will perform better than the inner join.
BUT, the optimizer may also optimize a left join sub-optimally as a left semi join. To make it choose the one you want you can use the force order hint.
Try both queries (the one with inner and left join) with OPTION (FORCE ORDER) at the end and post the results. OPTION (FORCE ORDER) is a query hint that forces the optimizer to build the execution plan with the join order you provided in the query.
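For example, taking just the first two tables from the schema above to show where the hint goes:
SELECT *
FROM sidisaleshdrmly a
INNER JOIN sidisalesdetmly b
    ON  a.CompanyCd = b.CompanyCd
    AND a.SPRNo = b.SPRNo
    AND a.SuffixNo = b.SuffixNo
    AND a.dnno = b.dnno
OPTION (FORCE ORDER);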
If INNER JOIN starts performing as fast as LEFT JOIN, it's because:
In a query composed entirely of INNER JOINs, the join order doesn't matter. This gives the query optimizer the freedom to order the joins as it sees fit, so the problem might lie with the optimizer.
With LEFT JOIN, that's not the case because changing the join order will alter the results of the query. This means the engine must follow the join order you provided on the query, which might be better than the optimized one.
Don't know if this answers your question but I was once in a project that featured highly complex queries making calculations, which completely messed up the optimizer. We had cases where a FORCE ORDER would reduce the execution time of a query from 5 minutes to 10 seconds.
I have done a number of comparisons between left outer and inner joins and have not been able to find a consistent difference. There are many variables. I am working on a reporting database with thousands of tables, many with a large number of fields and many changes over time (vendor versions and local workflow). It is not possible to create all of the combinations of covering indexes to meet the needs of such a wide variety of queries and handle historical data. I have seen inner joins kill server performance because two large (millions to tens of millions of rows) tables are inner joined, both pulling a large number of fields, and no covering index exists.
The biggest issue, though, doesn't seem to appear in the discussions above. Maybe your database is well designed, with triggers and well-designed transaction processing to ensure good data. Mine frequently has NULL values where they aren't expected. Yes, the table definitions could enforce NOT NULL, but that isn't an option in my environment.
So the question is: do you design your query only for speed, which is a higher priority for transaction processing that runs the same code thousands of times a minute, or do you go for the accuracy that a left outer join will provide? Remember that inner joins must find matches on both sides, so an unexpected NULL will not only remove data from the two tables but possibly entire rows of information. And it happens so nicely, with no error messages.
You can be very fast at getting 90% of the needed data and not discover that the inner joins have silently removed information. Sometimes inner joins can be faster, but I don't believe anyone should make that assumption unless they have reviewed the execution plan. Speed is important, but accuracy is more important.
Outer joins can offer superior performance when used in views.
Say you have a query that involves a view, and that view is comprised of 10 tables joined together. Say your query only happens to use columns from 3 out of those 10 tables.
If those 10 tables had been inner-joined together, then the query optimizer would have to join them all even though your query itself doesn't need 7 out of 10 of the tables. That's because the inner joins themselves might filter down the data, making them essential to compute.
If those 10 tables had been outer-joined together instead, then the query optimizer would only actually join the ones that were necessary: 3 out of 10 of them in this case. That's because the joins themselves are no longer filtering the data, and thus unused joins can be skipped.
Source:
http://www.sqlservercentral.com/blogs/sql_coach/2010/07/29/poor-little-misunderstood-views/
Your performance problems are more likely to be because of the number of joins you are doing and whether the columns you are joining on have indexes or not.
Worst case you could easily be doing 9 whole table scans for each join.
I found something interesting in SQL Server when checking whether inner joins are faster than left joins.
If you don't include the columns of the left-joined table in the SELECT statement, the left join will be faster than the same query with an inner join.
If you do include the left-joined table's columns in the SELECT statement, the inner join with the same query was equal to or faster than the left join.
From my comparisons, I find that they have exactly the same execution plan. There are three scenarios:
If and when they return the same results, they have the same speed. However, we must keep in mind that they are not the same queries, and that LEFT JOIN will possibly return more results (when some ON conditions aren't met) - this is why it's usually slower.
When the main table (the first non-const one in the execution plan) has a restrictive condition (WHERE id = ?) and the corresponding ON condition is on a NULL value, the "right" table is not joined - this is when LEFT JOIN is faster.
As discussed in Point 1, usually INNER JOIN is more restrictive and returns fewer results and is therefore faster.
Both use (the same) indices.

Horrible Oracle update performance

I am performing an update with a query like this:
UPDATE (SELECT h.m_id,
m.id
FROM h
INNER JOIN m
ON h.foo = m.foo)
SET m_id = id
WHERE m_id IS NULL
Some info:
Table h is roughly ~5 million rows
All rows in table h have NULL values for m_id
Table m is roughly ~500 thousand rows
m_id on table h is an indexed foreign key pointing to id on table m
id on table m is the primary key
There are indexes on m.foo and h.foo
The EXPLAIN PLAN for this query indicated a hash join and full table scans, but I'm no DBA, so I can't really interpret it very well.
The query itself ran for several hours and did not complete. I would have expected it to complete in no more than a few minutes. I've also attempted the following query rewrite:
UPDATE h
SET m_id = (SELECT id
FROM m
WHERE m.foo = h.foo)
WHERE m_id IS NULL
The EXPLAIN PLAN for this mentioned ROWID lookups and index usage, but it also went on for several hours without completing. I've also always been under the impression that queries like this would cause the subquery to be executed for every result from the outer query's predicate, so I would expect very poor performance from this rewrite anyway.
Is there anything wrong with my approach, or is my problem related to indexes, tablespace, or some other non-query-related factor?
Edit:
I'm also having abysmal performance from simple count queries like this:
SELECT COUNT(*)
FROM h
WHERE m_id IS NULL
These queries are taking anywhere from ~30 seconds to sometimes ~30 minutes(!).
I am noticing no locks, but the tablespace for these tables is sitting at 99.5% usage (only ~6MB free) right now. I've been told that this shouldn't matter as long as indexes are being used, but I don't know...
Some points:
Oracle does not index NULL values (it will index a NULL that is part of a globally non-null tuple, but that's about it).
Oracle is going for a HASH JOIN because of the size of both h and m. This is likely the best option performance-wise.
The second UPDATE might get Oracle to use indexes, but then Oracle is usually smart about merging subqueries. And it would be a worse plan anyway.
Do you have recent, reasonable statistics for your schema? Oracle really needs decent statistics.
In your execution plan, which is the first table in the HASH JOIN? For best performance it should be the smaller table (m in your case). If you don't have good cardinality statistics, Oracle will get messed up. You can force Oracle to assume fixed cardinalities with the cardinality hint, it may help Oracle get a better plan.
For example, in your first query:
UPDATE (SELECT /*+ cardinality(h 5000000) cardinality(m 500000) */
h.m_id, m.id
FROM h
INNER JOIN m
ON h.foo = m.foo)
SET m_id = id
WHERE m_id IS NULL
In Oracle, FULL SCAN reads not only every record in the table, it basically reads all storage allocated up to the maximum used (the high water mark in Oracle documentation). So if you have had a lot of deleted rows your tables might need some cleaning up. I have seen a SELECT COUNT(*) on an empty table consume 30+ seconds because the table in question had like 250 million deleted rows. If that is the case, I suggest analyzing your specific case with a DBA, so he/she can reclaim space from deleted rows and lower the high water mark.
As far as I remember, a WHERE m_id IS NULL performs a full-table scan, since NULL values cannot be indexed.
Full-table scan means, that the engine needs to read every record in the table to evaluate the WHERE condition, and cannot use an index.
You could try to add a virtual column set to a not-null value if m_id IS NULL, and index this column, and use this column in the WHERE condition.
Then you could also move the WHERE condition from the UPDATE statement to the sub-select, which will probably make the statement faster.
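One way to express that idea in Oracle is a function-based index (a sketch; the index name is made up):
-- Contains only the rows where m_id IS NULL; all other rows evaluate to NULL and stay out of the index.
CREATE INDEX h_mid_null_idx ON h (CASE WHEN m_id IS NULL THEN 1 END);
-- The query must then filter on the same expression to pick the index up:
-- WHERE CASE WHEN m_id IS NULL THEN 1 END = 1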
Since JOINs are expensive, rewriting INNER JOIN m ON h.foo = m.foo as
WHERE h.foo IN (SELECT m.foo FROM m WHERE m.foo IS NOT NULL)
may also help.
For large tables, MERGE is often much faster than UPDATE. Try this (untested):
MERGE INTO h USING
(SELECT h.h_id,
m.id as new_m_id
FROM h
INNER JOIN m
ON h.foo = m.foo
WHERE h.m_id IS NULL
) new_data
ON (h.h_id = new_data.h_id)
WHEN MATCHED THEN
UPDATE SET h.m_id = new_data.new_m_id;
Try the undocumented hint /*+ BYPASS_UJVC */. If it works, add a UNIQUE/PK constraint on m.foo.
I would update the table in iterations: for example, add a condition such as WHERE h.date_created > SYSDATE - 30, and after it finishes run the same query again with the condition changed to WHERE h.date_created BETWEEN SYSDATE - 60 AND SYSDATE - 30, and so on. If you don't have a column like date_created, maybe there's another column you can filter by, for example: WHERE m.foo = h.foo AND m.foo BETWEEN 1 AND 10.
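A rough sketch of one such slice, reusing the question's second rewrite and assuming h.date_created exists:
-- One batch of the update; repeat with older date ranges until the table is covered.
UPDATE h
SET m_id = (SELECT m.id FROM m WHERE m.foo = h.foo)
WHERE m_id IS NULL
  AND h.date_created > SYSDATE - 30;
COMMIT;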
Only the execution plan can explain why the cost of this update is so high, but an educated guess would be that both tables are very big, that there are many NULL values, and that there is a lot of matching (m.foo = h.foo)...

Why does 'HASH JOIN' or 'LOOP JOIN' improve this stored proc?

I have a basic query that goes from 6 seconds to 1 second just by changing one join from LEFT JOIN to LEFT HASH JOIN or LEFT LOOP JOIN. Can anyone explain why this would cause such a large increase in performance and why SQL's optimizer isn't figuring it out on its own?
Here is roughly what the SQL looks like:
SELECT
a.[ID]
FROM
[TableA] a
LEFT HASH JOIN
[TableB] b
ON b.[ID] = a.[TableB_ID]
JOIN
[TableC] c
ON c.[ID] = a.[TableC_ID]
WHERE
a.[SomeDate] IS NULL AND
a.[SomeStatus] IN ('X', 'Y', 'Z') AND
c.[SomethingElse] = 'ABC'
Table A and B have millions of records and indexes on all the ID fields. Using SQL Server 2005.
Edit: A colleague suggested a LEFT LOOP JOIN and it seems to have made it even faster... SQL is not one of my strengths, so I am trying to understand how these 'hints' are helping.
HASH JOIN is useful when a large percentage of the rows contributes to the result set.
In your case, building a hash table on either A or B and scanning the other table is cheaper than either performing NESTED LOOPS over the index on B.ID or merging the sorted result sets, which is what the optimizer did before the hint.
SQL Server's optimizer did not see that: probably because statistics haven't been gathered, or because your data distribution is skewed.
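If stale statistics are the suspect, refreshing them is cheap to try before reaching for hints (a sketch):
UPDATE STATISTICS TableA WITH FULLSCAN;
UPDATE STATISTICS TableB WITH FULLSCAN;
-- Then re-run the proc without the join hint and compare the plans.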
Update:
Since you mentioned that LOOP JOIN improved the speed, it may be so that the JOIN order was chosen incorrectly by the optimizer.

How To Create a SQL Index to Improve ORDER BY performance

I have some SQL similar to the following, which joins four tables and then orders the results by the "status" column of the first:
SELECT *
FROM a, b, c, d
WHERE b.aid=a.id AND c.id=a.cid AND a.did=d.id AND a.did='XXX'
ORDER BY a.status
It works. However, it's slow. I've worked out this is because of the ORDER BY clause and the lack of any index on table "a".
All four tables have the PRIMARY KEYs set on the "id" column.
So, I know I need to add an index to table a which includes the "status" column but what else does it need to include? Should "bid", "cid" and "did" be in there too?
I've tried to ask this in a general SQL sense but, if it's important, the target is SQLite for use with Gears.
Thanks in advance,
Jake (noob)
I would say it's slow because the engine is doing scans all over the place instead of seeks. Did you mean to do SELECT a.* instead? That would be faster as well; SELECT * here is equivalent to a.*, b.*, c.*, d.*.
You will probably get better results if you put a separate index on each of these columns:
a.did (so that a.did = 'XXX' is a seek instead of a scan, also helps a.did = d.id)
a.cid (for a.cid = c.id)
b.aid (for a.id = b.aid)
You could try adding Status to the first and second indexes with ASCENDING order, for additional performance - it doesn't hurt.
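In SQLite syntax those might look like this (index names are made up; status is appended to the first in case it helps the ORDER BY):
CREATE INDEX idx_a_did_status ON a (did, status);
CREATE INDEX idx_a_cid ON a (cid);
CREATE INDEX idx_b_aid ON b (aid);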
I'd be curious as to how you worked out that the problem is 'the ORDER BY clause and the lack of any index on table "a".' I find this a little suspicious because there is an index on table a, on the primary key, you later say.
Looking at the nature of the query and what I can guess about the nature of the data, I would think that this query would generally produce relatively few results compared to the size of the tables it's using, and that thus the ORDER BY would be extremely cheap. Of course, this is just a guess.
Whether an index will even help at all is dependent on the data in the table. What indices your query optimizer will use when doing a query is dependent on a lot of different factors, one of the big ones being the expected number of results produced from a lookup.
One thing that would help a lot is if you would post the output of EXPLAINing your query.
Have you tried explicit joins?
select * from a inner join b on a.id = b.aid inner join c on a.cid = c.id inner join d on a.did = d.id where a.did = 'XXX'
ORDER BY a.status
The correct use of joins (left, right, inner, outer) depends on the structure of the tables.
Hope this helps.