I have several SQL Server 2014 queries that pull back a data set along with a count based on related, but different, criteria. We do this with a subquery, but that is slowing things down immensely. It was fine until recently, when the amount of data in our database to count against grew. Here is the query:
SELECT
T.*,
ISNULL((SELECT COUNT(1)
FROM EventRegTix ERT, EventReg ER
WHERE ER.EventRegID = ERT.EventRegID
AND ERT.TicketID = T.TicketID
AND ER.OrderCompleteFlag = 1), 0) AS NumTicketsSold
FROM
Tickets T
WHERE
T.EventID = 12345
AND T.DeleteFlag = 0
AND T.ActiveFlag = 1
ORDER BY
T.OrderNumber ASC
I am pretty sure it's mostly due to the correlation from the subquery back out to the Tickets table. If I change T.TicketID to an actual ticket number (999, for example), the query is MUCH faster.
I have attempted to combine these queries into a single join, but since there are other fields in the subquery, I just cannot get it to work properly. I was playing around with
COUNT(1) OVER (PARTITION BY T.TicketID) AS NumTicketsSold
but could not figure that out either.
Any help would be much appreciated!
I would write this as:
SELECT T.*,
(SELECT COUNT(1)
FROM EventRegTix ERT JOIN
EventReg ER
ON ER.EventRegID = ERT.EventRegID
WHERE ERT.TicketID = T.TicketID AND ER.OrderCompleteFlag = 1
) AS NumTicketsSold
FROM Tickets T
WHERE T.EventID = 12345 AND
T.DeleteFlag = 0 AND
T.ActiveFlag = 1
ORDER BY T.OrderNumber ASC;
Proper, explicit, standard JOIN syntax does not improve performance; it is just the correct syntax to use. COUNT() never returns NULL, so the ISNULL() (or COALESCE()) wrapper is unnecessary.
You need indexes. The obvious ones are on Tickets(EventID, DeleteFlag, ActiveFlag, OrderNumber), EventRegTix(TicketID, EventRegID), and EventReg(EventRegID, OrderCompleteFlag).
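For illustration, those could be created roughly like this (a sketch only; the index names are made up, and the column order should be checked against the actual query patterns):
-- index names are hypothetical
CREATE INDEX IX_Tickets_Event ON Tickets (EventID, DeleteFlag, ActiveFlag, OrderNumber);
CREATE INDEX IX_EventRegTix_Ticket ON EventRegTix (TicketID, EventRegID);
CREATE INDEX IX_EventReg_Complete ON EventReg (EventRegID, OrderCompleteFlag);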
I would try it with OUTER APPLY:
SELECT T.*, T1.*
FROM Tickets T OUTER APPLY
(SELECT COUNT(1) AS NumTicketsSold
FROM EventRegTix ERT JOIN
EventReg ER
ON ER.EventRegID = ERT.EventRegID
WHERE ERT.TicketID = T.TicketID AND ER.OrderCompleteFlag = 1
) T1
WHERE T.EventID = 12345 AND
T.DeleteFlag = 0 AND
T.ActiveFlag = 1
ORDER BY T.OrderNumber ASC;
And, obviously, you need indexes on Tickets(EventID, DeleteFlag, ActiveFlag, OrderNumber), EventRegTix(TicketID, EventRegID), and EventReg(EventRegID, OrderCompleteFlag) to get the performance gain.
Fixed this - the query went from 5+ seconds to half a second or less. The issues were:
1) No indexes. I did not know all FK fields needed indexes as well. I indexed all the fields that we joined on or that appeared in the WHERE clause.
2) Used the SQL execution plan to see where the bottleneck was. It told me there was no index, hence 1) above! :)
Thanks for all your help guys, hopefully this post helps someone else.
Dennis
PS: Changed the syntax too!
I am new to SQL queries. The query below is taking almost 4 minutes to run, because of which the page remains stuck for that long. We need to optimise this query so that the user is not stuck waiting.
Please help to optimise it.
We are using an Oracle DB.
Some improvements can be made by adding 2 pre-computed columns to the aaa_soldto_xyz table:
aaa_soldto_xyz.ID1 = Substr(aaa_soldto_xyz.xyz_id, 0, 6)
aaa_soldto_xyz.ID2 = Substr(aaa_soldto_xyz.xyz_id, 1, Length(aaa_soldto_xyz.xyz_id) - 5)
Those can make better use of existing or new indexes.
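If the table can be altered, one way to add those in Oracle (11g and later) is with virtual columns and indexes on them; this is only a sketch, with the expressions copied from above and made-up index names:
-- hypothetical index names; expressions as suggested above
ALTER TABLE aaa_soldto_xyz ADD (
  id1 GENERATED ALWAYS AS (Substr(xyz_id, 0, 6)) VIRTUAL,
  id2 GENERATED ALWAYS AS (Substr(xyz_id, 1, Length(xyz_id) - 5)) VIRTUAL
);
CREATE INDEX ix_aaa_soldto_xyz_id1 ON aaa_soldto_xyz (id1);
CREATE INDEX ix_aaa_soldto_xyz_id2 ON aaa_soldto_xyz (id2);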
We cannot help you optimize the query without an explain plan.
But an obvious improvement needed in this query is: remove the abc_customer_address table from the subquery (in the SELECT clause), do a LEFT JOIN in the FROM list instead, and check the result.
You need to change the following clauses.
FROM clause:
-- the inline view cannot reference abc_customer directly, so the correlation stays in the ON clause
LEFT JOIN (SELECT ADDR.zip_code,
                  ADDR.ID,
                  ROW_NUMBER() OVER (PARTITION BY ADDR.ID ORDER BY 1) AS RN
           FROM abc_customer_address ADDR) ADDR
    ON (ADDR.id = abc_customer.billing AND ADDR.RN = 1)
SELECT clause:
CASE WHEN abc_customer_address.zip_code IS NULL THEN ADDR.zip_code
ELSE abc_customer_address.zip_code
END AS ZIP_CODE,
Cheers!!
I am running a simple SELECT query, which inner joins with 3 other tables. All the tables are big (lots of data), and it is taking around 20 seconds to run. I want to optimize it.
I tried using NOLOCK, but it did not make much difference.
SELECT RR.ReportID,
RR.RequestFormat,
RRP.SequenceNumber,
RRP.ParameterName,
       RRP.ParameterValue,
       CASE WHEN RP.ParameterLabelOvrrd IS NULL THEN P.ParameterLabel ELSE RP.ParameterLabelOvrrd END AS ParameterLabelChosen,
RRP.ParameterValueEntered
FROM ReportRequestParameters AS RRP WITH (NOLOCK)
INNER JOIN ReportRequests AS RR WITH (NOLOCK) ON RRP.RequestID = RR.RequestID
INNER JOIN ReportParameter AS RP WITH (NOLOCK) ON RP.ReportID = RR.ReportID
AND RP.SequenceNumber = RRP.SequenceNumber
INNER JOIN Parameter AS P WITH (NOLOCK) ON P.ParameterID = RP.ParameterID
WHERE RRP.RequestID = '2226765'
ORDER BY SequenceNumber;
Please advise.
This is your query:
SELECT RR.ReportID, RR.RequestFormat, RRP.SequenceNumber,
       RRP.ParameterName, RRP.ParameterValue,
COALESCE(RP.ParameterLabelOvrrd, P.ParameterLabel) as ParameterLabelChosen,
RRP.ParameterValueEntered
FROM ReportRequestParameters RRP JOIN
ReportRequests RR
ON RRP.RequestID = RR.RequestID JOIN
ReportParameter RP
ON RP.ReportID = RR.ReportID AND
RP.SequenceNumber = RRP.SequenceNumber JOIN
Parameter P
ON P.ParameterID = RP.ParameterID
WHERE RRP.RequestID = 2226765
ORDER BY RRP.SequenceNumber;
I have removed the single quotes on 2226765, assuming that the id is a number. Mixing types can impede the optimizer.
Then, I recommend an index on ReportRequestParameters(RequestID, SequenceNumber). I assume the other tables have indexes on the appropriate columns, but these are:
ReportRequests(RequestID, ReportID, SequenceNumber)
ReportParameter(ReportID, SequenceNumber, ParameterID)
Parameter(ParameterID)
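As a sketch, those could be created along these lines (index names are invented and the column lists are taken straight from the list above; adjust them to the actual schema and any existing primary keys):
-- hypothetical index names; column lists as suggested above
CREATE INDEX IX_ReportRequestParameters_RequestID ON ReportRequestParameters (RequestID, SequenceNumber);
CREATE INDEX IX_ReportRequests_RequestID ON ReportRequests (RequestID, ReportID, SequenceNumber);
CREATE INDEX IX_ReportParameter_ReportID ON ReportParameter (ReportID, SequenceNumber, ParameterID);
CREATE INDEX IX_Parameter_ParameterID ON Parameter (ParameterID);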
I strongly advise you not to use nolock, unless you know what you are doing. Aaron Bertrand has a good blog post on this subject.
I would suggest running with the execution plan turned on and seeing if SSMS can advise you on additional indexing.
Other than that, your query looks straightforward; there is nothing code-wise that is going to make it faster, other than perhaps getting rid of the CASE expression and definitely getting rid of the NOLOCK hints.
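If it helps, timing and I/O numbers can also be captured in SSMS alongside the plan with the standard statistics settings; this is just the usual pattern, not specific to your query:
-- run in the same session as the query; the numbers appear in the Messages tab
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... execute the SELECT here ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;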
I'm very new to SQL, and still learning. I'm using a reporting tool called Solarwinds Orion, and I'm honestly not sure how specific the query I have written is to the program, so if there's anything in the query that's confusing, let me know and I'll try to figure out if it's specific to the program or not.
The problem with the query I'm running is that it times out after a very long time (maybe an hour) of running. The database I'm using is huge. Unfortunately I don't really know how huge, but I've been told it's huge.
Is there anything I am doing wrong that would have a huge performance impact?
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM
((NetflowApplicationSummary
INNER JOIN Nodes ON (NetflowApplicationSummary.NodeID = Nodes.NodeID))
INNER JOIN InterfaceTraffic ON (Nodes.NodeID = InterfaceTraffic.InterfaceID))
INNER JOIN Interfaces ON (Nodes.NodeID = Interfaces.NodeID)
WHERE
( InterfaceTraffic.DateTime > (GetDate()-30) )
AND
(Nodes.WANCircuit = 1)
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
EDIT: I ran COUNT(*) on each of my tables, with the results below.
SELECT COUNT(*) FROM NetflowApplicationSummary # 50671011
SELECT COUNT(*) FROM Nodes # 898
SELECT COUNT(*) FROM InterfaceTraffic # 18000166
SELECT COUNT(*) FROM Interfaces # 3938
# Total : 68,676,013
I really have no idea if 68 million items is a huge database to be honest.
A couple of notes:
The INNER JOIN operator is associative, so get rid of those parentheses in the FROM clause and let the optimizer figure out the best join order.
You may have an implied cursor from the getdate() function being called for every row. Store the value in a local variable and compare to that.
The resulting SQL should look like this:
DECLARE @Date AS datetime = GETDATE() - 30;
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM NetflowApplicationSummary
INNER JOIN Nodes ON NetflowApplicationSummary.NodeID = Nodes.NodeID
INNER JOIN InterfaceTraffic ON Nodes.NodeID = InterfaceTraffic.InterfaceID
INNER JOIN Interfaces ON Nodes.NodeID = Interfaces.NodeID
WHERE InterfaceTraffic.DateTime > @Date
AND Nodes.WANCircuit = 1
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
Also, make sure you have an index on the InterfaceTraffic table with a leading field of DateTime. If it doesn't exist, you may have to pay the one-time penalty of creating it.
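Something along these lines, purely as a sketch (the index name is made up, and including InterfaceID is an assumption based on the join in the query):
-- hypothetical name; [DateTime] bracketed because it matches a type name
CREATE INDEX IX_InterfaceTraffic_DateTime ON InterfaceTraffic ([DateTime]) INCLUDE (InterfaceID);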
If this doesn't help, then you may need to post the execution plan where it can be inspected.
Out of interest, also perform a count() on all four tables and post that result, just so members here can make their own assessment of how big your database really is. It is amazing how many non-technical people still think a 1 or 10 GB database is huge, while I run that easily on my workstation!
I have a query joining 4 tables with a lot of conditions in the WHERE clause. The query also includes an ORDER BY clause on a numeric column. It takes 6 seconds to return, which is too long, and I need to speed it up. Surprisingly, I found that if I remove the ORDER BY clause it takes 2 seconds. Why does the ORDER BY make such a massive difference, and how can I optimize it? I am using SQL Server 2005. Many thanks.
I cannot confirm that the ORDER BY makes a big difference since I am clearing the execution plan cache. However, can you shed some light on how to speed this up a little bit? The query is as follows (for simplicity it says "SELECT *", but I am only selecting the columns I need).
SELECT *
FROM View_Product_Joined j
INNER JOIN [dbo].[OPR_PriceLookup] pl on pl.siteID = NodeSiteID and pl.skuid = j.skuid
LEFT JOIN [dbo].[OPR_InventoryRules] irp on irp.ID = pl.SkuID and irp.InventoryRulesType = 'Product'
LEFT JOIN [dbo].[OPR_InventoryRules] irs on irs.ID = pl.siteID and irs.InventoryRulesType = 'Store'
WHERE (((((SiteName = N'EcommerceSite') AND (Published = 1)) AND (DocumentCulture = N'en-GB')) AND (NodeAliasPath LIKE N'/Products/Cats/Computers/Computer-servers/%')) AND ((NodeSKUID IS NOT NULL) AND (SKUEnabled = 1) AND pl.PriceLookupID in (select TOP 1 PriceLookupID from OPR_PriceLookup pl2 where pl.skuid = pl2.skuid and (pl2.RoleID = -1 or pl2.RoleId = 13) order by pl2.RoleID desc)))
ORDER BY NodeOrder ASC
Why does the ORDER BY make such a massive difference, and how can I optimize it?
The ORDER BY needs to sort the result set, which may take a long time if it is big.
To optimize it, you may need to index the tables properly.
The index access path, however, has its drawbacks so it can even take longer.
If you have something other than equijoins in your query, or range predicates (like <, > or BETWEEN), or a GROUP BY clause, then the index used for the ORDER BY may prevent other indexes from being used.
If you post the query, I'll probably be able to tell you how to optimize it.
Update:
Rewrite the query:
SELECT *
FROM View_Product_Joined j
LEFT JOIN
[dbo].[OPR_InventoryRules] irp
ON irp.ID = j.skuid
AND irp.InventoryRulesType = 'Product'
LEFT JOIN
[dbo].[OPR_InventoryRules] irs
ON irs.ID = j.NodeSiteID
AND irs.InventoryRulesType = 'Store'
CROSS APPLY
(
SELECT TOP 1 *
FROM OPR_PriceLookup pl
WHERE pl.siteID = j.NodeSiteID
AND pl.skuid = j.skuid
AND pl.RoleID IN (-1, 13)
ORDER BY
pl.RoleID desc
) pl
WHERE SiteName = N'EcommerceSite'
AND Published = 1
AND DocumentCulture = N'en-GB'
AND NodeAliasPath LIKE N'/Products/Cats/Computers/Computer-servers/%'
AND NodeSKUID IS NOT NULL
AND SKUEnabled = 1
ORDER BY
NodeOrder ASC
The relation View_Product_Joined, as the name suggests, is probably a view.
Could you please post its definition?
If it is indexable, you may benefit from creating an index on View_Product_Joined (SiteName, Published, DocumentCulture, SKUEnabled, NodeOrder).
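Purely as a sketch of what indexing the view would involve: in SQL Server the view has to be created WITH SCHEMABINDING, cannot contain outer joins, and its first index must be unique and clustered, so something like the following, where NodeSKUID is only a guess at a column that makes the key unique:
-- only valid if the view is schema-bound and meets the indexed-view restrictions;
-- NodeSKUID is a hypothetical choice to make the key unique
CREATE UNIQUE CLUSTERED INDEX IX_View_Product_Joined
  ON View_Product_Joined (SiteName, Published, DocumentCulture, SKUEnabled, NodeOrder, NodeSKUID);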
So I decided to try out PostgreSQL instead of MySQL, but I am having some slight conversion problems. This is a query of mine that samples data from four tables and spits it all out in one result.
I am at a loss as to how to express this in PostgreSQL, and specifically in Django, but I am leaving that for another question, so bonus points if you can Django-fy it, but no worries if you just pure-SQL it.
SELECT links.id, links.created, links.url, links.title, user.username, category.title, SUM(votes.karma_delta) AS karma, SUM(IF(votes.user_id = 1, votes.karma_delta, 0)) AS user_vote
FROM links
LEFT OUTER JOIN `users` `user` ON (`links`.`user_id`=`user`.`id`)
LEFT OUTER JOIN `categories` `category` ON (`links`.`category_id`=`category`.`id`)
LEFT OUTER JOIN `votes` `votes` ON (`votes`.`link_id`=`links`.`id`)
WHERE (links.id = votes.link_id)
GROUP BY votes.link_id
ORDER BY (SUM(votes.karma_delta) - 1) / POW((TIMESTAMPDIFF(HOUR, links.created, NOW()) + 2), 1.5) DESC
LIMIT 20
The IF in the SELECT was where my first troubles began. It seems it's an IF true/false THEN stuff ELSE other stuff END IF, yet I can't get the syntax right. I tried to use Navicat's SQL builder, but it constantly wanted me to place everything I had selected into the GROUP BY, and I think that is all kinds of wrong.
What I am looking for, in summary, is to make this MySQL query work in PostgreSQL. Thank you.
Current Progress
Just want to thank everybody for their help. This is what I have so far:
SELECT links_link.id, links_link.created, links_link.url, links_link.title, links_category.title, SUM(links_vote.karma_delta) AS karma, SUM(CASE WHEN links_vote.user_id = 1 THEN links_vote.karma_delta ELSE 0 END) AS user_vote
FROM links_link
LEFT OUTER JOIN auth_user ON (links_link.user_id = auth_user.id)
LEFT OUTER JOIN links_category ON (links_link.category_id = links_category.id)
LEFT OUTER JOIN links_vote ON (links_vote.link_id = links_link.id)
WHERE (links_link.id = links_vote.link_id)
GROUP BY links_link.id, links_link.created, links_link.url, links_link.title, links_category.title
ORDER BY links_link.created DESC
LIMIT 20
I had to make some table name changes and I am still working on my ORDER BY so till then we're just gonna cop out. Thanks again!
Have a look at this link: GROUP BY
When GROUP BY is present, it is not valid for the SELECT list expressions to refer to ungrouped columns except within aggregate functions, since there would be more than one possible value to return for an ungrouped column.
You need to include all the select columns in the group by that are not part of the aggregate functions.
A few things:
Drop the backticks
Use a CASE expression instead of IF(): CASE WHEN votes.user_id = 1 THEN votes.karma_delta ELSE 0 END
Change your TIMESTAMPDIFF to DATE_TRUNC('hour', now()) - DATE_TRUNC('hour', links.created) (you will then need to count the number of hours in the resulting interval; it would be much easier to compare timestamps directly; see the sketch after this list)
Fix your GROUP BY and ORDER BY
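To illustrate the last two points, here is a sketch of how the hour difference and the original ranking could look in PostgreSQL, reusing the question's table and column names (not tested against the real schema):
-- age in hours as a plain number, via the epoch (seconds) of the interval
EXTRACT(EPOCH FROM (now() - links.created)) / 3600
-- so the MySQL ORDER BY could become roughly:
ORDER BY (SUM(votes.karma_delta) - 1)
         / POWER(EXTRACT(EPOCH FROM (now() - links.created)) / 3600 + 2, 1.5) DESC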
Try to replace the IF with a CASE:
SUM(CASE WHEN votes.user_id = 1 THEN votes.karma_delta ELSE 0 END)
You also have to explicitly list in the GROUP BY clause every selected column or calculated column that is not aggregated.