We have a SQL Server 2012 database for our live project, with multiple tables and records. We are facing a problem with two nearly identical SQL queries on one table: one executes quickly, while the other is tremendously slow.
I don't know why this is happening. Could someone help me solve this problem?
Our two queries are given below.
First query (takes a long time to execute):
SELECT * FROM (SELECT TOP 10 TrackerResponse.EventName,TrackerResponse.ReceiveTime,ISNull(TrackerResponse.InputStatus,0) AS InputStatus,
TrackerResponse.Latitude,TrackerResponse.Longitude,TrackerResponse.Speed,
TrackerResponse.TrackerID,TrackerResponse.OdoMeter,TrackerResponse.Direction,
UserCar.CarNo FROM TrackerResponse
INNER JOIN UserCar ON (UserCar.TrackerID = TrackerResponse.TrackerID)
WHERE (TrackerResponse.EventName IS NOT NULL AND TrackerResponse.EventName<>'')
AND TrackerResponse.TrackerID = 112 Order By ID DESC) AS Events
Second query (takes much less time to execute):
SELECT * FROM (SELECT TOP 10 TrackerResponse.EventName,TrackerResponse.ReceiveTime,ISNull(TrackerResponse.InputStatus,0) AS InputStatus,
TrackerResponse.Latitude,TrackerResponse.Longitude,TrackerResponse.Speed,
TrackerResponse.TrackerID,TrackerResponse.OdoMeter,TrackerResponse.Direction,
UserCar.CarNo FROM TrackerResponse
INNER JOIN UserCar ON (UserCar.TrackerID = TrackerResponse.TrackerID)
WHERE (TrackerResponse.EventName IS NOT NULL AND TrackerResponse.EventName<>'')
AND TrackerResponse.TrackerID = 56 Order By ID DESC) AS Events
Both queries are identical except for the TrackerID value, so my best guess is that the slow one is a victim of parameter sniffing.
Try adding OPTION (RECOMPILE) as below to confirm whether this is the cause; if it is, follow an article on parameter sniffing for a longer-term resolution:
SELECT * FROM (SELECT TOP 10 TrackerResponse.EventName,TrackerResponse.ReceiveTime,ISNull(TrackerResponse.InputStatus,0) AS InputStatus,
TrackerResponse.Latitude,TrackerResponse.Longitude,TrackerResponse.Speed,
TrackerResponse.TrackerID,TrackerResponse.OdoMeter,TrackerResponse.Direction,
UserCar.CarNo FROM TrackerResponse
INNER JOIN UserCar ON (UserCar.TrackerID = TrackerResponse.TrackerID)
WHERE (TrackerResponse.EventName IS NOT NULL AND TrackerResponse.EventName<>'')
AND TrackerResponse.TrackerID = 56 Order By ID DESC) AS Events
OPTION (RECOMPILE)
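If the recompile confirms parameter sniffing, a narrow index keyed for this access pattern usually makes the plan stable for every TrackerID value. This is only a sketch: the index name is made up, and it assumes ID is a column of TrackerResponse (it appears in the ORDER BY).
-- Illustrative index only; name and column list are assumptions based on the query above
CREATE NONCLUSTERED INDEX IX_TrackerResponse_TrackerID_ID
ON TrackerResponse (TrackerID, ID DESC)
INCLUDE (EventName, ReceiveTime, InputStatus, Latitude, Longitude,
         Speed, OdoMeter, Direction);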
I have a SQL query that first selects, from a relatively short table, a number that is then used for another select against a very large table. This takes more than 30 minutes for a single select, and I need to optimize it because I have to run 300 selects like this one, each with a different SWNAME. I would appreciate any hints and tips you can give me. Thank you!
SELECT SWOBJECTID
FROM MBR_INST_PRODUCTS
WHERE SWPRODRELEASEID IN
(SELECT SWPRODRELEASEID
FROM ORO_PPY_OPTIONS
WHERE SWNAME LIKE 'Nov Flexibil Offer:Net Unlimited for 1MON')
AND rownum <2;
How about a simple join?
SELECT m.swobjectid
FROM mbr_inst_products m
JOIN oro_ppy_options o ON o.swprodreleaseid = m.swprodreleaseid
WHERE o.swname = 'Nov Flexibil Offer:Net Unlimited for 1MON'
AND ROWNUM < 2;
Make sure the swprodreleaseid columns are indexed.
I would write this query as:
SELECT ip.SWOBJECTID
FROM MBR_INST_PRODUCTS ip JOIN
ORO_PPY_OPTIONS po
ON po.SWPRODRELEASEID = ip.SWPRODRELEASEID
WHERE po.SWNAME = 'Nov Flexibil Offer:Net Unlimited for 1MON' AND
rownum = 1;
For this query, you want indexes on ORO_PPY_OPTIONS(SWNAME, SWPRODRELEASEID) and MBR_INST_PRODUCTS(SWPRODRELEASEID, SWOBJECTID).
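As a sketch, those recommendations translate to something like the following (index names are illustrative):
CREATE INDEX oro_ppy_options_swname_rel_idx ON ORO_PPY_OPTIONS (SWNAME, SWPRODRELEASEID);
CREATE INDEX mbr_inst_products_rel_obj_idx ON MBR_INST_PRODUCTS (SWPRODRELEASEID, SWOBJECTID);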
The whole query below runs incredibly slowly.
The subquery [aliased Stage_1] takes only 1.37 minutes and returns 9514 records; however, the whole query takes over 20 minutes and returns 2606 records.
I could use a #temp table to hold the subquery results, which would improve performance, but I would prefer not to.
An overview of the query: WeeklySpace is inner joined to Spaceblock_Name_to_PG on SpaceblockName_SID, which cuts down the WeeklySpace results and adds PG_Code to them. WeeklySpace is then full outer joined to Sales_PG_Wk across three fields. The WHERE clause focuses the results and may be changed. The results from the subquery are then summed. The final summing cannot be done in the subquery because of the GROUP BY and SUM ... OVER used there.
I believe the issue is that the subquery is recalculated repeatedly during the GROUP BY in the final summing. The field SpaceblockName_SID also appears to be involved in causing the issue, as without it the run time with a GROUP BY in the subquery isn't affected.
I have read through loads of suggestions and tried them all to resolve the issue.
These include:
Adding TOP 2147483647 with Order by to force intermediate
materialization, both in the subquery and using a CTE.
Adding a join after stage_1.
Casting SpaceblockName_SID from an int to a varchar and back again.
The execution plans for the subquery and the whole query (each cut into two parts, shown below the code) appear similar. The cost is concentrated around the full outer join (hash match), which I expected.
The query is running on SQL Server 2005.
Any help greatly appreciated!
select
Cost_centre
, Fin_week
, SpaceblockName_SID
, sum(Propor_rep_SRV) as Total_SpaceblockName_SID_SRV
from
(
select
coalesce(space_side.fin_week , sales_side.fin_week) as Fin_week
,coalesce(space_side.cost_centre , sales_side.cost_Centre) as Cost_centre
,space_side.SpaceblockName_SID
,case
when space_side.SpaceblockName_SID is null
then sales_side.SalesExVAT
else sum(space_side.TLM)
/nullif(sum (sum(space_side.TLM) ) over (partition by coalesce(space_side.fin_week , sales_side.fin_week)
, coalesce(space_side.cost_centre , sales_side.cost_Centre)
, coalesce( Spaceblock_Name_to_PG.PG_Code, sales_side.PG_Code)) ,0)*sales_side.SalesExVAT
end as Propor_rep_SRV
from
WeeklySpace as space_side
INNER JOIN
Spaceblock_Name_to_PG
ON space_side.SpaceblockName_SID = Spaceblock_Name_to_PG.SpaceblockName_SID
and Spaceblock_Name_to_PG.PG_Code < 10000
full outer join
sales_pg_wk as sales_side
on space_side.fin_week = sales_side.fin_week
and space_side.Cost_Centre = sales_side.Cost_Centre
and Spaceblock_Name_to_PG.PG_code = sales_side.pg_code
where
coalesce(space_side.fin_week, sales_side.fin_week) between 201538 and 201550
and
coalesce(space_side.cost_centre, sales_side.cost_Centre) in (3, 2800)
group by
coalesce(space_side.fin_week, sales_side.fin_week)
,coalesce(space_side.cost_centre, sales_side.cost_Centre)
,coalesce( Spaceblock_Name_to_PG.PG_Code, sales_side.PG_Code)
,sales_side.SalesExVAT
,space_side.SpaceblockName_SID
) as stage_1
group by
Cost_centre
, Fin_week
, SpaceblockName_SID
Execution plan left hand side
Execution plan right hand side
You didn't mention whether indexes exist on the columns used in your query. If not, create them and check the query's performance again.
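For example, something along these lines might help. These indexes are only guesses based on the join and filter columns visible in the query, so adjust them to the real schema and any existing keys:
CREATE NONCLUSTERED INDEX IX_WeeklySpace_SID_Week_CC
    ON WeeklySpace (SpaceblockName_SID, fin_week, Cost_Centre) INCLUDE (TLM);
CREATE NONCLUSTERED INDEX IX_SpaceblockNameToPG_SID
    ON Spaceblock_Name_to_PG (SpaceblockName_SID, PG_Code);
CREATE NONCLUSTERED INDEX IX_SalesPgWk_Week_CC_PG
    ON sales_pg_wk (fin_week, Cost_Centre, pg_code) INCLUDE (SalesExVAT);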
Looking at your logic, I think you could split this into two queries with a UNION:
one with Spaceblock_Name_to_PG.PG_Code < 10000 and the other with Spaceblock_Name_to_PG.PG_Code >= 10000.
Also consider the change below; the query may be doing a bunch of join work on rows that you are going to throw out anyway:
full outer join sales_pg_wk as sales_side
on space_side.fin_week = sales_side.fin_week
and space_side.Cost_Centre = sales_side.Cost_Centre
and Spaceblock_Name_to_PG.PG_code = sales_side.pg_code
and space_side.fin_week between 201538 and 201550
and sales_side.fin_week between 201538 and 201550
and space_side.cost_centre in (3, 2800)
and sales_side.cost_Centre in (3, 2800)
I'm having some trouble with this query (a view).
I'm using SQL Server 2012.
With few records the query is fast, but after I add only 1000 records (Links) it becomes very slow (over 23 seconds).
I have to take a random Link for every host in the database, so I used ROW_NUMBER with PARTITION BY.
The Links table has 1000 records and the Hosts table has 10 records, yet the query is still slow.
Any advice for increasing performance?
UPDATE: for each Host I need to get a random Link (could be 1, 2 or 3 links, depending on Host.numLinksPerWork).
;WITH MyCte As
(
SELECT DISTINCT link.namUrl, host.uidHost, host.namHost AS Hostname,
[user].uidUser, usrProfile.UserName AS Username, host.numLinksPerWork,
referer.namUrl AS refererLink,host.Min, host.Max,ROW_NUMBER()
OVER (PARTITION BY host.numLinksPerWork, host.uidHost ORDER BY newid()) AS Number
FROM Links link JOIN
Users[user] ON [user].uidUser = link.codUser JOIN
Profile usrProfile ON usrProfile.UserId = [user].uidUser JOIN
Hosts host ON host.uidHost = link.codHost JOIN
Referers referer ON referer.codHost = host.uidHost JOIN
Referers referer2 ON referer.codUser = [user].uidUser
WHERE [user].flgBanned = 0
)
SELECT MyCte.uidHost, MyCte.uidUser, MyCte.namUrl, MyCte.refererLink,
MyCte.Hostname, MyCte.Username, MyCte.Min, MyCte.Max FROM MyCte
WHERE MyCte.Number <= MyCte.numLinksPerWork
Similar Diagram
I fixed the problem. The double join on the same table was wrong, of course, and it slowed down the query:
Referers referer ON referer.codHost = host.uidHost JOIN
Referers referer2 ON referer.codUser = [user].uidUser
After removing the useless join(s) it's fast!
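For reference, one plausible shape of the corrected CTE; this assumes the single Referers join on codHost is the one that was kept, which the post doesn't actually say:
;WITH MyCte AS
(
    SELECT DISTINCT link.namUrl, host.uidHost, host.namHost AS Hostname,
        [user].uidUser, usrProfile.UserName AS Username, host.numLinksPerWork,
        referer.namUrl AS refererLink, host.Min, host.Max,
        ROW_NUMBER() OVER (PARTITION BY host.numLinksPerWork, host.uidHost ORDER BY NEWID()) AS Number
    FROM Links link
    JOIN Users [user] ON [user].uidUser = link.codUser
    JOIN Profile usrProfile ON usrProfile.UserId = [user].uidUser
    JOIN Hosts host ON host.uidHost = link.codHost
    JOIN Referers referer ON referer.codHost = host.uidHost
    WHERE [user].flgBanned = 0
)
SELECT MyCte.uidHost, MyCte.uidUser, MyCte.namUrl, MyCte.refererLink,
       MyCte.Hostname, MyCte.Username, MyCte.Min, MyCte.Max
FROM MyCte
WHERE MyCte.Number <= MyCte.numLinksPerWork;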
I run this query on SQL Server 2008 R2; it takes about 6 seconds and returns around 8000 records. OrderItemView is a view and DocumentStationHistory is a table.
SELECT o.Number, dsh.DateSend AS Expr1
FROM OrderItemView AS o INNER JOIN
DocumentStationHistory AS dsh ON dsh.DocumentStationHistoryId =
(SELECT TOP (1) DocumentStationHistoryId
FROM DocumentStationHistory AS dsh2
WHERE (o.DocumentStationId = ToStationId) AND
(DocumentId = o.id)
ORDER BY DateSend DESC)
WHERE (o.DocumentStationId = 10)
But when I run the same query with an o.DocumentStationId = 8 WHERE clause, it returns around 200 records yet takes about 90 seconds!
Any idea where the problem is?
I suspect an index is the issue, but not on o.DocumentStationId itself; rather on the fields that are joined using o.DocumentStationId.
Check the execution plan to see how your inner query behaves; it probably needs some performance tuning.
Also try adding an index on ToStationId and DateSend, and see if you can rewrite the inner query.
Other than that, I don't have any suggestions. Please also post your execution plan.
I rebuilt the index on o.DocumentStationId and the problem was solved.
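For anyone trying the same, a rebuild looks roughly like this; the index and table names here are placeholders, since the real index sits on whichever base table backs OrderItemView:
ALTER INDEX IX_DocumentStationId ON dbo.OrderItem REBUILD;  -- placeholder index and table names
UPDATE STATISTICS dbo.OrderItem;                            -- refreshing statistics can have a similar effect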
Try the following query. Also check whether there are indexes on DocumentStationId, ToStationId and DocumentId; if not, create them.
SELECT o.Number, dsh.DateSend AS Expr1
FROM OrderItemView AS o
OUTER APPLY
(SELECT TOP (1) DateSend
FROM DocumentStationHistory
WHERE (o.DocumentStationId = ToStationId) AND (DocumentId = o.id)
ORDER BY DateSend DESC) AS dsh
WHERE (o.DocumentStationId = 10)
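To go with that suggestion, a supporting index might look like the sketch below; the name is made up, and it assumes DocumentId, ToStationId and DateSend are all columns of DocumentStationHistory, as the query implies:
CREATE NONCLUSTERED INDEX IX_DocumentStationHistory_Doc_Station_Date
ON DocumentStationHistory (DocumentId, ToStationId, DateSend DESC);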
I'm trying to use the aggregation features of the Django ORM to run a query against an MSSQL 2008 R2 database, but I keep getting a timeout error. The query (generated by Django) which fails is below. I've tried running it directly in SQL Server Management Studio and it works, but it takes 3.5 minutes.
It does look like it's aggregating over a bunch of fields it doesn't need to, but I wouldn't have thought that should cause it to take that long. The database isn't that big either: auth_user has 9 records, tickets_ticket has 1210, and tickets_ticket_watchers has 1876. Is there something I'm missing?
SELECT
[auth_user].[id],
[auth_user].[password],
[auth_user].[last_login],
[auth_user].[is_superuser],
[auth_user].[username],
[auth_user].[first_name],
[auth_user].[last_name],
[auth_user].[email],
[auth_user].[is_staff],
[auth_user].[is_active],
[auth_user].[date_joined],
COUNT([tickets_ticket].[id]) AS [tickets_captured__count],
COUNT(T3.[id]) AS [assigned_tickets__count],
COUNT([tickets_ticket_watchers].[ticket_id]) AS [tickets_watched__count]
FROM
[auth_user]
LEFT OUTER JOIN [tickets_ticket] ON ([auth_user].[id] = [tickets_ticket].[capturer_id])
LEFT OUTER JOIN [tickets_ticket] T3 ON ([auth_user].[id] = T3.[responsible_id])
LEFT OUTER JOIN [tickets_ticket_watchers] ON ([auth_user].[id] = [tickets_ticket_watchers].[user_id])
GROUP BY
[auth_user].[id],
[auth_user].[password],
[auth_user].[last_login],
[auth_user].[is_superuser],
[auth_user].[username],
[auth_user].[first_name],
[auth_user].[last_name],
[auth_user].[email],
[auth_user].[is_staff],
[auth_user].[is_active],
[auth_user].[date_joined]
HAVING
(COUNT([tickets_ticket].[id]) > 0 OR COUNT(T3.[id]) > 0 )
EDIT:
Here are the relevant indexes (excluding those not used in the query):
auth_user.id (PK)
auth_user.username (Unique)
tickets_ticket.id (PK)
tickets_ticket.capturer_id
tickets_ticket.responsible_id
tickets_ticket_watchers.id (PK)
tickets_ticket_watchers.user_id
tickets_ticket_watchers.ticket_id
EDIT 2:
After a bit of experimentation, I've found that the following query is the smallest that results in the slow execution:
SELECT
COUNT([tickets_ticket].[id]) AS [tickets_captured__count],
COUNT(T3.[id]) AS [assigned_tickets__count],
COUNT([tickets_ticket_watchers].[ticket_id]) AS [tickets_watched__count]
FROM
[auth_user]
LEFT OUTER JOIN [tickets_ticket] ON ([auth_user].[id] = [tickets_ticket].[capturer_id])
LEFT OUTER JOIN [tickets_ticket] T3 ON ([auth_user].[id] = T3.[responsible_id])
LEFT OUTER JOIN [tickets_ticket_watchers] ON ([auth_user].[id] = [tickets_ticket_watchers].[user_id])
GROUP BY
[auth_user].[id]
The weird thing is that if I comment out any two lines in the above, it runs in less than 1 second, but it doesn't seem to matter which lines I remove (although obviously I can't remove a join without also removing the relevant SELECT line).
EDIT 3:
The python code which generated this is:
User.objects.annotate(
Count('tickets_captured'),
Count('assigned_tickets'),
Count('tickets_watched')
)
A look at the execution plan shows that SQL Server is first doing a cross join on all the tables, resulting in about 280 million rows and 6 GB of data. I assume that this is where the problem lies, but why is it happening?
SQL Server is doing exactly what it was asked to do. Unfortunately, Django is not generating the right query for what you want. It looks like you need to count distinct instead of just count; see: Django annotate() multiple times causes wrong answers.
As for why the query works that way: the query joins the four tables together. So if a user has 2 captured tickets, 3 assigned tickets, and 4 watched tickets, the join will return 2*3*4 = 24 rows, one for each combination of tickets. The distinct part removes those duplicates before counting.
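In Django that means passing distinct=True to each Count() annotation; in SQL the fix amounts to roughly the sketch below (same tables and aliases as the generated query above):
SELECT
    [auth_user].[id],
    COUNT(DISTINCT [tickets_ticket].[id]) AS [tickets_captured__count],
    COUNT(DISTINCT T3.[id]) AS [assigned_tickets__count],
    COUNT(DISTINCT [tickets_ticket_watchers].[ticket_id]) AS [tickets_watched__count]
FROM [auth_user]
LEFT OUTER JOIN [tickets_ticket] ON [auth_user].[id] = [tickets_ticket].[capturer_id]
LEFT OUTER JOIN [tickets_ticket] T3 ON [auth_user].[id] = T3.[responsible_id]
LEFT OUTER JOIN [tickets_ticket_watchers] ON [auth_user].[id] = [tickets_ticket_watchers].[user_id]
GROUP BY [auth_user].[id]
HAVING COUNT(DISTINCT [tickets_ticket].[id]) > 0 OR COUNT(DISTINCT T3.[id]) > 0;
Note that this fixes the counts but the join still multiplies rows, so the subquery-per-count approach in the next answer is usually faster.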
what about this?
SELECT auth_user.*,
C1.tickets_captured__count,
C2.assigned_tickets__count,
C3.tickets_watched__count
FROM
auth_user
LEFT JOIN
( SELECT capturer_id, COUNT(*) AS tickets_captured__count
FROM tickets_ticket GROUP BY capturer_id ) AS C1 ON auth_user.id = C1.capturer_id
LEFT JOIN
( SELECT responsible_id, COUNT(*) AS assigned_tickets__count
FROM tickets_ticket GROUP BY responsible_id ) AS C2 ON auth_user.id = C2.responsible_id
LEFT JOIN
( SELECT user_id, COUNT(*) AS tickets_watched__count
FROM tickets_ticket_watchers GROUP BY user_id ) AS C3 ON auth_user.id = C3.user_id
WHERE C1.tickets_captured__count > 0 OR C2.assigned_tickets__count > 0
--WHERE C1.tickets_captured__count is not null OR C2.assigned_tickets__count is not null -- also works (I think with better performance)