I already had a query which showed the wrong details (cg_Tracking.Distance), which I have now tried to change from cg_tracking.distance to the rounded strecke value from cg_01_Ziele (see the second query below), but it doesn't seem to load.
It was like this before and showed results very fast:
SELECT DISTINCT cg_tracking.f_nr,
cg_tracking.date_cg,
cg_tracking.manummer,
cg_tracking.distance,
cg_tracking.longitude,
cg_tracking.latitude,
cg_tracking.datetime_cg,
cg_tracking.speed
FROM cg_tracking
WHERE f_nr = '317'
GROUP BY cg_tracking.f_nr,
cg_tracking.date_cg,
cg_tracking.manummer,
cg_tracking.distance,
cg_tracking.longitude,
cg_tracking.latitude,
cg_tracking.datetime_cg,
cg_tracking.speed
ORDER BY cg_tracking.date_cg ASC
Now I've changed it to this, and it takes really long to load and doesn't even give me the right details.
SELECT DISTINCT cg_tracking.f_nr,
cg_tracking.date_cg,
cg_tracking.manummer,
Round(( cg_01_ziele.strecke / 1000 ), 1) AS Strecke,
cg_tracking.longitude,
cg_tracking.latitude,
cg_tracking.datetime_cg,
cg_tracking.speed
FROM cg_tracking, cg_01_Ziele
JOIN cg_zielfahrtstatuslog
ON cg_ZielfahrtstatusLog.ZielID = cg_01_Ziele.ZielID
JOIN cg_02_kunden
ON cg_02_kunden.zielid = cg_01_ziele.zielid
WHERE cg_tracking.F_NR = '317'
AND NOT( cg_zielfahrtstatuslog.status = 7
AND cg_zielfahrtstatuslog.interruption = 0)
AND cg_01_Ziele.DATETIME_CG between '2020-06-02T00:00:00'
AND '2020-06-02T23:59:59'
GROUP BY cg_tracking.f_nr,
cg_tracking.date_cg,
cg_tracking.manummer,
Round(( cg_01_ziele.strecke / 1000 ), 1),
cg_tracking.longitude,
cg_tracking.latitude,
cg_tracking.datetime_cg,
cg_tracking.speed
ORDER BY cg_tracking.date_cg ASC
It always gives me other F_NR and Datetime_cg values, even though I wrote WHERE F_NR = '317' and limited it to the dates I wanted.
I already deleted the AND NOT conditions, and it still takes a lot of time and doesn't give me the right answer.
My assumption is that it's because of the joins and the different tables, but I don't know how to solve it.
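For reference, the comma join between cg_tracking and cg_01_Ziele has no join condition, so every tracking row gets paired with every Ziel row that survives the other filters. Below is a minimal sketch of making that join explicit; the join key used here (F_NR plus the calendar date) is purely hypothetical, since the real relationship between the two tables isn't shown, and the other joins and status filters from the question would be added back in the same explicit style.
SELECT DISTINCT t.f_nr, t.date_cg, t.manummer,
                Round((z.strecke / 1000), 1) AS Strecke,
                t.longitude, t.latitude, t.datetime_cg, t.speed
FROM cg_tracking AS t
JOIN cg_01_Ziele AS z
  ON z.F_NR = t.F_NR           -- hypothetical join key, not shown in the question
 AND z.DATE_CG = t.DATE_CG     -- hypothetical join key, not shown in the question
WHERE t.F_NR = '317'
  AND z.DATETIME_CG BETWEEN '2020-06-02T00:00:00' AND '2020-06-02T23:59:59'
ORDER BY t.date_cg ASC;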
EDIT:
I'll try to explain in more detail this time, as best as I can. I tried to make the query a bit simpler, thinking it would make it easier to understand, but that might have been a bad move.
I'm trying to get the PK_Queue and FK_Queue_Milestone from the first row of my Queue table, ordered by PriorityScore DESC and TimeAdded ASC.
I only want the first row, but I was advised not to use TOP (1) because it would result in another SELECT being made on top of my original SELECT.
This is the query that I have:
SELECT
@Local_PK_Queue = Q.PK_Queue,
@Local_PK_Milestone_Validate = Q.FK_Queue_Milestone
FROM dbo.Queue AS Q
INNER JOIN #Local_PKHolderTable AS P
ON Q.FK_Queue_Process = P.PK_Process
AND Q.FK_Queue_Milestone = P.PK_Milestone
AND Q.FK_Queue_QueueType = P.PK_QueueType
WHERE Q.FK_Queue_Milestone = P.PK_Milestone
AND Q.FK_Queue_Process = P.PK_Process
AND Q.Tags LIKE '%' + @Input_Tags + '%'
AND ((FK_Queue_State = 5 AND TimeDeferred < GETUTCDATE()) OR (FK_Queue_State = 1))
AND Q.FK_Queue_Robot IS NULL
AND Q.FK_Queue_QueueType = P.PK_QueueType
ORDER BY
Q.PriorityScore DESC,
Q.TimeAdded
When I try to run the query, it doesn't seem to be ordering properly, because it always gets the last row of my table.
So I did some research and stumbled upon this question here.
It seems to be the same problem that I am experiencing, but using MySQL instead of SQL Server.
TL;DR: I want to ORDER BY PriorityScore DESC and TimeAdded, but it is not working properly.
Well, you would write this as:
SELECT @var = PK_Test, @var2 = SUM(PriorityScore)
FROM Queue
GROUP BY PK_Test
ORDER BY SUM(PriorityScore);
This is very strange, though, because the GROUP BY presumably returns multiple rows and you presumably want only one. I might suspect that you really want to assign the variables to the highest priority scores:
SELECT TOP (1) @var = PK_Test, @var2 = SUM(PriorityScore)
FROM Queue
GROUP BY PK_Test
ORDER BY SUM(PriorityScore) DESC;
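Applied back to the Queue query from the question, a minimal sketch (reusing its table, temp table, and variable names, and folding the join conditions that were repeated in the WHERE clause into the ON clause) would be:
SELECT TOP (1)
       @Local_PK_Queue = Q.PK_Queue,
       @Local_PK_Milestone_Validate = Q.FK_Queue_Milestone
FROM dbo.Queue AS Q
INNER JOIN #Local_PKHolderTable AS P
        ON Q.FK_Queue_Process = P.PK_Process
       AND Q.FK_Queue_Milestone = P.PK_Milestone
       AND Q.FK_Queue_QueueType = P.PK_QueueType
WHERE Q.Tags LIKE '%' + @Input_Tags + '%'
  AND ((Q.FK_Queue_State = 5 AND Q.TimeDeferred < GETUTCDATE()) OR Q.FK_Queue_State = 1)
  AND Q.FK_Queue_Robot IS NULL
ORDER BY Q.PriorityScore DESC,
         Q.TimeAdded ASC;
With TOP (1) the ORDER BY decides which single row feeds the assignment; without it, the variables are assigned once per qualifying row and you simply keep whichever row happened to be processed last.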
I have several SQL Server 2014 queries that pull back a data set where we need to get a count on related, but different, criteria along with that data. We do this with a subquery, but that is slowing it down immensely. It was fine until now, but we are getting more data in our database to count on. Here is the query:
SELECT
T.*,
ISNULL((SELECT COUNT(1)
FROM EventRegTix ERT, EventReg ER
WHERE ER.EventRegID = ERT.EventRegID
AND ERT.TicketID = T.TicketID
AND ER.OrderCompleteFlag = 1), 0) AS NumTicketsSold
FROM
Tickets T
WHERE
T.EventID = 12345
AND T.DeleteFlag = 0
AND T.ActiveFlag = 1
ORDER BY
T.OrderNumber ASC
I am pretty sure it's mostly due to the correlation back out of the subquery to the Tickets table. If I change T.TicketID to an actual ticket number (999, for example), the query is MUCH faster.
I have attempted to join these queries together into one, but since there are other fields in the subquery, I just cannot get it to work properly. I was playing around with
COUNT(1) OVER (PARTITION BY T.TicketID) AS NumTicketsSold
but could not figure that out either.
Any help would be much appreciated!
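(For reference, that windowed idea could be completed along the lines of the sketch below; the LEFT JOINs and the placement of OrderCompleteFlag in the ON clause are assumptions, not code from the question.)
SELECT DISTINCT
       T.TicketID,   -- add the other Tickets columns to the SELECT/DISTINCT list as needed
       COUNT(ER.EventRegID) OVER (PARTITION BY T.TicketID) AS NumTicketsSold
FROM Tickets T
LEFT JOIN EventRegTix ERT ON ERT.TicketID = T.TicketID
LEFT JOIN EventReg ER ON ER.EventRegID = ERT.EventRegID
                     AND ER.OrderCompleteFlag = 1   -- kept in the ON clause so unsold tickets still appear with a count of 0
WHERE T.EventID = 12345
  AND T.DeleteFlag = 0
  AND T.ActiveFlag = 1;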
I would write this as:
SELECT T.*,
(SELECT COUNT(1)
FROM EventRegTix ERT JOIN
EventReg ER
ON ER.EventRegID = ERT.EventRegID
WHERE ERT.TicketID = T.TicketID AND ER.OrderCompleteFlag = 1
) AS NumTicketsSold
FROM Tickets T
WHERE T.EventID = 12345 AND
T.DeleteFlag = 0 AND
T.ActiveFlag = 1
ORDER BY T.OrderNumber ASC;
Proper, explicit, standard JOIN syntax does not improve performance; it is just the correct syntax. COUNT(*) cannot return NULL values, so COALESCE() or a similar function is unnecessary.
You need indexes. The obvious ones are on Tickets(EventID, DeleteFlag, ActiveFlag, OrderNumber), EventRegTix(TicketID, EventRegID), and EventReg(EventRegID, OrderCompleteFlag).
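As a sketch, those indexes could be created like this (the index names are just illustrative):
CREATE INDEX IX_Tickets_EventID_Flags ON Tickets (EventID, DeleteFlag, ActiveFlag, OrderNumber);
CREATE INDEX IX_EventRegTix_TicketID ON EventRegTix (TicketID, EventRegID);
CREATE INDEX IX_EventReg_EventRegID ON EventReg (EventRegID, OrderCompleteFlag);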
I would try with OUTER APPLY:
SELECT T.*, T1.*
FROM Tickets T OUTER APPLY
(SELECT COUNT(1) AS NumTicketsSold
FROM EventRegTix ERT JOIN
EventReg ER
ON ER.EventRegID = ERT.EventRegID
WHERE ERT.TicketID = T.TicketID AND ER.OrderCompleteFlag = 1
) T1
WHERE T.EventID = 12345 AND
T.DeleteFlag = 0 AND
T.ActiveFlag = 1
ORDER BY T.OrderNumber ASC;
And obviously you need the indexes Tickets(EventID, DeleteFlag, ActiveFlag, OrderNumber), EventRegTix(TicketID, EventRegID), and EventReg(EventRegID, OrderCompleteFlag) to get the performance.
Fixed this - query went from 5+ seconds to 1/2 second or less. Issues were:
1) No indexes. I did not know all FK fields needed indexes as well. I indexed all the fields that we join on or that appear in the WHERE clause.
2) Used the SQL execution plan to see where the bottleneck was. It told me an index was missing, hence 1) above! :)
Thanks for all your help guys, hopefully this post helps someone else.
Dennis
PS: Changed the syntax too!
I've been working on this problem, researching what I could be doing wrong but I can't seem to find an answer or fault in the code that I've written. I'm currently extracting data from a MS SQL Server database, with a WHERE clause successfully filtering the results to what I want. I get roughly 4 rows per employee, and want to add together a value column. The moment I add the GROUP BY clause against the employee ID, and put a SUM against the value, I'm getting a number that is completely wrong. I suspect the SQL code is ignoring my WHERE clause.
Below is a small selection of data:
hr_empl_code hr_doll_paid
1 20.5
1 51.25
1 102.49
1 560
I expect that a GROUP BY and SUM clause would give me the value of 734.24. The value I'm given is 211461.12. Through troubleshooting, I added a COUNT(*) column to my query to work out how many lines it's running against, and it gives a result of 1152, which further reinforces my belief that it's ignoring my WHERE clause.
My SQL code is as below. Most of it has been generated by the front-end application that I'm running it from, so there is some additional code in there that I believe does assist the query.
SELECT DISTINCT
T000.hr_empl_code,
SUM(T175.hr_doll_paid)
FROM
hrtempnm T000,
qmvempms T001,
hrtmspay T166,
hrtpaytp T175,
hrtptype T177
WHERE 1 = 1
AND T000.hr_empl_code = T001.hr_empl_code
AND T001.hr_empl_code = T166.hr_empl_code
AND T001.hr_empl_code = T175.hr_empl_code
AND T001.hr_ploy_ment = T166.hr_ploy_ment
AND T001.hr_ploy_ment = T175.hr_ploy_ment
AND T175.hr_paym_code = T177.hr_paym_code
AND T166.hr_pyrl_code = 'f' AND T166.hr_paid_dati = 20180404
AND (T175.hr_paym_type = 'd' OR T175.hr_paym_type = 't')
GROUP BY T000.hr_empl_code
ORDER BY hr_empl_code
I'm really lost as to where it could be going wrong. I have stripped out the additional WHERE conditions and brought it down to just T166.hr_empl_code = T175.hr_empl_code, but it doesn't make a difference.
By no means am I any expert in SQL Server and queries, but I have decent grasp on the technology. Any help would be very appreciated!
GROUP BY is not wrong; how you are using it is wrong.
SELECT
T000.hr_empl_code,
T.totpaid
FROM
hrtempnm T000
inner join (SELECT
hr_empl_code,
SUM(hr_doll_paid) as totPaid
FROM
hrtpaytp T175
where hr_paym_type = 'd' OR hr_paym_type = 't'
GROUP BY hr_empl_code
) T on t.hr_empl_code = T000.hr_empl_code
where exists
(select * from qmvempms T001,
hrtmspay T166,
hrtpaytp T175,
hrtptype T177
WHERE T000.hr_empl_code = T001.hr_empl_code
AND T001.hr_empl_code = T166.hr_empl_code
AND T001.hr_empl_code = T175.hr_empl_code
AND T001.hr_ploy_ment = T166.hr_ploy_ment
AND T001.hr_ploy_ment = T175.hr_ploy_ment
AND T175.hr_paym_code = T177.hr_paym_code
AND T166.hr_pyrl_code = 'f' AND T166.hr_paid_dati = 20180404
)
ORDER BY hr_empl_code
Note: it would be clearer if you used explicit JOINs instead of the old-style joins in the WHERE clause.
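For illustration, the old-style join conditions from the question map onto explicit JOINs roughly like this (a sketch with the same aliases; it only changes the syntax, so it still multiplies rows across hrtmspay and therefore still inflates the SUM):
SELECT T000.hr_empl_code,
       SUM(T175.hr_doll_paid) AS tot_paid
FROM hrtempnm T000
JOIN qmvempms T001 ON T001.hr_empl_code = T000.hr_empl_code
JOIN hrtmspay T166 ON T166.hr_empl_code = T001.hr_empl_code
                  AND T166.hr_ploy_ment = T001.hr_ploy_ment
JOIN hrtpaytp T175 ON T175.hr_empl_code = T001.hr_empl_code
                  AND T175.hr_ploy_ment = T001.hr_ploy_ment
JOIN hrtptype T177 ON T177.hr_paym_code = T175.hr_paym_code
WHERE T166.hr_pyrl_code = 'f'
  AND T166.hr_paid_dati = 20180404
  AND (T175.hr_paym_type = 'd' OR T175.hr_paym_type = 't')
GROUP BY T000.hr_empl_code
ORDER BY T000.hr_empl_code;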
I have the following query, which I am executing directly in my code and putting into a DataTable. The problem is that it takes more than 10 minutes to execute. The part that is taking the time is the NOT EXISTS.
SELECT
[t0].[PayrollEmployeeId],
[t0].[InOutDate],
[t0].[InOutFlag],
[t0].[InOutTime]
FROM [dbo].[MachineLog] AS [t0]
WHERE
([t0].[CompanyId] = 1)
AND ([t0].[InOutDate] >= '2016-12-13')
AND ([t0].[InOutDate] <= '2016-12-14')
AND
( NOT (EXISTS(
SELECT NULL AS [EMPTY]
FROM [dbo].[TO_Entry] AS [t1]
WHERE
([t1].[EmployeeId] = [t0].[PayrollEmployeeId])
AND ([t1]. [CompanyId] = 1)
AND ([t0].[PayrollEmployeeId] = [t1].[EmployeeId])
AND (([t0].[InOutDate]) = [t1].[Entry_Date])
AND ([t1].[Entry_Method] = 'M')
))
)
ORDER BY
[t0].[PayrollEmployeeId], [t0].[InOutDate]
Is there any way I can optimize this query? What is the workaround for this? It is taking too much time.
It seems that you can convert the NOT EXISTS into a LEFT JOIN query, keeping only the rows where the second table returns NULL values.
Please check the following SELECT and modify it if required to fulfill your requirements.
SELECT
[t0].[PayrollEmployeeId], [t0].[InOutDate], [t0].[InOutFlag], [t0].[InOutTime]
FROM [dbo].[MachineLog] AS [t0]
LEFT JOIN [dbo].[TO_Entry] AS [t1]
ON [t1].[EmployeeId] = [t0].[PayrollEmployeeId]
AND [t0].[PayrollEmployeeId] = [t1].[EmployeeId]
AND [t0].[InOutDate] = [t1].[Entry_Date]
AND [t1]. [CompanyId] = 1
AND [t1].[Entry_Method] = 'M'
WHERE
([t0].[CompanyId] = 1)
AND ([t0].[InOutDate] >= '2016-12-13')
AND ([t0].[InOutDate] <= '2016-12-14')
AND [t1].[EmployeeId] IS NULL
ORDER BY
[t0].[PayrollEmployeeId], [t0].[InOutDate]
You will notice that there is an informative message on the execution plan for your query.
It tells you that there is a missing clustered index, with an estimated effect of 30% on the execution time.
It seems that the transaction data is driven by date fields like the entry time.
Date fields, especially in your case, are strong candidates for clustered indexes. You can create an index on the Entry_Date column.
I guess you already have an index on InOutDate; you can try indexing this field as well.
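A minimal sketch of those two indexes (nonclustered here; the names are illustrative, and whether Entry_Date should really be the clustered key depends on the existing table design):
CREATE INDEX IX_TO_Entry_Entry_Date ON dbo.TO_Entry (Entry_Date);
CREATE INDEX IX_MachineLog_InOutDate ON dbo.MachineLog (InOutDate);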
I’m currently using the following query for jsPerf. In the likely case you don’t know jsPerf — there are two tables: pages containing the test cases / revisions, and tests containing the code snippets for the tests inside the test cases.
There are currently 937 records in pages and 3817 records in tests.
As you can see, it takes quite a while to load the “Browse jsPerf” page where this query is used.
The query takes about 7 seconds to execute:
SELECT
id AS pID,
slug AS url,
revision,
title,
published,
updated,
(
SELECT COUNT(*)
FROM pages
WHERE slug = url
AND visible = "y"
) AS revisionCount,
(
SELECT COUNT(*)
FROM tests
WHERE pageID = pID
) AS testCount
FROM pages
WHERE updated IN (
SELECT MAX(updated)
FROM pages
WHERE visible = "y"
GROUP BY slug
)
AND visible = "y"
ORDER BY updated DESC
I’ve added indexes on all fields that appear in WHERE clauses. Should I add more?
How can this query be optimized?
P.S. I know I could implement a caching system in PHP — I probably will, so please don’t tell me :) I’d just really like to find out how this query could be improved, too.
Use:
SELECT x.id AS pID,
x.slug AS url,
x.revision,
x.title,
x.published,
x.updated,
y.revisionCount,
COALESCE(z.testCount, 0) AS testCount
FROM pages x
JOIN (SELECT p.slug,
MAX(p.updated) AS max_updated,
COUNT(*) AS revisionCount
FROM pages p
WHERE p.visible = 'y'
GROUP BY p.slug) y ON y.slug = x.slug
AND y.max_updated = x.updated
LEFT JOIN (SELECT t.pageid,
COUNT(*) AS testCount
FROM tests t
GROUP BY t.pageid) z ON z.pageid = x.id
ORDER BY updated DESC
You want to learn how to use EXPLAIN. It shows you which indexes are being used and what row scans are being performed, without fetching the result rows. The goal is to reduce the number of row scans (i.e., the database searching row by row for values).
You may want to try the subqueries one at a time to see which one is slowest.
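For example, prefixing the full statement, or any one subquery, with EXPLAIN prints the access type, chosen index, and estimated row count per table (the slug value below is just a placeholder):
EXPLAIN
SELECT COUNT(*)
FROM pages
WHERE slug = 'some-slug'   -- placeholder for the correlated outer value
  AND visible = "y";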
This query:
SELECT MAX(updated)
FROM pages
WHERE visible = "y"
GROUP BY slug
Makes it sort the result by slug. This is probably slow.
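If that grouping is the bottleneck, a composite index covering it might help (a sketch; the exact column order is an assumption):
CREATE INDEX idx_pages_visible_slug_updated ON pages (visible, slug, updated);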