We have a query, and this is its actual execution plan.
As you can see, the Clustered Index Seek takes 99%.
The seek is on primary keys (type int).
Table Source has 275,000 rows.
Table AuthorSource has 2,275,000 rows.
No partitioning or compression is used.
The problem is that the first execution takes 25-40 seconds, but the second and subsequent runs take only 1-2 seconds.
We also have replication, queue reader, and log reader agents running on this server.
Amount of RAM: 4 GB
SQL Server uses: 3.7 GB
We think that SQL Server caches the query after the first execution for some period of time, and that this is the reason the second run takes only 1-2 seconds.
But irrespective of caching and other factors, it is very strange that a primary key index seek query takes 20-40 seconds.
The issue is repeatable: whatever parameters we provide to the query, we get the same results, a very slow first run and then fast second and subsequent runs.
Perhaps there are some additional settings, or a Resource Governor feature, that we should use?
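One way to test the caching theory on a non-production server is to force a cold-cache run by flushing the caches before timing the query (a sketch; these commands affect the whole instance, so never run them in production):

CHECKPOINT;             -- flush dirty pages so DROPCLEANBUFFERS can evict them
go
DBCC DROPCLEANBUFFERS;  -- empty the buffer pool, so data pages are re-read from disk
go
DBCC FREEPROCCACHE;     -- empty the plan cache, so the plan is recompiled
go

The query in question: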
exec sp_executesql N'
SELECT [Project1].[C1] AS [C1]
FROM ( SELECT CAST(1 AS bit) AS X
) AS [SingleRowTable1]
LEFT OUTER JOIN
(SELECT [GroupBy1].[A1] AS [C1]
FROM ( SELECT COUNT(CAST(1 AS bit)) AS [A1]
FROM (SELECT [Extent1].[Mention_ID] AS [Mention_ID] ,
[Extent1].[Theme_ID] AS [Theme_ID] ,
[Extent1].[Mention_Weight] AS [Mention_Weight] ,
[Extent1].[AuthorSource_ID] AS [AuthorSource_ID1] ,
[Extent1].[Mention_CreationDate] AS [Mention_CreationDate] ,
[Extent1].[Mention_DeletedMark] AS [Mention_DeletedMark] ,
[Extent1].[Mention_AuthorTags] AS [Mention_AuthorTags] ,
[Extent1].[Mention_Tonality] AS [Mention_Tonality] ,
[Extent1].[Mention_Comment] AS [Mention_Comment] ,
[Extent1].[Mention_AdditionDate] AS [Mention_AdditionDate] ,
[Extent1].[UserToAnswer_ID] AS [UserToAnswer_ID] ,
[Extent1].[GeoName_ID] AS [GeoName_ID] ,
[Extent1].[Geo_ID] AS [Geo_ID] ,
[Extent1].[Mention_PermaLinkHash] AS [Mention_PermaLinkHash] ,
[Extent1].[Mention_IsFiltredByAuthor] AS [Mention_IsFiltredByAuthor] ,
[Extent1].[Mention_IsFiltredByGeo] AS [Mention_IsFiltredByGeo] ,
[Extent1].[Mention_IsFiltredBySource] AS [Mention_IsFiltredBySource] ,
[Extent1].[Mention_IsFiltredBySourceType] AS [Mention_IsFiltredBySourceType] ,
[Extent1].[GengineLog_InstanceId] AS [GengineLog_InstanceId] ,
[Extent1].[Mention_PermaLinkBinaryHash] AS [Mention_PermaLinkBinaryHash] ,
[Extent1].[Mention_APIType] AS [Mention_APIType] ,
[Extent1].[Mention_IsFilteredByAuthorSource] AS [Mention_IsFilteredByAuthorSource],
[Extent1].[Mention_IsFavorite] AS [Mention_IsFavorite] ,
[Extent1].[Mention_SpamType] AS [Mention_SpamType] ,
[Extent1].[MentionContent_ID] AS [MentionContent_ID] ,
[Extent2].[AuthorSource_ID] AS [AuthorSource_ID2] ,
[Extent2].[Author_ID] AS [Author_ID] ,
[Extent2].[Source_ID] AS [Source_ID] ,
[Extent2].[Author_Nick] AS [Author_Nick] ,
[Extent2].[Author_UrlBinaryHash] AS [Author_UrlBinaryHash] ,
[Extent2].[AuthorSource_Type] AS [AuthorSource_Type] ,
[Extent2].[Author_Url] AS [Author_Url] ,
[Extent2].[AuthorSource_Description] AS [AuthorSource_Description] ,
[Extent2].[AuthorSource_Gender] AS [AuthorSource_Gender]
FROM [dbo].[Mention] AS [Extent1]
LEFT OUTER JOIN [dbo].[AuthorSource] AS [Extent2]
ON [Extent1].[AuthorSource_ID] = [Extent2].[AuthorSource_ID]
WHERE (
[Extent1].[Mention_DeletedMark] <> CAST(1 AS bit)
)
AND
(
[Extent1].[Mention_IsFiltredByAuthor] <> CAST(1 AS bit)
)
AND
(
[Extent1].[Mention_IsFilteredByAuthorSource] <> CAST(1 AS bit)
)
AND
(
[Extent1].[Mention_IsFiltredByGeo] <> CAST(1 AS bit)
)
AND
(
[Extent1].[Mention_IsFiltredBySource] <> CAST(1 AS bit)
)
AND
(
[Extent1].[Mention_IsFiltredBySourceType] <> CAST(1 AS bit)
)
) AS [Filter1]
LEFT OUTER JOIN [dbo].[Source] AS [Extent3]
ON [Filter1].[Source_ID] = [Extent3].[Source_ID]
WHERE (
[Filter1].[Theme_ID] = @p__linq__49557
)
AND
(
[Extent3].[Source_Type] <> @p__linq__49558
)
) AS [GroupBy1]
) AS [Project1]
ON 1 = 1
',N'@p__linq__49557 int,@p__linq__49558 int',@p__linq__49557=7966,@p__linq__49558=8
(Screenshot: index seeking performance information)
We also wrote the query manually in SQL with this simple code:
Select COUNT(1) from Mention m
inner join AuthorSource auth on m.AuthorSource_ID = auth.AuthorSource_ID
inner join Source s on auth.Source_ID = s.Source_ID
where
m.Mention_DeletedMark = 0 AND m.Mention_IsFilteredByAuthorSource = 0 AND m.Mention_IsFiltredByAuthor = 0
AND m.Mention_IsFiltredByGeo = 0 AND m.Mention_IsFiltredBySource = 0 AND m.Mention_IsFiltredBySourceType = 0
AND m.Theme_ID = 7966
and s.Source_Type <> 8
and the execution plan is the same as the one we posted above.
The query is quite hairy, but it looks like you are missing an index on Mention.Theme_ID?
SQL Server is having problems because a lot of <> comparisons are used, meaning that it cannot use an index for them and must fetch everything and then filter it afterwards.
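A minimal sketch of the suggested index (the index name and the INCLUDE list are assumptions based on the columns the query filters on, not something from the original post):

-- Hypothetical supporting index; name and included columns are assumptions.
CREATE NONCLUSTERED INDEX IX_Mention_Theme_ID
    ON dbo.Mention (Theme_ID)
    INCLUDE (AuthorSource_ID, Mention_DeletedMark, Mention_IsFiltredByAuthor,
             Mention_IsFilteredByAuthorSource, Mention_IsFiltredByGeo,
             Mention_IsFiltredBySource, Mention_IsFiltredBySourceType);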
Following Martin's advice in the comments on the question, the answer lies in understanding how SQL Server builds execution plans and in counting the disk read operations needed for the first query run.
In our particular situation, forcing an inner hash join instead of a plain inner join gave us the result we expected and a different execution plan from the one SQL Server chooses by default.
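For reference, here is a sketch of the manual count query from above with hash joins forced through standard T-SQL join hints (only the hints are new):

-- Same count query as above, but with hash joins forced.
Select COUNT(1) from Mention m
inner hash join AuthorSource auth on m.AuthorSource_ID = auth.AuthorSource_ID
inner hash join Source s on auth.Source_ID = s.Source_ID
where m.Mention_DeletedMark = 0 AND m.Mention_IsFilteredByAuthorSource = 0 AND m.Mention_IsFiltredByAuthor = 0
AND m.Mention_IsFiltredByGeo = 0 AND m.Mention_IsFiltredBySource = 0 AND m.Mention_IsFiltredBySourceType = 0
AND m.Theme_ID = 7966
and s.Source_Type <> 8

An equivalent alternative is to leave the joins alone and append OPTION (HASH JOIN) to the statement, which forces hash joins for every join in the query.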
I'm trying to create a report in Tableau to generate a tree structure of all the work items in TFS 2015 and their respective hierarchy, such as
Epic -> Feature -> User Story -> Task.
But repeated attempts to create the SQL query have failed. Could you please help me with a SQL query that fetches all the work items and displays their hierarchy?
Thanks.
Instead of using a SQL query, it's suggested to use the TFS REST API to create a query in TFS; the WIQL looks like:
"wiql": "select [System.Id], [System.WorkItemType], [System.Title], [System.AssignedTo], [System.State], [System.Tags] from WorkItemLinks where (Source.[System.TeamProject] = 'xxx' and Source.[System.WorkItemType] <> '' and Source.[System.State] <> '') and ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward') and (Target.[System.WorkItemType] <> '') order by [System.Id] mode (Recursive)"
So, I have found this query, which works and can help you create a Tableau workbook for the hierarchy:
WITH cte
AS ( SELECT DimTeamProject.ProjectNodeName ,
System_WorkItemType ,
DimWorkItem.System_Id ,
FactWorkItemLinkHistory.TargetWorkItemID ,
DimWorkItem.System_Title,
DimWorkItem.System_State,
DimWorkItem.Microsoft_VSTS_Common_ActivatedDate,
DimWorkItem.Microsoft_VSTS_Scheduling_TargetDate,
DimWorkItem.System_CreatedDate,
DimWorkItemLinkType.LinkName,
TeamProjectSK,
system_rev,
row_number() over( partition by system_id,TeamProjectSK, FactWorkItemLinkHistory.TargetWorkItemID order by system_rev desc ) rownum
FROM DimWorkItem ,
DimTeamProject ,
FactWorkItemLinkHistory,
DimWorkItemLinkType
WHERE DimWorkItem.TeamProjectSK = DimTeamProject.ProjectNodeSK
AND DimWorkItem.System_Id = FactWorkItemLinkHistory.SourceWorkItemID
and DimWorkItemLinkType.WorkItemLinkTypeSK = FactWorkItemLinkHistory.WorkItemLinkTypeSK
/* -To Test the Query using the project Name of our choice- */
--AND ProjectNodeName =
AND System_State in ('ACTIVE','NEW')
/* -System revisions are created when the entry is modified. Only the latest entry will have the revised date below- */
AND System_RevisedDate = '9999-01-01 00:00:00.000'
AND DimWorkItemLinkType.Linkname IN ( 'Parent',
'child' )
GROUP BY DimTeamProject.ProjectNodeName ,
DimWorkItem.System_Id ,
FactWorkItemLinkHistory.TargetWorkItemID ,
DimWorkItem.System_Title ,
System_WorkItemType,
DimWorkItem.System_State,
TeamProjectSK,
DimWorkItemLinkType.LINKName,
DimWorkItem.Microsoft_VSTS_Common_ActivatedDate,
DimWorkItem.Microsoft_VSTS_Scheduling_TargetDate,
DimWorkItem.System_CreatedDate,
system_rev
)
SELECT distinct t1.ProjectNodeName ,
t1.System_Id requirement_Id ,
t1.System_WorkItemType,
t1.System_Title requirement_title ,
t2.System_Id Change_request_id ,
t1.LinkName,
t2.System_Title Change_Request_Title,
t1.Microsoft_VSTS_Common_ActivatedDate,
t1.System_CreatedDate,
t1.Microsoft_VSTS_Scheduling_TargetDate,
t1.System_State,
T1.rownum
FROM cte t1
INNER JOIN cte t2 ON t1.TargetWorkItemID = t2.System_Id
and t1.rownum = 1
ORDER BY t1.System_Id;
I used a CTE to get the complete hierarchy; it's faster and more efficient than the query posted before.
We have data in the millions (1,698,393 total rows). Exporting this data as text takes 4 hours. I need to know if there is a way to reduce the export time for that many records from an Oracle database using SQL Developer.
with cte as (
select *
from (
select distinct
system_serial_number,
( select s.system_status
from eim_pr_system s
where s.system_serial_number=a.system_serial_number
) system_status,
( select SN.cmat_customer_id
from EIM.eim_pr_ib_latest SN
where SN.role_id=19
and SN.system_serial_number=a.system_serial_number
) SN_cmat_customer_id,
( select EC.cmat_customer_id
from EIM.eim_pr_ib_latest EC
where EC.role_id=1
and a.system_serial_number=EC.system_serial_number
) EC_cmat_customer_id
from EIM.eim_pr_ib_latest a
where a.role_id in (1,19)
)
where nvl(SN_cmat_customer_id,0)!=nvl(EC_cmat_customer_id,0)
)
select system_serial_number,
system_status,
SN_CMAT_Customer_ID,
EC_CMAT_Customer_ID,
C.Customer_Name SN_Customer_Name,
D.Customer_Name EC_Customer_Name
from cte,
eim.eim_party c,
eim.eim_party D
where c.CMAT_Customer_ID=SN_cmat_customer_id
and D.CMAT_Customer_ID=EC_cmat_customer_id
offset 5001 rows fetch next 200000 rows only;
You can get rid of a lot of the joins and correlated sub-queries (which will speed things up by reducing the number of table scans) by doing something like:
SELECT a.system_serial_number,
s.system_status,
a.SN_cmat_customer_id,
a.EC_cmat_customer_id,
a.SN_customer_name,
a.EC_customer_name
FROM (
SELECT l.system_serial_number,
MAX( CASE l.role_id WHEN 19 THEN l.cmat_customer_id END ) AS SN_cmat_customer_id,
MAX( CASE l.role_id WHEN 1 THEN l.cmat_customer_id END ) AS EC_cmat_customer_id,
MAX( CASE l.role_id WHEN 19 THEN p.customer_name END ) AS SN_customer_name,
MAX( CASE l.role_id WHEN 1 THEN p.customer_name END ) AS EC_customer_name
FROM EIM.eim_pr_ib_latest l
INNER JOIN
EIM.eim_party p
ON ( p.CMAT_Customer_ID= l.cmat_customer_id )
WHERE l.role_id IN ( 1, 19 )
GROUP BY system_serial_number
HAVING NVL( MAX( CASE l.role_id WHEN 19 THEN l.cmat_customer_id END ), 0 )
<> NVL( MAX( CASE l.role_id WHEN 1 THEN l.cmat_customer_id END ), 0 )
) a
LEFT OUTER JOIN
eim_pr_system s
ON ( s.system_serial_number=a.system_serial_number )
Since your original query is not throwing a "single-row subquery returns more than one row" error (ORA-01427) on the correlated sub-queries, I am assuming that your data is such that only a single row is returned for each correlated query, and the above query will reproduce your output (although without some sample data it is difficult to test).
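If you want to verify that single-row assumption, a quick check against the same table (a sketch using only the tables and role IDs from the question):

-- Any rows returned here would violate the single-row assumption above
select system_serial_number, role_id, count(*) as cnt
from EIM.eim_pr_ib_latest
where role_id in (1, 19)
group by system_serial_number, role_id
having count(*) > 1;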
Apart from 'making the query faster' - there is a way to achieve a faster export using SQL Developer.
When you use the data grid export feature, the query is executed again. The only time this won't happen is if you have fetched ALL the rows into the grid. Doing this for very large data sets will be 'expensive' on the client side, but you can avoid that.
For a faster export, add a /*csv*/ comment to your select and wrap the statement with a spool c:\my_file.csv, then collapse the script output panel and run the script with F5. As the data is fetched, it is written to that file in CSV format.
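Put together, it looks something like this (a sketch; the file path is a placeholder and the table is just one from the question):

-- Run as a script (F5) in SQL Developer; rows stream to the file as they are fetched.
spool c:\my_file.csv

select /*csv*/ system_serial_number, system_status
from eim_pr_system;

spool off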
The available format hints include:
/*csv*/
/*xml*/
/*json*/
/*html*/
/*insert*/
I talk about this feature in detail here.
I need to run a query that pulls users' query history to determine long-running queries. This information will be pulled every 5-10 minutes and stored in a table that a weekly report will run against, showing the top 10 longest-running queries.
I was able to find the query below and then add sys.dm_exec_sessions, which appears to return what I need. However, it seems it is not a history but only the active sessions. I definitely need the user name, host name, and database as part of the result set.
SELECT
r.session_id
, s.login_name
, s.host_name
, r.start_time
, TotalElapsedTime_ms = r.total_elapsed_time
, r.[status]
, s.program_name
, r.command
, DatabaseName = DB_Name(r.database_id)
, r.cpu_time
, r.reads
, r.writes
, r.logical_reads
, t.[text] AS [executing batch]
, SUBSTRING(
t.[text], r.statement_start_offset / 2,
( CASE WHEN r.statement_end_offset = -1 THEN DATALENGTH (t.[text])
ELSE r.statement_end_offset
END - r.statement_start_offset ) / 2
) AS [executing statement]
FROM
sys.dm_exec_requests r
LEFT OUTER JOIN
sys.dm_exec_sessions s
ON
r.session_id = s.session_id
CROSS APPLY
sys.dm_exec_sql_text(r.sql_handle) AS t
CROSS APPLY
sys.dm_exec_query_plan(r.plan_handle) AS p
ORDER BY
r.total_elapsed_time DESC;
So far I have been able to pull session information from sys.dm_exec_sessions, but I can't seem to find any views to link with for query statistics. The database is SQL Server 2012 SP1.
Any guidance/help would be greatly appreciated.
Thanks,
Frank
Is that what you are looking for?
SELECT loginame AS LoginName ,
sqltext.TEXT ,
req.session_id ,
req.status ,
req.command ,
req.cpu_time ,
req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
JOIN sys.sysprocesses s ON s.spid = session_id;
You might want to look into the plan cache. It contains statistics about all the plans that are currently cached, so it should cover most of the recent expensive queries, and quite a lot of older ones too. You can do that with something like this:
select top 100
SUBSTRING(t.text, (s.statement_start_offset/2)+1,
((CASE s.statement_end_offset
WHEN -1 THEN DATALENGTH(t.text)
ELSE s.statement_end_offset
END - s.statement_start_offset)/2) + 1) as statement_text,
t.text,
s.total_logical_reads,
s.total_logical_reads / s.execution_count as avg_logical_reads,
s.total_worker_time,
s.total_worker_time / s.execution_count as avg_worker_time,
s.execution_count,
creation_time,
last_execution_time
--,cast(p.query_plan as xml) as query_plan
from sys.dm_exec_query_stats s
cross apply sys.dm_exec_sql_text (sql_handle) t
--cross apply sys.dm_exec_text_query_plan (plan_handle, statement_start_offset, statement_end_offset) p
order by s.total_logical_reads desc
That's just a sample that shows the top 100 statements by logical reads. The commented-out parts are for including query plans.
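To build the history the question asks for, this result could be snapshotted into a table on a 5-10 minute schedule, for example from a SQL Server Agent job. A sketch, where dbo.QueryStatsHistory is a hypothetical table, not an existing object:

-- dbo.QueryStatsHistory is hypothetical; create it once, then run the INSERT on a schedule.
CREATE TABLE dbo.QueryStatsHistory (
    capture_time        datetime2     NOT NULL DEFAULT SYSDATETIME(),
    statement_text      nvarchar(max) NULL,
    execution_count     bigint        NOT NULL,
    total_worker_time   bigint        NOT NULL,
    total_logical_reads bigint        NOT NULL,
    last_execution_time datetime      NULL
);

INSERT INTO dbo.QueryStatsHistory
    (statement_text, execution_count, total_worker_time, total_logical_reads, last_execution_time)
SELECT
    SUBSTRING(t.text, (s.statement_start_offset / 2) + 1,
              ((CASE s.statement_end_offset
                    WHEN -1 THEN DATALENGTH(t.text)
                    ELSE s.statement_end_offset
                END - s.statement_start_offset) / 2) + 1),
    s.execution_count,
    s.total_worker_time,
    s.total_logical_reads,
    s.last_execution_time
FROM sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) t;

Note that the plan cache does not record login or host names; for those you would still need to sample sys.dm_exec_sessions/sys.dm_exec_requests while the query is running, as in the first query above.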
I have the following SQL query, which I execute in SQL Server Management Studio, and it takes more than 5 seconds to run. The tables joined by the inner joins hold just a few tens of thousands of records each. Why does it take so long?
The highest costs in the query plan are:
- Clustered Index Scan [MyDB].[dbo].[LinPresup].[PK_LinPresup_Linea_IdPresupuesto_IdPedido]: 78%
- Clustered Index Seek [MyDB].[dbo].[Pedidos].[PK_Pedidos_IdPedido]: 19%
Thank you.
Declare @FILTROPAG bigint
set @FILTROPAG = 1
Declare @FECHATRABAJO DATETIME
set @FECHATRABAJO = getDate()
Select * from(
SELECT distinct Linpresup.IdCliente, Linpresup.IdPedido, Linpresup.FSE, Linpresup.IdArticulo,
Linpresup.Des, ((Linpresup.can*linpresup.mca)-(linpresup.srv*linpresup.mca)) as Pendiente,
Linpresup.IdAlmacen, linpresup.IdPista, articulos.Tip, linpresup.Linea,
ROW_NUMBER() OVER(ORDER BY CONVERT(Char(19), Linpresup.FSE, 120) +
Linpresup.IdPedido + CONVERT(char(2), linpresup.Linea) DESC) as NUM_REG
FROM Linpresup INNER JOIN Pedidos on LinPresup.IdPedido = Pedidos.IdPedido
INNER JOIN Articulos ON Linpresup.IdArticulo = Articulos.IdArticulo
where pedidos.Cerrado = 'false' and linpresup.IdPedido <> '' and linpresup.can <> linpresup.srv
and Linpresup.FecAnulacion is null and Linpresup.Fse <= @FECHATRABAJO
and LinPresup.IdCliente not in (Select IdCliente from Clientes where Ctd = '4')
and Substring(LinPresup.IdPedido, 5, 2) LIKE '11' or Substring(LinPresup.IdPedido, 5, 2) LIKE '10'
) as TablaTemp
WHERE NUM_REG BETWEEN @FILTROPAG AND 1500
order by NUM_REG ASC
----------
This is the new query with the changes applied:
CHECKPOINT;
go
dbcc freeproccache
go
dbcc dropcleanbuffers
go
Declare @FILTROPAG bigint
set @FILTROPAG = 1
Declare @FECHATRABAJO DATETIME
set @FECHATRABAJO = getDate()
SELECT Linpresup.IdCliente, Linpresup.IdPedido, Linpresup.FSE, Linpresup.IdArticulo,
Linpresup.Des, Linpresup.can, linpresup.mca, linpresup.srv,
Linpresup.IdAlmacen, linpresup.IdPista, linpresup.Linea
into #TEMPREP
FROM Linpresup
where Linpresup.FecAnulacion is null and linpresup.IdPedido <> ''
and (linpresup.can <> linpresup.srv) and Linpresup.Fse <= @FECHATRABAJO
Select *, ((can*mca)-(srv*mca)) as Pendiente
From(
Select tablaTemp.*, ROW_NUMBER() OVER(ORDER BY FSECONVERT + IDPedido + LINCONVERT DESC) as NUM_REG, Articulos.Tip
From(
Select #TEMPREP.*,
Substring(#TEMPREP.IdPedido, 5, 2) as NewCol,
CONVERT(Char(19), #TEMPREP.FSE, 120) as FSECONVERT, CONVERT(char(2), #TEMPREP.Linea) as LINCONVERT
from #TEMPREP INNER JOIN Pedidos on #TEMPREP.IdPedido = Pedidos.IdPedido
where Pedidos.Cerrado = 'false'
and #TEMPREP.IdCliente not in (Select IdCliente from Clientes where Ctd = '4')) as tablaTemp
inner join Articulos on tablaTemp.IDArticulo = Articulos.IdArticulo
where (NewCol = '10' or NewCol = '11')) as TablaTemp2
where NUM_REG BETWEEN @FILTROPAG AND 1500
order by NUM_REG ASC
DROP TABLE #TEMPREP
The total execution time has decreased from 5336 to 3978, and the waiting time for a server response has gone from 5309 to 2730. It's something.
This part of your query is not SARGable, so an index scan will be performed instead of a seek:
and Substring(LinPresup.IdPedido, 5, 2) LIKE '11'
or Substring(LinPresup.IdPedido, 5, 2) LIKE '10'
Functions applied to column names will, in general, lead to an index scan.
Without seeing your execution plan it's hard to say. That said, the following jumps out at me as a potential danger point:
and Substring(LinPresup.IdPedido, 5, 2) LIKE '11'
or Substring(LinPresup.IdPedido, 5, 2) LIKE '10'
I suspect that using the substring function here causes any potentially useful indexes to go unused. Also, why are you using LIKE here? I'm guessing it probably gets optimized out, but it seems like a standard = would work...
I can't imagine why you would think such a query would run quickly. You are:
ordering the recordset twice (and once ordering by concatenation and functions),
your where clause has functions (which are not sargable) and ORs, which are almost always slow,
you use NOT IN where NOT EXISTS would probably be faster,
you have math calculations.
And you haven't mentioned your indexing (which may or may not be helpful) or what the execution plan shows as the spots that are affecting performance the most.
I would probably start by pulling the distinct data into a CTE or temp table (you can index temp tables) without the calculations, to ensure that when you do the calcs later they run against the smallest data set. Then I would convert the substrings to LinPresup.IdPedido LIKE '1[0-1]%'. I would convert the NOT IN to NOT EXISTS, as sketched below. And I would put the math in the outer query, so that it is only done on the smallest data set.
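For instance, a sketch of the NOT IN predicate from the original query rewritten as NOT EXISTS (table and column names as in the question):

-- The original predicate:
--   LinPresup.IdCliente not in (Select IdCliente from Clientes where Ctd = '4')
-- becomes:
select lp.IdCliente, lp.IdPedido
from Linpresup lp
where not exists (select 1
                  from Clientes c
                  where c.Ctd = '4'
                    and c.IdCliente = lp.IdCliente);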
I have a query which is taking some serious time to execute on anything older than, say, the past hour's worth of data. It is going to create a view which will be used for data mining, so the expectation is that it can search back over weeks or months of data and return in a reasonable amount of time (even a couple of minutes is fine; I ran it for a date range of 10/3/2011 12:00 pm to 10/3/2011 1:00 pm and it took 44 minutes!).
The problem is with the two LEFT OUTER JOINs at the bottom. When I take those out, it runs in about 10 seconds. However, those joins are the bread and butter of this query.
This all comes from one table. The ONLY thing this query returns differently from the original table is the column xweb_range. xweb_range is a calculated column (a range) which only uses the values from [LO,LC,RO,RC]_Avg where the corresponding [LO,LC,RO,RC]_Sensor_Alarm = 0 (i.e. a sensor's value is not included in the range calculation if its sensor alarm = 1).
WITH Alarm (sub_id,
LO_Avg, LO_Sensor_Alarm, LC_Avg, LC_Sensor_Alarm, RO_Avg, RO_Sensor_Alarm, RC_Avg, RC_Sensor_Alarm) AS (
SELECT sub_id, LO_Avg, LO_Sensor_Alarm, LC_Avg, LC_Sensor_Alarm, RO_Avg, RO_Sensor_Alarm, RC_Avg, RC_Sensor_Alarm
FROM dbo.some_table
where sub_id <> '0'
)
, AddRowNumbers AS (
SELECT rowNumber = ROW_NUMBER() OVER (ORDER BY LO_Avg)
, sub_id
, LO_Avg, LO_Sensor_Alarm
, LC_Avg, LC_Sensor_Alarm
, RO_Avg, RO_Sensor_Alarm
, RC_Avg, RC_Sensor_Alarm
FROM Alarm
)
, UnPivotColumns AS (
SELECT rowNumber, value = LO_Avg FROM AddRowNumbers WHERE LO_Sensor_Alarm = 0
UNION ALL SELECT rowNumber, LC_Avg FROM AddRowNumbers WHERE LC_Sensor_Alarm = 0
UNION ALL SELECT rowNumber, RO_Avg FROM AddRowNumbers WHERE RO_Sensor_Alarm = 0
UNION ALL SELECT rowNumber, RC_Avg FROM AddRowNumbers WHERE RC_Sensor_Alarm = 0
)
SELECT rowNumber.sub_id
, cds.equipment_id
, cds.read_time
, cds.LC_Avg
, cds.LC_Dev
, cds.LC_Ref_Gap
, cds.LC_Sensor_Alarm
, cds.LO_Avg
, cds.LO_Dev
, cds.LO_Ref_Gap
, cds.LO_Sensor_Alarm
, cds.RC_Avg
, cds.RC_Dev
, cds.RC_Ref_Gap
, cds.RC_Sensor_Alarm
, cds.RO_Avg
, cds.RO_Dev
, cds.RO_Ref_Gap
, cds.RO_Sensor_Alarm
, COALESCE(range1.range, range2.range) AS xweb_range
FROM AddRowNumbers rowNumber
LEFT OUTER JOIN (SELECT rowNumber, range = MAX(value) - MIN(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) > 1) range1 ON range1.rowNumber = rowNumber.rowNumber
LEFT OUTER JOIN (SELECT rowNumber, range = AVG(value) FROM UnPivotColumns GROUP BY rowNumber HAVING COUNT(*) = 1) range2 ON range2.rowNumber = rowNumber.rowNumber
INNER JOIN dbo.some_table cds
ON rowNumber.sub_id = cds.sub_id
It's difficult to understand exactly what your query is trying to do without knowing the domain. However, it seems to me like your query is simply trying to find, for each row in dbo.some_table where sub_id is not 0, the range of the following columns in the record (or, if only one matches, that single value):
LO_AVG when LO_SENSOR_ALARM=0
LC_AVG when LC_SENSOR_ALARM=0
RO_AVG when RO_SENSOR_ALARM=0
RC_AVG when RC_SENSOR_ALARM=0
You constructed this query by assigning each row a sequential row number, unpivoting the _AVG columns along with their row numbers, computing the range aggregate grouped by row number, and then joining back to the original records by row number. CTEs don't materialize results (nor are they indexed, as discussed in the comments), so each reference to AddRowNumbers is expensive, because ROW_NUMBER() OVER (ORDER BY LO_Avg) is a sort.
Instead of cutting this table up just to join it back together by row number, why not do something like:
SELECT cds.sub_id
, cds.equipment_id
, cds.read_time
, cds.LC_Avg
, cds.LC_Dev
, cds.LC_Ref_Gap
, cds.LC_Sensor_Alarm
, cds.LO_Avg
, cds.LO_Dev
, cds.LO_Ref_Gap
, cds.LO_Sensor_Alarm
, cds.RC_Avg
, cds.RC_Dev
, cds.RC_Ref_Gap
, cds.RC_Sensor_Alarm
, cds.RO_Avg
, cds.RO_Dev
, cds.RO_Ref_Gap
, cds.RO_Sensor_Alarm
--if the COUNT is 0, xweb_range will be null (since MAX will be null), if it's 1, then use MAX, else use MAX - MIN (as per your example)
, (CASE WHEN stats.[Count] < 2 THEN stats.[MAX] ELSE stats.[MAX] - stats.[MIN] END) xweb_range
FROM dbo.some_table cds
--cross join on the following table derived from values in cds - it will always contain 1 record per row of cds
CROSS APPLY
(
SELECT COUNT(*), MIN(Value), MAX(Value)
FROM
(
--construct a table using the column values from cds we wish to aggregate
VALUES (LO_AVG, LO_SENSOR_ALARM),
(LC_AVG, LC_SENSOR_ALARM),
(RO_AVG, RO_SENSOR_ALARM),
(RC_AVG, RC_SENSOR_ALARM)
) x (Value, Sensor_Alarm) --give a name to the columns for _AVG and _ALARM
WHERE Sensor_Alarm = 0 --filter our constructed table where _ALARM=0
) stats([Count], [Min], [Max]) --give our derived table and its columns some names
WHERE cds.sub_id <> '0' --this is a filter carried over from the first CTE in your example
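To see the trick in isolation: the VALUES clause builds a tiny inline table from the current row's columns, which can then be filtered and aggregated like any other table. A standalone sketch with made-up numbers standing in for the sensor columns:

-- Hypothetical values standing in for the LO/LC/RO/RC averages and alarms
SELECT COUNT(*) AS [Count], MIN(x.Value) AS [Min], MAX(x.Value) AS [Max]
FROM (VALUES (1.5, 0),   -- LO_Avg, LO_Sensor_Alarm
             (2.0, 0),   -- LC_Avg, LC_Sensor_Alarm
             (3.1, 1),   -- RO_Avg, RO_Sensor_Alarm (excluded by the filter)
             (0.8, 0)    -- RC_Avg, RC_Sensor_Alarm
     ) x (Value, Sensor_Alarm)
WHERE x.Sensor_Alarm = 0;
-- Returns Count = 3, Min = 0.8, Max = 2.0, so xweb_range would be 2.0 - 0.8 = 1.2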