I’m using the following query to create a report in SSRS, but it takes about 10 minutes to return the result.
I tried to add an index to the view, but it seems I don’t have permission to do so.
Is there another way to optimize the query?
(FYI, this query joins a table and a view together. I’m not sure if that is causing the slowness.)
/* I'm creating the temp tables here because I thought it would make the query run faster, but it doesn't. */
DROP TABLE IF EXISTS #QM;
SELECT
QM.*
INTO #QM
FROM ODS.dbo.QNXT_MEMBER QM;
DROP TABLE IF EXISTS #CVG;
SELECT
CVG.*
INTO #CVG
FROM JIVA_DWH.dbo.mbr_cvg CVG;
DROP TABLE IF EXISTS #1;
SELECT G.ext_cvg_id MemberSourceId,
A.MBR_IDN,
I.ENC_IDN,
I.INTRACN_IDN,
A.ACTIVITY,
A.ACTIVITY_TYPE,
A.UPDATED_DATE,
A.ACTIVITY_STATUS,
A.SCHEDULED_DATE,
I.INTERACTION_DATE,
I.INTERACTION_OUTCOME,
I.INTERACTION_STATUS,
I.MODIFIED_USER,
M.STATUS_CHANGE_DATE,
M.EPISODE_STATUS,
MP.ALTERNATE_ID,
[ROW_NUM] = ROW_NUMBER() OVER (PARTITION BY A.ENC_IDN ORDER BY I.INTERACTION_DATE DESC)
INTO #1
FROM JIVA_DWH.dbo.kv_V_MODEL_MBR_ENC_ACTIVITY A /*this is a view*/
JOIN JIVA_DWH.dbo.kv_V_MODEL_EPISODES M /*this is a view*/
ON M.ENC_IDN = A.ENC_IDN
JOIN JIVA_DWH.dbo.kv_V_MODEL_INTERACTIONS I /*this is a view*/
ON I.ENC_IDN = M.ENC_IDN
JOIN #CVG G /*this is a table*/
ON G.mbr_idn = A.MBR_IDN
LEFT JOIN #QM MP /*this is a table*/
ON G.ext_cvg_id = MP.MEMBER_SOURCE_ID COLLATE DATABASE_DEFAULT
WHERE A.ACTIVITY IN ( 'Verbal consent to be received', 'Incoming Call', 'Initial outreach Call', 'Contact Member' )
AND M.EPISODE_TYPE_CD = 'ECM'
AND I.INTERACTION_DATE
BETWEEN @StartDate AND @EndDate
AND CONVERT(DATE, [M].[EPISODE_START_DATE_UTC] + GETDATE() - GETUTCDATE())
BETWEEN @StartDate AND @EndDate; /* I declare these variables at the top */
I also tried creating a temporary table to store "JIVA_DWH.dbo.kv_V_MODEL_MBR_ENC_ACTIVITY", but it took 6 minutes to load.
So I’m highly suspicious that the view itself is the problem.
What should I do to optimize the query?
When trying to optimize the query, I would recommend using a tool in SQL Server Management Studio.
When running the query against the actual database, activate the option "Include Actual Execution Plan".
This gives you two benefits:
You see which aspects of the query use up the most time / resources. This can help you decide where to look for optimization potential.
The tool may also propose an additional database index, which can help a lot (especially if the report is used frequently).
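As a side note, a lightweight way to quantify the cost (standard SSMS/T-SQL, nothing specific to your schema) is to turn on I/O and timing statistics while you test:

-- Prints per-table logical reads and per-statement CPU/elapsed time
-- to the Messages tab while you experiment in SSMS.
SET STATISTICS IO, TIME ON;

-- ... run the report query here ...

SET STATISTICS IO, TIME OFF;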
Please note that I recommended a procedure rather than the outcome of such a procedure, because the results also depend on the amount of data in the tables and other factors that are difficult to convey in a Q&A.
The views you are using were probably designed for a specific reason. There may be more in them than you need. You might try reading the definitions of your 3 views and using only what you need:
DECLARE @StartDate date;
DECLARE @EndDate date;
SET @StartDate = GETDATE();
SET @EndDate = DATEADD(year, 1, GETDATE());
DROP TABLE IF EXISTS #1;
WITH
my_kv_V_MODEL_MBR_ENC_ACTIVITY AS (
<simplified code from kv_V_MODEL_MBR_ENC_ACTIVITY view>
WHERE <tbl>.ACTIVITY IN ( 'Verbal consent to be received', 'Incoming Call', 'Initial outreach Call', 'Contact Member' )
),
my_kv_V_MODEL_EPISODES AS (
<simplified code from kv_V_MODEL_EPISODES view>
WHERE <tbl>.EPISODE_TYPE_CD = 'ECM'
AND CONVERT(DATE, <tbl>.EPISODE_START_DATE_UTC + GETDATE() - GETUTCDATE())
BETWEEN @StartDate AND @EndDate
),
my_kv_V_MODEL_INTERACTIONS AS (
<simplified code from kv_V_MODEL_INTERACTIONS view>
WHERE <tbl>.INTERACTION_DATE
BETWEEN @StartDate AND @EndDate
)
SELECT G.ext_cvg_id MemberSourceId,
A.MBR_IDN,
I.ENC_IDN,
I.INTRACN_IDN,
A.ACTIVITY,
A.ACTIVITY_TYPE,
A.UPDATED_DATE,
A.ACTIVITY_STATUS,
A.SCHEDULED_DATE,
I.INTERACTION_DATE,
I.INTERACTION_OUTCOME,
I.INTERACTION_STATUS,
I.MODIFIED_USER,
M.STATUS_CHANGE_DATE,
M.EPISODE_STATUS,
MP.ALTERNATE_ID,
[ROW_NUM] = ROW_NUMBER() OVER (PARTITION BY A.ENC_IDN ORDER BY I.INTERACTION_DATE DESC)
INTO #1
FROM my_kv_V_MODEL_MBR_ENC_ACTIVITY A
JOIN my_kv_V_MODEL_EPISODES M ON M.ENC_IDN = A.ENC_IDN
JOIN my_kv_V_MODEL_INTERACTIONS I ON I.ENC_IDN = M.ENC_IDN
JOIN JIVA_DWH.dbo.mbr_cvg G ON G.mbr_idn = A.MBR_IDN
LEFT JOIN ODS.dbo.QNXT_MEMBER MP ON G.ext_cvg_id = MP.MEMBER_SOURCE_ID COLLATE DATABASE_DEFAULT
Also, upon inspecting the 3 views, you may discover that they share some commonality. Perhaps you are asking the database server to unnecessarily perform the same steps several times. It may be better to avoid the views entirely and write your query against only the source tables.
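If you have permission to read metadata, pulling a view's definition is straightforward (these are standard SQL Server functions; run them in the JIVA_DWH database):

-- Print the source of one of the views.
EXEC sp_helptext 'dbo.kv_V_MODEL_MBR_ENC_ACTIVITY';

-- Alternative that returns the definition as a single value:
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.kv_V_MODEL_MBR_ENC_ACTIVITY'));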
Related
So, I'm trying to create a collection in SCCM which I would like to give me a
list of assets (Name0) that don't have an .ide file linked to them that is newer than
21 days old. Once identified, I can go off and investigate why these assets are not updating.
So far I have written the following query in SSMS before I set it up in SCCM,
but it's become evident that this isn't the correct approach.
SELECT DISTINCT v_GS_SYSTEM.Name0
FROM v_GS_SYSTEM inner join v_GS_SoftwareFile
ON v_GS_SoftwareFile.ResourceID = v_GS_SYSTEM.ResourceID
WHERE (DATEDIFF(day, v_GS_SoftwareFile.ModifiedDate, getdate()) >=21)
AND NOT
(DATEDIFF(day, v_GS_SoftwareFile.ModifiedDate, getdate()) <=21)
AND
v_GS_SoftwareFile.FileName like '/%.ide/'
ORDER BY v_GS_SYSTEM.Name0;
This code returns the "correct" values but doesn't consider the fact that an asset may
still have newer ide files related to it, which defeats the purpose of this exercise.
So (I think!) my question is: is there a way to check whether Name0 has any associated ModifiedDate records
newer than 21 days, and only return a value based on whether this check returns true/false?
EDIT: edited @MatBailie's answer with output:
To join all '*.ide' files to their resource, but only for resources that have not had any '*.ide' files modified in the last 21 days...
SELECT
s.Name0,
f.FilePath,
f.FileName
FROM
(
SELECT
*,
MAX(ModifiedDate) OVER (PARTITION BY ResourceID) AS ResourceMaxModifiedDate
FROM
v_GS_SoftwareFile
WHERE
FileName LIKE '%.ide'
)
AS f
INNER JOIN
v_GS_SYSTEM AS s
ON s.ResourceID = f.ResourceID
WHERE
f.ResourceMaxModifiedDate <= DATEADD(DAY, -21, GETDATE())
ORDER BY
s.Name0,
f.FilePath,
f.FileName
To get all Resources that have had no '*.ide' files modified in the last 21 days...
SELECT
s.Name0
FROM
v_GS_SYSTEM AS s
WHERE
NOT EXISTS (
SELECT *
FROM v_GS_SoftwareFile AS f
WHERE f.FileName LIKE '%.ide'
AND f.ResourceID = s.ResourceID
AND f.ModifiedDate >= DATEADD(DAY, -21, GETDATE())
)
ORDER BY
s.name0
Consider your indexes on these tables depending on which query you end up with. A covering index over (ResourceID, ModifiedDate) would be useful. A flag for the file type would be useful too (LIKE '%.ide' is going to require scanning the rows to find the matches; it can't be solved with a typical index).
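As a sketch only, assuming v_GS_SoftwareFile sits on a base table you are allowed to change (I'm calling it dbo.SoftwareFile here, which is a guess at the name), the flag plus covering index could look like:

-- Persisted flag so the '.ide' test is computed once at write time,
-- not re-evaluated against FileName on every read.
ALTER TABLE dbo.SoftwareFile
    ADD IsIdeFile AS CASE WHEN FileName LIKE '%.ide' THEN 1 ELSE 0 END PERSISTED;

-- Covering index: filter on the flag, then seek by resource and date.
CREATE NONCLUSTERED INDEX IX_SoftwareFile_Ide_Resource_Modified
    ON dbo.SoftwareFile (IsIdeFile, ResourceID, ModifiedDate);

(On a vendor-managed schema like SCCM's, check whether altering the table is supported before doing this.)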
You can simply add an additional EXISTS clause where you check this.
I believe the query you're trying to write is:
SELECT DISTINCT vs.Name0
, vsf.FilePath
, vsf.FileName
FROM v_GS_SYSTEM vs
INNER JOIN v_GS_SoftwareFile vsf
ON vsf.ResourceID = vs.ResourceID
WHERE (DATEDIFF(day, vsf.ModifiedDate, getdate()) >= 21)
AND NOT (DATEDIFF(day, vsf.ModifiedDate, getdate()) <= 21) -- this seems a bit redundant and might even exclude some rows where the result is exactly (21 * 24) hours
AND vsf.FileName LIKE '/%.ide/'
AND NOT EXISTS (
SELECT 1
FROM v_GS_SoftwareFile vsf2
WHERE vsf2.ModifiedDate > GETDATE() - 21
AND vsf2.ResourceId = vsf.ResourceId
)
ORDER BY vs.Name0;
You can basically flip the TRUE / FALSE check by keeping or removing the NOT in the NOT EXISTS.
Edit:
Since you mentioned performance problems, check whether you have non-clustered indexes:
on column ResourceID in the v_GS_SoftwareFile table (I really hope this is not a view, but the v_ prefix makes me think it is).
on column ResourceID in the v_GS_SYSTEM table (same concern here).
I am new to SQL, so please excuse my lack of knowledge. This is the table I have, based on the following statement:
select S_OPERATION.OPERATIONID, CHANGE_H.SERVICEREQNO, CHANGE_H.UPDATEDDATE
from sunrise.S_OPERATION
inner join CHANGE_H on S_OPERATION.OPERATIONID = CHANGE_H.OPERATIONID
where (S_OPERATION.OPERATIONID = 102005212)
   or (S_OPERATION.OPERATIONID = 102005218)
   or (S_OPERATION.OPERATIONID = 102005406)
   or (S_OPERATION.OPERATIONID = 102005401)
   or (S_OPERATION.OPERATIONID = 102005215)
I would like to be able to calculate the time difference between events within the same job.
Please note: OperationID=event, Servicereqno=job
My end goal is to calculate the average time taken between each event and export this into a report, but I am having problems getting past the first hurdle.
I have tried the following statement; however, it does not work:
WITH cteOps AS
(
SELECT
row_number() OVER (PARTITION BY change.servicereqid ORDER BY change.updateddate) seqid,
updateddate,
servicereqid
FROM CHANGE.updateddate, CHANGE.addedby, S_OPERATION.operationid, CHANGE.servicereqid
)
SELECT
DATEDIFF(millisecond, o1.updateddate, o2.updateddate) updateddatediff,
servicereqid
FROM cteOps o1
JOIN cteOps o2 ON o1.seqid=o2.seqid+1 AND o1.servicereqid=o2.servicereqid;
Many thanks in advance.
Your two queries look quite different, with different table names, etc., so you'd probably have to adjust my query below to match what you actually have.
You can look into the previous record with LAG. So a query showing all those events with a time difference to the previous one could be:
select
c.updateddate
, c.addedby
, so.operationid
, c.servicereqid
, so.updateddate
, datediff
( millisecond
, lag(so.updateddate) over (partition by c.servicereqid order by so.updateddate)
, so.updateddate
) as updateddatediff
from change c
inner join change_h ch
on c.servicereqid = ch.servicereqno
and ch.operationid in (102005212, 102005218, 102005406, 102005401, 102005215)
inner join s_operation so
on ch.operationid = so.operationid
order by
c.servicereqid,
so.updateddate;
You can build on this by using it as a derived table (a subquery in a FROM clause).
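For example, to get the average gap per job (the stated end goal), the query above can be wrapped unchanged; only the aggregation layer below is new:

select
  t.servicereqid
, avg(t.updateddatediff) as avg_ms_between_events
from
( -- the LAG query from above, trimmed to the two columns we need
  select
    c.servicereqid
  , datediff
    ( millisecond
    , lag(so.updateddate) over (partition by c.servicereqid order by so.updateddate)
    , so.updateddate
    ) as updateddatediff
  from change c
  inner join change_h ch
    on c.servicereqid = ch.servicereqno
    and ch.operationid in (102005212, 102005218, 102005406, 102005401, 102005215)
  inner join s_operation so
    on ch.operationid = so.operationid
) as t
where t.updateddatediff is not null -- the first event of each job has no predecessor
group by t.servicereqid;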
I'm very new to SQL, and still learning. I'm using a reporting tool called Solarwinds Orion, and I'm honestly not sure how specific the query I have written is to the program, so if there's anything in the query that's confusing, let me know and I'll try to figure out if it's specific to the program or not.
The problem with the query I'm running is that it times out after a very long time (maybe an hour) of running. The database I'm using is huge. Unfortunately I don't really know how huge, but I've been told it's huge.
Is there anything I am doing wrong that would have a huge performance impact?
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM
((NetflowApplicationSummary
INNER JOIN Nodes ON (NetflowApplicationSummary.NodeID = Nodes.NodeID))
INNER JOIN InterfaceTraffic ON (Nodes.NodeID = InterfaceTraffic.InterfaceID))
INNER JOIN Interfaces ON (Nodes.NodeID = Interfaces.NodeID)
WHERE
( InterfaceTraffic.DateTime > (GetDate()-30) )
AND
(Nodes.WANCircuit = 1)
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
EDIT: I ran COUNT(*) on each of my tables, with the results below.
SELECT COUNT(*) FROM NetflowApplicationSummary;  -- 50671011
SELECT COUNT(*) FROM Nodes;                      -- 898
SELECT COUNT(*) FROM InterfaceTraffic;           -- 18000166
SELECT COUNT(*) FROM Interfaces;                 -- 3938
-- Total: 68,676,013
I really have no idea if 68 million items is a huge database to be honest.
A couple of notes:
The INNER JOIN operator is associative, so get rid of those parentheses in the FROM clause and let the optimizer figure out the best join order.
You may have an implied cursor from the getdate() function being called for every row. Store the value in a local variable and compare to that.
The resulting SQL should look like this:
DECLARE @Date as datetime = getdate() - 30;
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM NetflowApplicationSummary
INNER JOIN Nodes ON NetflowApplicationSummary.NodeID = Nodes.NodeID
INNER JOIN InterfaceTraffic ON Nodes.NodeID = InterfaceTraffic.InterfaceID
INNER JOIN Interfaces ON Nodes.NodeID = Interfaces.NodeID
WHERE InterfaceTraffic.DateTime > @Date
AND Nodes.WANCircuit = 1
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
Also, make sure you have an index on table InterfaceTraffic with a leading field of DateTime. If it doesn't exist, you may need to pay the one-time penalty of creating it.
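A minimal sketch, assuming InterfaceTraffic is a plain table you can index (the index name is made up; add INCLUDE columns for whatever else the query reads):

-- Leading on DateTime so the range predicate can seek;
-- INCLUDE covers the join column so the query may not need the base table.
CREATE NONCLUSTERED INDEX IX_InterfaceTraffic_DateTime
    ON dbo.InterfaceTraffic (DateTime)
    INCLUDE (InterfaceID);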
If this doesn't help, then you may need to post the execution plan where it can be inspected.
Out of interest, also perform a count() on all four tables and post that result, just so members here can make their own assessment of how big your database really is. It is amazing how many non-technical people still think a 1 or 10 GB database is huge, while I run that easily on my workstation!
I have a SQL query that takes 7+ minutes to return results. I'm trying to optimize it as much as possible, and the execution plan loses 82% of the time on a Hash Match (Aggregate). I've done some searching, and it looks like using an EXISTS would help, but I haven't figured out the syntax to make it work. Here's the query:
select dbo.Server.Name,
dbo.DiskSpace.Drive,
AVG(dbo.DiskSpace.FreeSpace) as 'Free Disk Space',
AVG(dbo.Processor.PercentUsed) as 'CPU % Used',
AVG(dbo.Memory.PercentUtilized) as '% Mem Used'
from Server
join dbo.DiskSpace on dbo.Server.ID=DiskSpace.ServerID
join dbo.Processor on dbo.Server.ID=Processor.ServerID
join dbo.Memory on dbo.Server.ID=dbo.Memory.ServerID
where
dbo.Processor.ProcessorNum='_Total'
and dbo.Processor.Datetm>DATEADD(DAY,-(1),(CONVERT (date, GETDATE())))
and ( dbo.Server.Name='qp-ratking'
or dbo.Server.Name='qp-hyper2012'
or dbo.Server.Name='qp-hyped'
or dbo.Server.Name='qp-lichking')
Group By dbo.server.name, Dbo.DiskSpace.Drive
Order By Dbo.Server.Name, dbo.DiskSpace.Drive;
How do I reduce/eliminate the joins using EXISTS? Or, if there is a better way to optimize, I'm up for that too. Thanks.
A co-worker broke the query down and pulled out the data in smaller chunks so there wasn't as much processing of the data returned by the joins. It cut the runtime down to less than 1 second. New query:
WITH tempDiskSpace AS
(
SELECT dbo.Server.Name
,dbo.DiskSpace.Drive
,AVG(dbo.DiskSpace.FreeSpace) AS 'Free Disk Space'
FROM dbo.DiskSpace
LEFT JOIN dbo.Server ON dbo.DiskSpace.ServerID=Server.ID
WHERE dbo.DiskSpace.Datetm>DATEADD(DAY,-(1),(CONVERT (date, GETDATE())))
AND (dbo.Server.Name='qp-ratking'
OR dbo.Server.Name='qp-hyper2012'
OR dbo.Server.Name='qp-hyped'
OR dbo.Server.Name='qp-lichking')
GROUP BY Name, Drive
)
,tempProcessor
AS
(
SELECT dbo.Server.Name
,AVG(dbo.Processor.PercentUsed) AS 'CPU % Used'
FROM dbo.Processor
LEFT JOIN dbo.Server ON dbo.Processor.ServerID=Server.ID
WHERE dbo.Processor.Datetm>DATEADD(DAY,-(1),(CONVERT (date, GETDATE())))
AND dbo.Processor.ProcessorNum='_Total'
AND (dbo.Server.Name='qp-ratking'
OR dbo.Server.Name='qp-hyper2012'
OR dbo.Server.Name='qp-hyped'
OR dbo.Server.Name='qp-lichking')
GROUP BY Name
)
,tempMemory
AS
(
SELECT dbo.Server.Name
,AVG(dbo.Memory.PercentUtilized) as '% Mem Used'
FROM dbo.Memory
LEFT JOIN dbo.Server ON dbo.Memory.ServerID=Server.ID
WHERE dbo.Memory.Datetm>DATEADD(DAY,-(1),(CONVERT (date, GETDATE())))
AND (dbo.Server.Name='qp-ratking'
OR dbo.Server.Name='qp-hyper2012'
OR dbo.Server.Name='qp-hyped'
OR dbo.Server.Name='qp-lichking')
GROUP BY Name
)
SELECT tempDiskSpace.Name, tempDiskSpace.Drive, tempDiskSpace.[Free Disk Space], tempProcessor.[CPU % Used], tempMemory.[% Mem Used]
FROM tempDiskSpace
LEFT JOIN tempProcessor ON tempDiskSpace.Name=tempProcessor.Name
LEFT JOIN tempMemory ON tempDiskSpace.Name=tempMemory.Name
ORDER BY Name, Drive;
Thanks for all the suggestions.
At the very least I'd start with getting rid of all those OR clauses.
AND (dbo.Server.Name='qp-ratking'
OR dbo.Server.Name='qp-hyper2012'
OR dbo.Server.Name='qp-hyped'
OR dbo.Server.Name='qp-lichking')
and replace with
AND dbo.Server.Name in ('qp-ratking','qp-hyper2012','qp-hyped','qp-lichking')
I'm not sure about converting everything to CTEs, though. You can't index CTEs, and I've yet to come across an occasion where CTEs outperform a regular query. Your initial query seemed well formed apart from the overuse of OR as mentioned above, so I'd look at indexes next.
I would start by checking the indexes. Are all the keys used in the joins defined as primary keys? Or do they at least have indexes?
Then, additional indexes on Processor and Server might help:
create index idx_Processor_ProcessorNum_Datetm_ServerId on Processor(ProcessorNum, Datetm, ServerId);
create index idx_Server_Name_ServerId on Server(Name, ServerId);
The statement looks reasonably structured, and I do not see huge scope for optimization provided the prerequisites are addressed, such as:
Check index fragmentation and ensure all indexes are maintained (see the snippet after this list).
Check that statistics are up to date (also covered in the snippet below).
If dirty reads are acceptable, it may be worth applying WITH (NOLOCK) on the tables.
If the query allows declaring variables, then moving the DATEADD out of the filter as below can be beneficial.
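A minimal sketch for the first two checks (standard DMVs; the table names are taken from the query below):

-- Fragmentation overview for the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Refresh statistics on the tables the query filters and joins.
UPDATE STATISTICS dbo.Processor;
UPDATE STATISTICS dbo.DiskSpace;
UPDATE STATISTICS dbo.Memory;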
Hope this helps.
-- Assuming Variables can be declared see the script below.
-- I made a few changes per my coding standard only to help me read better.
DECLARE @dt_Yesterdate DATE
SET @dt_Yesterdate = DATEADD(DAY, -(1), CONVERT (DATE, GETDATE()))
SELECT s.Name,
ds.Drive,
AVG(ds.FreeSpace) AS 'Free Disk Space',
AVG(P.PercentUsed) AS 'CPU % Used',
AVG(m.PercentUtilized) AS '% Mem Used'
FROM Server s
JOIN dbo.DiskSpace AS ds
ON s.ID = ds.ServerID
JOIN dbo.Processor AS p
ON s.ID = p.ServerID
JOIN dbo.Memory AS m
ON s.ID = m.ServerID
WHERE P.ProcessorNum = '_Total'
AND P.Datetm > @dt_Yesterdate
AND s.Name IN ('qp-ratking', 'qp-hyper2012', 'qp-hyped','qp-lichking')
GROUP BY s.name, ds.Drive
ORDER BY s.Name, ds.Drive;
I have some questions about my query. I call this stored procedure on my first page, so it is important to me that it is optimized well.
I do some selects with some basic WHERE expressions, then I filter them with some expressions passed into this stored procedure.
It also matters that I select the top n rows; the query will eventually search through millions of rows (though I only have hundreds of items so far) and then do some paging on my website.
Select top (@NumberOfRows)
...
from(
SELECT
row_number() OVER (ORDER BY tblEventOpen.TicketAt, tblEvent.EventName, tblEventDetail.TimeStart) as RowNumber
, ...
FROM --[...some inner join logic...]
WHERE
(tblEventOpen.isValid = 1) AND (tblEvent.isValid = 1) and
(tblCondition_ResellerDetail.ResellerID = 1) AND
(tblEventOpen.TicketAt >= GETDATE()) AND
(GETDATE() BETWEEN
DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.StartTime) , tblEventOpen.TicketAt)
AND DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.EndTime) , tblEventOpen.TicketAt))
) as t1
where RowNumber >= (@PageNumber - 1) * @NumberOfRows and
(@city = '' or @city is null or city like @city) and
(@At is null or @At = At) and
(@TimeStartInMinute = -1 or @TimeStartInMinute = TimeStartInMinute) and
(@EventName = '' or EventName like @EventName) and
(@CategoryID = -1 or @CategoryID = CategoryID) and
(@EventID is null or @EventID = EventID) and
(@DetailID is null or @DetailID = DetailID)
ORDER BY RowNumber
I'm worried about this part:
(GETDATE() BETWEEN
DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.StartTime) , tblEventOpen.TicketAt)
AND DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.EndTime) , tblEventOpen.TicketAt))
How does table t1 execute? I mean, after I put some WHERE expressions after t1 (line 17 and further), does it filter rows after t1 executes? For example, if I filter the result by a row number of 10, does the inner (...) as t1 select return only 10 rows, or does it select all rows and then my outer select takes 10 of them?
I want to filter my result by some optional parameters, so I put something like @DetailID is null or @DetailID = DetailID. Is that a good way?
Is there anything else I should consider to make it faster (more optimized)?
My comments on your query:
You're correct to worry about the condition "GETDATE() BETWEEN ...". Comparing a value against an expression that involves columns from more than one table will most likely scan the entire search space. Simplify the condition or, if possible, add a computed column for that expression.
Put all conditions except "RowNumber >= ..." in the inner query.
It's okay to put optional conditions the way you do. I do it too :-)
Make sure you have at least one index for each column used in the WHERE clause, with that column as the first column of the index, followed by the primary key. It would be better if your primary key is clustered.
Well, these are based on my own experience. They may or may not be applicable to your situation.
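One extra option for the optional-parameter (catch-all) pattern, which is a general technique rather than something from your post: OPTION (RECOMPILE) makes SQL Server compile a plan for the parameter values actually supplied, at the cost of a compile on every run. A cut-down, hypothetical illustration:

-- Hypothetical trimmed-down example of the catch-all pattern with RECOMPILE.
SELECT EventID, EventName, CategoryID
FROM dbo.tblEvent
WHERE (@EventName = '' OR EventName LIKE @EventName)
  AND (@CategoryID = -1 OR CategoryID = @CategoryID)
OPTION (RECOMPILE);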
[UPDATE] Here's the complete query
Select top (@NumberOfRows)
...
from(
SELECT
row_number() OVER (ORDER BY tblEventOpen.TicketAt, tblEvent.EventName, tblEventDetail.TimeStart) as RowNumber
, ...
FROM --[...some inner join logic...]
WHERE
(tblEventOpen.isValid = 1) AND (tblEvent.isValid = 1) and
(tblCondition_ResellerDetail.ResellerID = 1) AND
(tblEventOpen.TicketAt >= GETDATE()) AND
(GETDATE() BETWEEN
DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.StartTime) , tblEventOpen.TicketAt)
AND DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.EndTime) , tblEventOpen.TicketAt)) and
(@city = '' or @city is null or city like @city) and
(@At is null or @At = At) and
(@TimeStartInMinute = -1 or @TimeStartInMinute = TimeStartInMinute) and
(@EventName = '' or EventName like @EventName) and
(@CategoryID = -1 or @CategoryID = CategoryID) and
(@EventID is null or @EventID = EventID) and
(@DetailID is null or @DetailID = DetailID)
) as t1
where RowNumber >= (@PageNumber - 1) * @NumberOfRows
ORDER BY RowNumber
Whilst you can seek advice on your query, it is better to learn how to optimise it yourself.
You need to view the execution plan, identify the bottlenecks and then see if there is anything that can be done to make an improvement.
In SSMS, you can click "Query" ---> "Include Actual Execution Plan" before you run your query. Ctrl+M is the keyboard shortcut.
Then execute your query. SSMS will create a new tab in the results pane showing how the SQL engine executes your query; you can hover over each node for more information. The cost % will be particularly interesting, allowing you to see the most expensive parts of your query.
It's difficult to advise you any more without that execution plan, which is why a number of people commented on your question. Your schema and indexes change how the query is executed, so it's not something that someone can accurately replicate in their own environment without scripts for the tables, indexes, etc. Even then, statistics could be out of date and other problems could arise.
You can also execute SET STATISTICS PROFILE ON to get a textual view of the plan (which may be useful when seeking help).
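For example (it only affects your current session):

SET STATISTICS PROFILE ON;
-- ... run the query you want to analyse ...
SET STATISTICS PROFILE OFF;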
There are a number of articles that can help you fix the bottlenecks, or you can post another question for more advice.
http://msdn.microsoft.com/en-us/library/ms178071.aspx
SQL Server Query Plan Analysis
Execution Plan Basics