When using QueryMultiple from Dapper, are all of the queries run multiple times on the DB?
.NET Core code example:
var query1 = "select * from [User]";
var query2 = "select * from UserRole";
var multiQuery = $"{query1}; {query2}";
using (var multi = await _dbConnection.QueryMultipleAsync(multiQuery))
{
var userResponse = multi.Read<UserResponse>();
var userRoleResponse = multi.Read<UserRoleResponse>();
//DO SOMETHING
}
When I look at the queries run on the DB, I see the following:
Time                       Query
2021-11-12 17:17:47.673    select * from [User]; select * from UserRole
2021-11-12 17:17:47.673    select * from [User]; select * from UserRole
SQL query to see latest queries in DB:
SELECT top 10 deqs.last_execution_time AS [Time], dest.text AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC
I was expecting to see the following:
Time                       Query
2021-11-12 17:17:47.673    select * from [User]
2021-11-12 17:17:47.673    select * from UserRole
Does this mean that Dapper's QueryMultiple is running all queries multiple times?
Just hoping to understand how this works as we are looking at potential performance impacts.
The database is SQL Server.
Does this mean that Dapper's QueryMultiple is running all queries multiple times?
No, you're misreading the DMVs.
sys.dm_exec_query_stats has one row per statement, but sys.dm_exec_sql_text returns the text of the whole batch (or stored procedure body), not the individual statement. So you must use statement_start_offset and statement_end_offset to extract the individual statement from the batch text.
Here's the example from the docs:
SELECT TOP 5 query_stats.query_hash AS "Query Hash",
SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
MIN(query_stats.statement_text) AS "Statement Text"
FROM
(SELECT QS.*,
SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
((CASE statement_end_offset
WHEN -1 THEN DATALENGTH(ST.text)
ELSE QS.statement_end_offset END
- QS.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;
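Applied to your original query, a sketch of the same offset technique (this is an adaptation, not from the docs, so treat it as a starting point):

-- Same query as yours, but extracting each statement's text from the batch
-- using statement_start_offset / statement_end_offset.
SELECT TOP 10
    deqs.last_execution_time AS [Time],
    SUBSTRING(dest.text, (deqs.statement_start_offset/2) + 1,
        ((CASE deqs.statement_end_offset
            WHEN -1 THEN DATALENGTH(dest.text)
            ELSE deqs.statement_end_offset END
          - deqs.statement_start_offset)/2) + 1) AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC;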
I have two (almost) identical queries:
;with cteMasterCreateExecAsXml as (
SELECT guid, CAST(ExecutionOrder AS xml) as x
FROM (SELECT guid, ExecutionOrder
FROM #StrategiesToCreate sc
JOIN InFusion_Data.dbo.compare_FoxStrategy fs ON fs.Guid = sc.MasterGuid
WHERE fs.LinkedToTemplate = 0) k
) -- selects ~45 rows
SELECT MasterBIV.StrategyTagname, MasterBIV.Tagname, MasterBIV.Name,
b.blk.value('for $i in . return count(../*[. << $i]) + 1', 'int') as ExecOrder
INTO #MasterCreateExecOrder
FROM cteMasterCreateExecAsXml
CROSS APPLY x.nodes('//ExecutionOrder/Block') as b(blk)
JOIN InFusion_Data.dbo.compare_BlockInfoView MasterBIV ON MasterBIV.BlockGuid = b.blk.value('@Id', 'uniqueidentifier')
And
;with cteMasterExecAsXml as (
SELECT guid, CAST(ExecutionOrder AS xml) as x
FROM (SELECT guid, ExecutionOrder FROM #StrategiesToCompare sc
JOIN InFusion_Data.dbo.compare_FoxStrategy fs ON fs.Guid = sc.MasterGuid) k
) -- selects ~17000 rows
SELECT MasterBIV.StrategyTagname, MasterBIV.Tagname, MasterBIV.Name,
b.blk.value('for $i in . return count(../*[. << $i]) + 1', 'int') as ExecOrder
INTO #MasterExecOrder
FROM cteMasterExecAsXml
CROSS APPLY x.nodes('//ExecutionOrder/Block') as b(blk)
JOIN InFusion_Data.dbo.compare_BlockInfoView MasterBIV ON MasterBIV.BlockGuid = b.blk.value('@Id', 'uniqueidentifier')
According to the SQL execution plan, they both take the same amount of time. The first one deals with 45 rows and the second one deals with ~17000 rows. This makes me think that many rows are being selected and converted to XML in both queries.
Unfortunately, because I need to run the query across servers, I can't use an XML column in the schema.
Any idea what's going on here or how I can speed up my queries?
As I see it, the difference is the "WHERE fs.LinkedToTemplate = 0" condition.
According to the SQL execution plan, they both take the same amount of time. The first one deals with 45 rows and the second one deals with ~17000 rows. This makes me think that many rows are being selected and converted to XML in both queries.
It looks like you have no index on the LinkedToTemplate column, so a full table scan is needed for both queries.
So to improve performance, you should create an index on the LinkedToTemplate column.
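A hedged sketch of that index (the table name comes from the posted query; whether to add included columns such as Guid or ExecutionOrder depends on your schema and row sizes):

-- Sketch only: lets the WHERE fs.LinkedToTemplate = 0 filter seek instead of scanning.
CREATE NONCLUSTERED INDEX IX_compare_FoxStrategy_LinkedToTemplate
    ON InFusion_Data.dbo.compare_FoxStrategy (LinkedToTemplate)
    INCLUDE (Guid);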
I have been working on the query below. Basically there are two tables, Realtime_Input and Realtime_Output. I join the two tables, take the necessary columns, and make this a view; when I query against the view I get duplicates.
What am I doing wrong? When I tested using the DISTINCT keyword, I get 60 unique rows, but intermittently I get duplicates. My DB is on Cloud Foundry (Postgres). Is it because of that? Please help!
select i2.key_ts_long,
case
when i2.revenue_activepower = 'NA'
then (-1 * CAST(io.min5_forecast as real))
else (CAST(i2.revenue_activepower AS real) - CAST(io.min5_forecast as real))
end as diff
from realtime_analytic_input i2,
(select i.farm_id,
i.key_ts_long,
o.min5_forecast,
o.min5_timestamp_seconds
from realtime_analytic_input i,
realtime_analytic_output o
where i.farm_id = o.farm_id
and i.key_ts_long = o.key_ts_long
and o.farm_id = 'MW1'
) io
where i2.key_ts_long = CAST(io.min5_timestamp_seconds AS bigint)
and i2.farm_id = io.farm_id
and i2.farm_id = 'MW1'
and io.key_ts_long between 1464738953169 and 1466457841
order by io.key_ts_long desc
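One way to check whether the join itself is multiplying rows is the sketch below, run against the tables named in the query; it assumes (farm_id, key_ts_long) is intended to be unique in each table:

-- Diagnostic sketch: if either table holds repeated (farm_id, key_ts_long) pairs,
-- each repeat multiplies the joined output, which shows up as "duplicates" in the view.
select farm_id, key_ts_long, count(*) as cnt
from realtime_analytic_output
where farm_id = 'MW1'
group by farm_id, key_ts_long
having count(*) > 1;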
I wrote this SQL query:
SELECT *
FROM
dbo.RDB_LOG_ITEM,
(
SELECT '000' + CAST(operatore as varchar) + cast(scontrino as varchar) search
FROM
(
SELECT
N0_XACT_NO scontrino,
N0_OPERATOR_NO operatore
FROM
dbo.RDB_SCALE_ITEM
WHERE
BL_RECORD_EXPLODED = 0 AND
N0_COUNTER_NO = 1 AND
DT_TIME_STAMP LIKE '20160526%'
) db
) db2
WHERE
DT_TIME_STAMP > '2016-05-26T00:00:00.000' AND
SZ_SCALE_LABEL LIKE db2.search + '%'
But this query takes 3+ seconds to execute. The result of this query is a single row, and the db2 select returns only 7 rows.
I think that when I use from data1, db2, SQL does a cross join (data1 is a big table with something like 300k+ rows), which slows the process.
If I hard-code the select with the result from the second select, I get the result in 0.01 sec, like this: select * from data1 where DT_TIME_STAMP > '2016-05-26T00:00:00.000' and SZ_SCALE_LABEL like '0001013530%'
How can I use db2 without cross joining it with the other table?
Edit:
The subquery:
(
SELECT '000' + CAST(operatore as varchar) + cast(scontrino as varchar) search
FROM
(
SELECT
N0_XACT_NO scontrino,
N0_OPERATOR_NO operatore
FROM
dbo.RDB_SCALE_ITEM
WHERE
BL_RECORD_EXPLODED = 0 AND
N0_COUNTER_NO = 1 AND
DT_TIME_STAMP LIKE '20160526%'
) db
) db2
gives me X rows like
0001013530
0001013531
0001013532
0001013533
0001013534
What I need is a query like this: select * from dbo.RDB_LOG_ITEM where DT_TIME_STAMP > '2016-05-26T00:00:00.000' and (SZ_SCALE_LABEL like '0001013530%' or SZ_SCALE_LABEL like '0001013531%' or SZ_SCALE_LABEL like '0001013532%' or SZ_SCALE_LABEL like '0001013533%' or SZ_SCALE_LABEL like '0001013534%')
I think it is something like the subquery IN (http://www.dofactory.com/sql/subquery) but with LIKE.
PS: sorry for the incomplete post, but I was at work and they were kicking me out at closing time :-)
First, you should be using explicit JOINs. Your WHERE clause should not have more than one table (in this case, considering the subquery as a "table").
Second, there is no need for a subquery here at all. You need to get out of the habit of leaning on subqueries and think in a set-based way when writing SQL.
Finally, use your aliases to prefix all of your columns in a query. It's impossible to tell where some of these columns are coming from without prefixing them.
I believe that this will get you the same results and will ideally be done in a performant way. If it's not, then you will need to post the full table structures (including indexes) as well as the query plans for anyone to be able to help you with performance issues.
SELECT
LI.dt_time_stamp,
LI.sz_scale_label,
<list other columns here, because we **never** use SELECT *>
FROM
dbo.RDB_LOG_ITEM LI
INNER JOIN dbo.RDB_SCALE_ITEM SI ON
SI.bl_record_exploded = 0 AND
SI.n0_counter_no = 1 AND
SI.dt_time_stamp LIKE '20160526%' AND -- You should be using date datatypes for your date columns
LI.sz_scale_label LIKE '000' + CAST(SI.n0_operator_no AS VARCHAR(20)) + CAST(SI.n0_xact_no AS VARCHAR(20)) + '%' -- I guessed at appropriate VARCHAR sizes; adjust them to your schema
WHERE
LI.dt_time_stamp > '2016-05-26T00:00:00.000'
How much data do your tables have? If they have huge amounts of data, your execution time will be longer without a join. I had a similar experience; using a join reduced my execution time significantly.
With this query I'm getting the slowest queries in SQL Server.
SELECT TOP 20
SUBSTRING(qt.text, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.min_logical_reads, qs.max_logical_reads,
qs.total_elapsed_time, qs.last_elapsed_time,
qs.min_elapsed_time, qs.max_elapsed_time,
qs.last_execution_time,
qp.query_plan
FROM
sys.dm_exec_query_stats qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY
sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE
qt.encrypted = 0
ORDER BY
qs.total_logical_reads DESC
What I want to do is find each query's last 10 execution times.
Alternatively, an average execution time per day would also make me happy.
You can create snapshots of your database procedure executions at certain points in time and then compare them. Use the SQL below to insert each snapshot into a table.
SELECT
GETDATE() as SnapshotDate,
s.database_id,
s.[plan_handle],
s.[object_id],
s.last_execution_time,
s.execution_count,
s.total_worker_time,
s.last_worker_time,
s.min_worker_time,
s.max_worker_time,
s.total_physical_reads,
s.last_physical_reads,
s.min_physical_reads,
s.max_physical_reads,
s.total_logical_writes,
s.last_logical_writes,
s.min_logical_writes,
s.max_logical_writes,
s.total_logical_reads,
s.last_logical_reads,
s.min_logical_reads,
s.max_logical_reads,
s.total_elapsed_time,
s.last_elapsed_time,
s.min_elapsed_time,
s.max_elapsed_time
FROM sys.dm_exec_procedure_stats AS s
WHERE s.database_id NOT IN
(
DB_ID('master'),
DB_ID(N'tempdb'),
DB_ID(N'model'), DB_ID(N'msdb'),
32767 -- RESOURCE db
) ;
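To compare the snapshots afterwards, one hedged sketch (the dbo.ProcStatsSnapshot table name and its columns are assumptions, not from the original post) is to persist each run and then diff the cumulative counters between the two most recent snapshots:

-- Sketch: persist a snapshot (assumes dbo.ProcStatsSnapshot exists with matching columns).
INSERT INTO dbo.ProcStatsSnapshot (SnapshotDate, database_id, plan_handle, object_id, execution_count, total_elapsed_time)
SELECT GETDATE(), s.database_id, s.plan_handle, s.object_id, s.execution_count, s.total_elapsed_time
FROM sys.dm_exec_procedure_stats AS s;

-- Average elapsed time per execution between the two most recent snapshots.
-- Note: the DMV counters are cumulative and reset when a plan leaves the cache.
SELECT cur.object_id,
       cur.execution_count - prev.execution_count AS execs_between_snapshots,
       (cur.total_elapsed_time - prev.total_elapsed_time)
         / NULLIF(cur.execution_count - prev.execution_count, 0) AS avg_elapsed_per_exec
FROM dbo.ProcStatsSnapshot AS cur
JOIN dbo.ProcStatsSnapshot AS prev
  ON prev.plan_handle = cur.plan_handle
 AND prev.SnapshotDate = (SELECT MAX(p.SnapshotDate)
                          FROM dbo.ProcStatsSnapshot AS p
                          WHERE p.SnapshotDate < cur.SnapshotDate)
WHERE cur.SnapshotDate = (SELECT MAX(SnapshotDate) FROM dbo.ProcStatsSnapshot);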
If you want to check what is running slowly and why, check the Standard Reports in SSMS.
I have some questions about my query. I call this stored procedure on my first page, so it is important to me that it is well optimized.
I do a select with some basic where expressions, then I filter the results with some expressions passed into this stored procedure.
It also matters that I select the top n rows; it is going to search through millions of items (though I only have hundreds of items so far) and then do some paging on my website.
Select top (@NumberOfRows)
...
from(
SELECT
row_number() OVER (ORDER BY tblEventOpen.TicketAt, tblEvent.EventName, tblEventDetail.TimeStart) as RowNumber
, ...
FROM --[...some inner join logic...]
WHERE
(tblEventOpen.isValid = 1) AND (tblEvent.isValid = 1) and
(tblCondition_ResellerDetail.ResellerID = 1) AND
(tblEventOpen.TicketAt >= GETDATE()) AND
(GETDATE() BETWEEN
DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.StartTime) , tblEventOpen.TicketAt)
AND DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.EndTime) , tblEventOpen.TicketAt))
) as t1
where RowNumber >= (@PageNumber -1) * @NumberOfRows and
(@city='' or @city is null or city like @city) and
(@At is null or @At=At) and
(@TimeStartInMinute=-1 or @TimeStartInMinute=TimeStartInMinute) and
(@EventName='' or EventName like @EventName) and
(@CategoryID=-1 or @CategoryID = CategoryID) and
(@EventID is null or @EventID = EventID) and
(@DetailID is null or @DetailID = DetailID)
ORDER BY RowNumber
I'm worried about this part:
(GETDATE() BETWEEN
DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.StartTime) , tblEventOpen.TicketAt)
AND DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.EndTime) , tblEventOpen.TicketAt))
How does table t1 execute? I mean, after I put some where expressions after t1 (line 17 and further), does it filter items after t1 has executed? For example, if I filter the result by a row number of 10, does that mean the inner (...) as t1 select will only return 10 items, or does it select all items and then my outer select takes 10 of them?
I want to filter my result by some optional parameters, so I put something like @DetailID is null or @DetailID = DetailID. Is this a good way?
Is there anything else I should consider to make it faster (more optimized)?
My comments on your query:
You're correct to worry about the "GETDATE() BETWEEN ..." condition. Comparing a value with a function that involves more than one table will most likely scan the entire search space. Simplify your condition or, if possible, add a computed column for such a function.
Put all conditions except "RowNumber >= ..." in the inner query.
It's okay to put optional conditions the way you do. I do it too :-)
Make sure you have at least one index for each column used in the WHERE clause as the first column of the index, followed by the primary key. It would be better if your primary key is clustered (see the sketch after these points).
Well, these are based on my own experience; they may or may not be applicable to your situation.
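A hedged sketch of that indexing advice, using table and column names from the posted query (whether these particular key columns are the right choice depends on your schema and data distribution):

-- Sketch only: example indexes leading with columns used in the WHERE clause.
CREATE NONCLUSTERED INDEX IX_tblEventOpen_TicketAt
    ON tblEventOpen (TicketAt, isValid);

CREATE NONCLUSTERED INDEX IX_tblCondition_ResellerDetail_ResellerID
    ON tblCondition_ResellerDetail (ResellerID);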
[UPDATE] Here's the complete query
Select top (@NumberOfRows)
...
from(
SELECT
row_number() OVER (ORDER BY tblEventOpen.TicketAt, tblEvent.EventName, tblEventDetail.TimeStart) as RowNumber
, ...
FROM --[...some inner join logic...]
WHERE
(tblEventOpen.isValid = 1) AND (tblEvent.isValid = 1) and
(tblCondition_ResellerDetail.ResellerID = 1) AND
(tblEventOpen.TicketAt >= GETDATE()) AND
(GETDATE() BETWEEN
DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.StartTime) , tblEventOpen.TicketAt)
AND DATEADD(minute, (tblEventDetail.TimeStart - 60 * tblCondition_ResellerDetail.EndTime) , tblEventOpen.TicketAt)) and
(@city='' or @city is null or city like @city) and
(@At is null or @At=At) and
(@TimeStartInMinute=-1 or @TimeStartInMinute=TimeStartInMinute) and
(@EventName='' or EventName like @EventName) and
(@CategoryID=-1 or @CategoryID = CategoryID) and
(@EventID is null or @EventID = EventID) and
(@DetailID is null or @DetailID = DetailID)
) as t1
where RowNumber >= (@PageNumber -1) * @NumberOfRows
ORDER BY RowNumber
Whilst you can seek advice on your query, it is better to learn how to optimise it yourself.
You need to view the execution plan, identify the bottlenecks and then see if there is anything that can be done to make an improvement.
In SSMS you can click "Query" -> "Include Actual Execution Plan" before you run your query; Ctrl+M is the keyboard shortcut.
Then execute your query. SSMS will create a new tab in the results pane, which shows you how the SQL engine executes your query; you can hover over each node for more information. The cost % will be particularly interesting, allowing you to see the most expensive parts of your query.
It's difficult to advise you any more without that execution plan, which is why a number of people commented on your question. Your schema and indexes change how the query is executed, so it's not something that someone can accurately replicate in their own environment without scripts for tables, indexes, etc. Even then, statistics could be out of date and other problems could arise.
You can also execute SET STATISTICS PROFILE ON to get a textual view of the plan (which may be useful when asking for help).
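A minimal usage sketch (the procedure name and parameter values below are placeholders, not from the original post):

-- Turn the textual plan on, run the statement of interest, then turn it off.
SET STATISTICS PROFILE ON;
EXEC dbo.YourPagingProcedure @NumberOfRows = 20, @PageNumber = 1; -- placeholder call
SET STATISTICS PROFILE OFF;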
There are a number of articles that can help you fix the bottlenecks, or post another question for more advice.
http://msdn.microsoft.com/en-us/library/ms178071.aspx
SQL Server Query Plan Analysis
Execution Plan Basics