I am using Entity Framework to connect to SQL Azure, with data pushed from Azure Functions.
I noticed that over a particular 10-minute interval today, the function threw errors like the following:
An exception has been raised that is likely due to a transient failure. If you are connecting to a SQL Azure database consider using SqlAzureExecutionStrategy
When I looked at the SQL database statistics, resource usage had reached 99% during that window, and it went back to normal afterwards.
How can I find out how many transactions were executed during that timeframe using the Azure portal?
That would probably give me an idea of what caused this load on the server.
In this case what you are seeing is most likely throttling. When throttling occurs, connections are affected, but bad programming can also produce an excessive number of connections, and every tier has a limit on the number of connections. The queries below will help you monitor successful/terminated/throttled connections; the CAST(FLOOR(CAST(getdate() AS float)) AS DATETIME) expression simply truncates the current time to midnight, so both queries cover today only.
select *
from sys.database_connection_stats_ex
where start_time >= CAST(FLOOR(CAST(getdate() AS float)) AS DATETIME)
order by start_time desc
select *
from sys.event_log
where event_type <> 'connection_successful' and
start_time >= CAST(FLOOR(CAST(getdate() AS float)) AS DATETIME)
order by start_time desc
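For a quick per-type total for today you can also aggregate sys.event_log (a small sketch based on the queries above; event_count is the number of occurrences within each reporting interval):
select event_type, sum(event_count) as events_today
from sys.event_log
where start_time >= CAST(FLOOR(CAST(getdate() AS float)) AS DATETIME)
group by event_type
order by events_today desc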
To monitor how often your database reaches its DTU limits, you can use the query below:
SELECT
(COUNT(end_time) - SUM(CASE WHEN avg_cpu_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'CPU Fit Percent',
(COUNT(end_time) - SUM(CASE WHEN avg_log_write_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Log Write Fit Percent',
(COUNT(end_time) - SUM(CASE WHEN avg_data_io_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Physical Data Read Fit Percent'
FROM sys.dm_db_resource_stats
If the query above shows fit percentages below your service level objective (SLO) of 99.9%, consider moving to the next tier.
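If you do decide to scale, you can check the current service objective and change it with T-SQL as well (a minimal sketch; MyDatabase and S3 are placeholders for your own database name and target objective, and the same change can be made from the portal):
SELECT DATABASEPROPERTYEX('MyDatabase', 'ServiceObjective') AS CurrentServiceObjective;
ALTER DATABASE MyDatabase MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');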
To identify the queries causing high IO, high CPU, and high overall resource usage, you can use Query Store; there you can find the queries responsible for the high DTU usage.
-- Top 10 long running queries
SELECT TOP 10 q.query_id, p.plan_id,
rs.count_executions,
qsqt.query_sql_text,
CONVERT(NUMERIC(10,2), (rs.avg_cpu_time/1000000.0)) as 'avg_cpu_time_seconds',
CONVERT(NUMERIC(10,2),(rs.avg_duration/1000000.0)) as 'avg_duration_seconds',
CONVERT(NUMERIC(10,2),rs.avg_logical_io_reads ) as 'avg_logical_io_reads',
CONVERT(NUMERIC(10,2),rs.avg_logical_io_writes ) as 'avg_logical_io_writes',
CONVERT(NUMERIC(10,2),rs.avg_physical_io_reads ) as 'avg_physical_io_reads',
CONVERT(NUMERIC(10,0),rs.avg_rowcount ) as 'avg_rowcount'
from sys.query_store_query q
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
INNER JOIN sys.query_store_query_text qsqt
ON q.query_text_id = qsqt.query_text_id
WHERE rs.last_execution_time > dateadd(hour, -1, getutcdate())
ORDER BY rs.avg_duration DESC
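Note that Query Store must be enabled on the database for the views above to return anything; on Azure SQL Database it is on by default, but you can verify and enable it like this:
SELECT actual_state_desc FROM sys.database_query_store_options;
ALTER DATABASE CURRENT SET QUERY_STORE = ON;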
Related
My application is used with different database instances. A particular query executes in 1 second on all database instances except one, where it takes more than 30 minutes, even though the data volume is almost the same. What can be the reason? My database is Oracle 11g.
Here is the query
SELECT b.VC_CUSTOMER_NAME customer,
TO_CHAR( sum(c.INV_VALUE), '999,999,999,999') value,
ROUND(
(SUM (c.inv_value) / (SELECT SUM (c.inv_value)
FROM mks_mst_customer b,
sls_temp_invoice_ticket c,
sls_dt_invoice_ticket d
WHERE c.vc_comp_code = b.vc_comp_code
AND b.vc_comp_code = '01'
AND INV_LABEL LIKE 'COLLECT FROM CUSTOMER%'
AND d.vc_ticket_no=c.vc_ticket_no
AND d.dt_invoice_date BETWEEN '01-Dec-2021' AND '07-Dec-2021'
AND b.nu_account_code=c.nu_account_code)
)* 100
) PERCENT
FROM mks_mst_customer b,
sls_temp_invoice_ticket c,
sls_dt_invoice_ticket d
WHERE c.vc_comp_code = b.vc_comp_code
AND b.vc_comp_code = '01'
AND INV_LABEL like 'COLLECT FROM CUSTOMER%'
AND b.nu_account_code=c.nu_account_code
AND d.vc_ticket_no=c.vc_ticket_no
AND d.dt_invoice_date BETWEEN '01-Dec-2021' AND '07-Dec-2021'
GROUP BY b.VC_CUSTOMER_NAME
ORDER BY SUM(c.INV_VALUE) DESC
The most obvious step would be to check the indexes; on this slow instance they might not be configured.
A little more demanding would be to gather optimizer statistics.
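For example, you could compare the indexes present on the joined tables and refresh the optimizer statistics on the slow instance (a sketch only; the owner defaults to the current schema and the table names are taken from the query above):
-- List the indexed columns on the tables involved in the query
SELECT table_name, index_name, column_name, column_position
FROM all_ind_columns
WHERE table_name IN ('MKS_MST_CUSTOMER', 'SLS_TEMP_INVOICE_TICKET', 'SLS_DT_INVOICE_TICKET')
ORDER BY table_name, index_name, column_position;
-- Gather fresh statistics (repeat per table, adjust the owner if needed)
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'SLS_DT_INVOICE_TICKET');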
I wrote a view in a database. The view takes 0 seconds to run when called from one database and 2.5 minutes when called from another.
I have created a video that best describes this problem. Watch it here: https://youtu.be/jEqI2bUyelQ
I tried to recreate the view by dropping it.
I tried to compare the query execution plans; they are different when run from one database versus the other.
I looked into the query itself and noticed that if you remove the WHERE clause the performance is regained and it takes the same amount of time from both.
The expected result is that it should take 0 seconds to run no matter which database the view is called from.
Here is the SQL script:
SELECT
cus.MacolaCustNo,
dsp.cmp_code ,
count(distinct dsp.item_no) AS InventoryOnDisplay,
(SELECT max(dsp.LastSynchronizationDate)
FROM Hinkley.dbo.vw_HH_next_Capture_date ) AS UpdatedDate,
case
WHEN DATEADD(DAY, 90, isnull(max(dsp.LastSynchronizationDate),'1/1/1900')) >=
(SELECT max(dsp.LastSynchronizationDate)
FROM Hinkley.dbo.vw_HH_next_Capture_date )
THEN 'Compliant'
WHEN DATEADD(DAY, 90, isnull(max(dsp.LastSynchronizationDate),'1/1/1900')) <=
(SELECT max(dsp.LastSynchronizationDate)
FROM Hinkley.dbo.vw_HH_next_Capture_date )
AND DATEADD(DAY, 90, isnull(max(dsp.LastSynchronizationDate),'1/1/1900')) >= getdate()
THEN 'Warning'
ELSE 'Non-Compliant'
END AS Inventory_Status
FROM
Hinkley.dbo.HLIINVDSP_SQL dsp (nolock)
INNER JOIN
[DATA].dbo.vw_HLI_Customer (nolock) cus
ON cus.CusNo = dsp.cmp_code
WHERE
cus.cust_showroom = 1
AND
cus.active_y = 1
GROUP BY cus.MacolaCustNo,dsp.cmp_code
I'm very new to SQL and still learning. I'm using a reporting tool called SolarWinds Orion, and I'm honestly not sure how much of the query I have written is specific to that program, so if anything in the query is confusing, let me know and I'll try to figure out whether it's program-specific.
The problem with the query is that it times out after a very long time (maybe an hour) of running. The database I'm using is huge; unfortunately I don't really know how huge, but I've been told it is.
Is there anything I am doing wrong that would have a huge performance impact?
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM
((NetflowApplicationSummary
INNER JOIN Nodes ON (NetflowApplicationSummary.NodeID = Nodes.NodeID))
INNER JOIN InterfaceTraffic ON (Nodes.NodeID = InterfaceTraffic.InterfaceID))
INNER JOIN Interfaces ON (Nodes.NodeID = Interfaces.NodeID)
WHERE
( InterfaceTraffic.DateTime > (GetDate()-30) )
AND
(Nodes.WANCircuit = 1)
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
EDIT: I ran COUNT() on each of my tables with the below result.
SELECT COUNT(*) FROM NetflowApplicationSummary   -- 50,671,011
SELECT COUNT(*) FROM Nodes                       -- 898
SELECT COUNT(*) FROM InterfaceTraffic            -- 18,000,166
SELECT COUNT(*) FROM Interfaces                  -- 3,938
-- Total: 68,676,013
I really have no idea if 68 million items is a huge database to be honest.
A couple of notes:
The INNER JOIN operator is associative, so get rid of those parentheses in the FROM clause and let the optimizer figure out the best join order.
You may have an implied cursor from the getdate() function being called for every row. Store the value in a local variable and compare against that.
The resulting SQL should look like this:
DECLARE @Date AS DATETIME = GETDATE() - 30;
SELECT TOP 10000
Nodes.Caption AS NodeName,
NetflowApplicationSummary.AppName AS Application_Name,
SUM(NetflowApplicationSummary.TotalBytes) AS SUM_of_Bytes_Transferred,
AVG(Case OutBandwidth
When 0 Then 0
Else (NetflowApplicationSummary.TotalBytes/OutBandwidth) * 100
End) AS TEST_PERCENT
FROM NetflowApplicationSummary
INNER JOIN Nodes ON NetflowApplicationSummary.NodeID = Nodes.NodeID
INNER JOIN InterfaceTraffic ON Nodes.NodeID = InterfaceTraffic.InterfaceID
INNER JOIN Interfaces ON Nodes.NodeID = Interfaces.NodeID
WHERE InterfaceTraffic.DateTime > @Date
AND Nodes.WANCircuit = 1
GROUP BY Nodes.Caption, NetflowApplicationSummary.AppName
Also, make sure you have an index on the InterfaceTraffic table with DateTime as its leading column. If it doesn't exist you may need to pay the one-time penalty of creating it.
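Something along these lines, for example (a sketch; the included column is an assumption based on the join in your query):
CREATE NONCLUSTERED INDEX IX_InterfaceTraffic_DateTime
    ON InterfaceTraffic ([DateTime])
    INCLUDE (InterfaceID);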
If this doesn't help, post the execution plan so it can be inspected.
Out of interest, also run COUNT(*) on all four tables and post the results, just so members here can make their own assessment of how big your database really is. It is amazing how many non-technical people still think a 1 or 10 GB database is huge, while I run that easily on my workstation!
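You can also measure the size directly rather than guessing, for example by running sp_spaceused in the database in question:
EXEC sp_spaceused;  -- returns the database size and unallocated space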
I tried querying from a table with multiple ORDER BY columns:
SELECT TOP 50
TBL_ContentsPage.NewsId,
TBL_ContentsPage.author,
TBL_ContentsPage.Header,
TBL_ContentsPage.TextContent,
TBL_ContentsPage.PostedDate,
TBL_ContentsPage.status,
TBLTempSettings.templateID
FROM TBL_ContentsPage
INNER JOIN TBLTempSettings
ON TBL_ContentsPage.NewsId = TBLTempSettings.newsId
WHERE TBL_ContentsPage.mode = '1' AND TBLTempSettings.mode = '1' AND (TBLTempSettings.templateID = @templateID OR @templateID = 'all')
ORDER BY 0 + TBLTempSettings.rank DESC
But when I add TBL_ContentsPage.PostedDate DESC, the query takes more than double the time. TBLTempSettings.rank is already indexed.
To sort your query results, SQL Server has to burn CPU time.
One alternative is to drop the ORDER BY, consume all of the query results as fast as possible into memory in your app, and sort them there. Your application is already designed in a way that lets you scale out multiple app servers to distribute CPU load, whereas your database server is not.
Sort operations, besides using the TEMPDB system database as a temporary storage area, also add a significant amount of I/O to the operation.
Therefore, if you frequently see the Sort operator in your queries and it is an expensive operation, consider removing the ORDER BY clause. On the other hand, if you know you will always order the query by a specific column, consider indexing it.
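For instance, a covering index that matches the filter and the sort could look like this (a sketch only; the column choice is an assumption based on the query in the question):
CREATE NONCLUSTERED INDEX IX_TBLTempSettings_mode_rank
    ON TBLTempSettings (mode, [rank] DESC)
    INCLUDE (newsId, templateID);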
Try this one -
SELECT TOP 50 c.newsId
, c.author
, c.Header
, c.TextContent
, c.PostedDate
, c.status
, t.templateID
FROM TBL_ContentsPage c
JOIN (
SELECT *
FROM TBLTempSettings t
WHERE t.mode = '1'
AND (t.templateID = @templateID OR @templateID = 'all')
) t ON c.newsId = CAST(t.newsId AS INT)
WHERE c.mode = '1'
ORDER BY t.rank DESC
With this query I'm getting the top slow queries in SQL Server.
SELECT TOP 20
SUBSTRING(qt.text, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.min_logical_reads, qs.max_logical_reads,
qs.total_elapsed_time, qs.last_elapsed_time,
qs.min_elapsed_time, qs.max_elapsed_time,
qs.last_execution_time,
qp.query_plan
FROM
sys.dm_exec_query_stats qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY
sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE
qt.encrypted = 0
ORDER BY
qs.total_logical_reads DESC
What I want to do is find each query's last 10 execution times.
Or an average execution time per day would make me glad.
You can create snapshots of your database's procedure execution statistics at regular intervals and then compare them. Use the SQL below to insert each snapshot into a table.
SELECT
GETDATE() as SnapshotDate,
s.database_id,
s.[plan_handle],
s.[object_id],
s.last_execution_time,
s.execution_count,
s.total_worker_time,
s.last_worker_time,
s.min_worker_time,
s.max_worker_time,
s.total_physical_reads,
s.last_physical_reads,
s.min_physical_reads,
s.max_physical_reads,
s.total_logical_writes,
s.last_logical_writes,
s.min_logical_writes,
s.max_logical_writes,
s.total_logical_reads,
s.last_logical_reads,
s.min_logical_reads,
s.max_logical_reads,
s.total_elapsed_time,
s.last_elapsed_time,
s.min_elapsed_time,
s.max_elapsed_time
FROM sys.dm_exec_procedure_stats AS s
WHERE s.database_id NOT IN
(
DB_ID('master'),
DB_ID(N'tempdb'),
DB_ID(N'model'), DB_ID(N'msdb'),
32767 -- RESOURCE db
) ;
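To build the snapshots themselves, persist that result set and diff consecutive runs, along these lines (a sketch; the table name and the min/max comparison are assumptions, and the counters reset if a plan is evicted from the cache):
-- First run: create the snapshot table from the DMV
SELECT GETDATE() AS SnapshotDate, s.*
INTO dbo.ProcStatsSnapshot
FROM sys.dm_exec_procedure_stats AS s;
-- Later runs: append a new snapshot
INSERT INTO dbo.ProcStatsSnapshot
SELECT GETDATE(), s.*
FROM sys.dm_exec_procedure_stats AS s;
-- Compare the earliest and latest snapshot per procedure
SELECT [object_id],
       MAX(execution_count) - MIN(execution_count) AS execution_count_delta,
       MAX(total_elapsed_time) - MIN(total_elapsed_time) AS elapsed_time_delta
FROM dbo.ProcStatsSnapshot
GROUP BY [object_id];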
If you want to check what is running slowly and why, also have a look at the Standard Reports in SSMS.