Insanely poor query performance on SQL Azure

We have an S1 (20 DTU, 250 GB) SQL Azure database with the following table:
CREATE TABLE [dbo].[Reads]
(
[ReadId] [INT] IDENTITY(1,1) NOT NULL,
[LicenseNumber] [VARCHAR](50) NULL,
[Name] [VARCHAR](50) NULL,
[Serial] [VARCHAR](20) NULL,
[FirstSeenUtc] [DATETIME] NULL,
[LastSeenUtc] [DATETIME] NULL,
[Count] [INT] NOT NULL,
[Model] [VARCHAR](100) NULL,
[Make] [VARCHAR](100) NULL,
[TimestampUtc] [DATETIME] NOT NULL,
[PortNumber] [INT] NOT NULL,
[Code] [VARCHAR](50) NULL,
[Peak] [FLOAT] NULL,
CONSTRAINT [PK_Reads]
PRIMARY KEY CLUSTERED ([ReadId] ASC)
WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
This table has more than 80 million rows, and a simple query such as
select count(1) from dbo.Reads
took 1 hour and 30 minutes to run. The load on the database is minimal, with a process adding maybe around 1,000 rows every minute. Currently nothing is reading from this table, and overall there is pretty much no load on the database.
I upgraded the database to S2 (50 DTU) and the above query took 18 minutes to run.
I updated stats, but that didn't help much. I ran Brent Ozar's BlitzFirst stored procedure while the above query was running, and it said the database was maxing out data IO. The same database restored on my Surface laptop returns the row count in a second. The Database performance tab does not have any recommendations.
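For reference, the stats update and the check mentioned above were along these lines (a sketch; sp_BlitzFirst is part of Brent Ozar's open-source First Responder Kit):

-- Refresh statistics on all tables in the database.
EXEC sp_updatestats;

-- Sample server activity for 30 seconds and report the top bottlenecks.
EXEC sp_BlitzFirst @Seconds = 30;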
S2 (50 DTU) costs $75 per month, and the next option is S3 (100 DTU) at $150 per month.
My plan was to create a database for every customer I sign up, but at $150 per database per month I will go out of business pretty quickly!
Is this SQL Azure's expected level of performance? Shouldn't this sort of basic query yield an instantaneous result? Would moving to SQL Server on a VM be better?
[Update 2019-03-10 11:35AM EST]
The table does have the following index:
CREATE NONCLUSTERED INDEX [IX_Read_License_Code_TimeStamp] ON [dbo].[Reads]
(
[LicenseNumber] ASC,
[Code] ASC,
[TimestampUtc] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
I see now that some of the columns can safely be changed to NOT NULL, which could help improve things.
[Update: 2019-03-10 8:40PM EST]
I altered the table to make LicenseNumber and Code NOT NULL; that took more than 6 hours. After that, the count query ran in 1 minute and 32 seconds.
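For reference, the alterations were of this shape (a sketch; each ALTER must at least validate every existing row, which is part of why it took so long):

ALTER TABLE [dbo].[Reads] ALTER COLUMN [LicenseNumber] VARCHAR(50) NOT NULL;
ALTER TABLE [dbo].[Reads] ALTER COLUMN [Code] VARCHAR(50) NOT NULL;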
The following query returned results in 40 seconds:
select Code, LicenseNumber, TimeStampUtc from dbo.Reads Where TimestampUtc >= '2019-03-10'

Dropping the index and creating it again did it for me. Before this, even queries that were completely covered by the index were taking several minutes to execute. After re-creating the index, the same queries run in under a second.
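For what it's worth, a single-statement rebuild achieves the same thing as dropping and re-creating; a minimal sketch (ONLINE = ON is optional and assumes your service tier supports online rebuilds):

-- Rebuild the index in place; ONLINE = ON keeps the table readable meanwhile.
ALTER INDEX [IX_Read_License_Code_TimeStamp] ON [dbo].[Reads]
REBUILD WITH (ONLINE = ON);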
Thanks to everyone who commented on this question. I learned new things.

Related

Query with index incredibly slow on Sort

I have a database table with about 3.5 million rows. The table holds contract data records, with an amount, a date, and some IDs relating to other tables (VendorId, AgencyId, StateId). This is the table:
CREATE TABLE [dbo].[VendorContracts]
(
[Id] [uniqueidentifier] NOT NULL,
[ContractDate] [datetime2](7) NOT NULL,
[ContractAmount] [decimal](19, 4) NULL,
[VendorId] [uniqueidentifier] NOT NULL,
[AgencyId] [uniqueidentifier] NOT NULL,
[StateId] [uniqueidentifier] NOT NULL,
[CreatedBy] [nvarchar](max) NULL,
[CreatedDate] [datetime2](7) NOT NULL,
[LastModifiedBy] [nvarchar](max) NULL,
[LastModifiedDate] [datetime2](7) NULL,
[IsActive] [bit] NOT NULL,
CONSTRAINT [PK_VendorContracts]
PRIMARY KEY CLUSTERED ([Id] ASC)
WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
I have a page on my site where the user can filter a paged grid by VendorId and ContractDate, and sort by ContractAmount or ContractDate. This is the query that EF Core produces when sorting by ContractAmount for a particular vendor that has over a million records:
DECLARE @__vendorId_0 uniqueidentifier = 'f39c7198-b05a-477e-b7bc-cb189c5944c0';
DECLARE @__startDate_1 datetime2 = '2017-01-01T07:00:00.0000000';
DECLARE @__endDate_2 datetime2 = '2018-01-02T06:59:59.0000000';
DECLARE @__p_3 int = 0;
DECLARE @__p_4 int = 50;
SELECT [v].[Id], [v].[AdminFee], [v].[ContractAmount], [v].[ContractDate], [v].[PONumber], [v].[PostalCode], [v].[AgencyId], [v].[StateId], [v].[VendorId]
FROM [VendorContracts] AS [v]
WHERE (([v].[VendorId] = @__vendorId_0) AND ([v].[ContractDate] >= @__startDate_1)) AND ([v].[ContractDate] <= @__endDate_2)
ORDER BY [v].[ContractAmount] ASC
OFFSET @__p_3 ROWS FETCH NEXT @__p_4 ROWS ONLY
When I run this, it takes 50 seconds, whether sorting ASC or DESC or offsetting by thousands; it's always 50 seconds.
If I look at my execution plan, I see that it does use my index, but the Sort cost is what's making the query take so long.
This is my index:
CREATE NONCLUSTERED INDEX [IX_VendorContracts_VendorIdAndContractDate] ON [dbo].[VendorContracts]
(
[VendorId] ASC,
[ContractDate] DESC
)
INCLUDE([ContractAmount],[AdminFee],[PONumber],[PostalCode],[AgencyId],[StateId])
WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = OFF, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF)
The strange thing is that I have a similar index for sorting by ContractDate, and that one returns results in less than a second, even on the vendor that has millions of records.
Is there something wrong with my index? Or is sorting by a decimal data type just incredibly intensive?
You have an index that allows the
VendorId = @__vendorId_0 AND ContractDate BETWEEN @__startDate_1 AND @__endDate_2
predicate to be seeked exactly.
SQL Server estimates that 6,657 rows will match this predicate and need to be sorted, so it requests a memory grant suitable for that number of rows.
In reality, for the parameter values where you see the problem, nearly half a million rows are sorted; the memory grant is insufficient and the sort spills to disk.
50 seconds for 10,299 spilled pages still sounds unexpectedly slow, but I assume you may well be on some very low SKU in Azure SQL Database?
Some possible solutions to resolve the issue might be to
Force it to use an execution plan that is compiled for parameter values matching your largest vendor and widest date range (e.g. with an OPTIMIZE FOR hint). This will mean an excessive memory grant for smaller vendors, though, which may mean other queries have to incur memory grant waits.
Use OPTION (RECOMPILE) so every invocation is recompiled for the specific parameter values passed. In theory this means every execution gets an appropriate memory grant, at the cost of more time spent in compilation.
Remove the need for a sort at all. If you have an index on VendorId, ContractAmount INCLUDE (ContractDate), then the VendorId = @__vendorId_0 part can be seeked and the index read in ContractAmount order. Once 50 rows have been found that match the ContractDate BETWEEN @__startDate_1 AND @__endDate_2 predicate, query execution can stop. SQL Server might not choose this execution plan without hints, though (see the sketch below).
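A sketch of the sort-avoiding index described in the last option (the index name is my own; adjust ASC/DESC to match the most common sort direction):

-- Seek on VendorId; rows come back already ordered by ContractAmount,
-- so the ORDER BY ... OFFSET/FETCH can stop after 50 matching rows.
CREATE NONCLUSTERED INDEX [IX_VendorContracts_VendorId_ContractAmount]
ON [dbo].[VendorContracts] ([VendorId] ASC, [ContractAmount] ASC)
INCLUDE ([ContractDate]);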
I'm not sure how easy or otherwise it is to apply query hints through EF, but you could look at forcing a plan via Query Store if you manage to get the desired plan to appear there.
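If the good plan does show up in Query Store, forcing it is a one-liner; a sketch with placeholder IDs (look up the real ones in sys.query_store_query and sys.query_store_plan):

-- 42 and 7 are placeholders for the actual query_id and plan_id.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;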

SQL Statement take long time to execute

I have a SQL Server database with a table containing a great many records. It used to work fine, but now my SQL statement takes a long time to execute, and it sometimes causes the database to use too much CPU.
This is the definition of the table:
CREATE TABLE [dbo].[tblPAnswer1](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[AttrID] [int] NULL,
[Kidato] [int] NULL,
[Wav] [int] NULL,
[Was] [int] NULL,
[ShuleID] [int] NULL,
[Mwaka] [int] NULL,
[Swali] [float] NULL,
[Wilaya] [int] NULL,
CONSTRAINT [PK_tblPAnswer1] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
And the following down is the sql stored procedure for the statement.
ALTER PROC [dbo].[uspGetPAnswer1](@ShuleID int, @Mwaka int, @Swali float, @Wilaya int)
as
SELECT ID,
AttrID,
Kidato,
Wav,
Was,
ShuleID,
Mwaka,
Swali,
Wilaya
FROM dbo.tblPAnswer1
WHERE [ShuleID] = @ShuleID
AND [Mwaka] = @Mwaka
AND [Swali] = @Swali
AND Wilaya = @Wilaya
What is wrong with my SQL statement? Need help.
Just add an index on the ShuleID, Mwaka, Swali, and Wilaya columns. The order of columns in the index should depend on the distribution of data (the column with the most diverse values should come first in the index, and so on).
And if you need it super fast, also include all the remaining columns used in the query, to get a covering index for this particular query; a sketch follows.
EDIT: You should probably move the float column (Swali) from the indexed to the included columns.
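A sketch of the suggested covering index (key order is a guess; reorder by selectivity in your data, with Swali in the INCLUDE list per the edit above):

-- Equality predicates as keys, everything else included so the query
-- never has to touch the clustered index.
CREATE NONCLUSTERED INDEX IX_tblPAnswer1_ShuleID_Mwaka_Wilaya
ON dbo.tblPAnswer1 (ShuleID, Mwaka, Wilaya)
INCLUDE (Swali, AttrID, Kidato, Wav, Was);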
Add an Index on the ID column and include ShuleID, Mwaka, Swali and Wilaya columns. That should help improve the speed of the query.
CREATE NONCLUSTERED INDEX IX_ID_ShuleID_Mwaka_Swali_Wilaya
ON tblPAnswer1 (ID)
INCLUDE (ShuleID, Mwaka, Swali, Wilaya);
What is the size of the table? You may need additional indexes, as you are not using the primary key to query the data. This article by Pinal Dave provides a script to identify missing indexes:
http://blog.sqlauthority.com/2011/01/03/sql-server-2008-missing-index-script-download/
It provides a good starting point for index optimization.

Concurrent SQL Insert Are Blocking to a Table

I am using SQL Server 2005 for an application. In my case, a number of requests are generated by different processes, all inserting records into one table. But when I examine the processes running in the database with the sp_who2 active procedure, I find that the inserts are being blocked by other insert statements, which slows the process down. Is there any way to avoid blocking / deadlocks in concurrent inserts to one table? Below is the structure of my table.
CREATE TABLE [dbo].[Tbl_Meta_JS_Syn_Details](
[ID] [int] IDENTITY(1,1) NOT NULL,
[EID] [int] NULL,
[Syn_Points_ID] [int] NULL,
[Syn_ID] [int] NULL,
[Syn_Word_ID] [int] NULL,
[Created_Date_Time] [datetime] NULL CONSTRAINT [DF_Tbl_JS_Syn_Details_Created_Date_Time] DEFAULT (getdate()),
CONSTRAINT [PK_Tbl_JS_Syn_Details] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
There is always some blocking if many processes are trying to insert into one table. However, some settings can be used to limit the amount of time spent waiting.
What isolation level are you using? The default? http://technet.microsoft.com/en-us/library/ms173763.aspx
Can you include the following information in this post:
1 - How many processes are running (inserting) at the same time?
2 - What type of disk subsystem are you using? RAID 5 or just a simple disk?
3 - What version of SQL Server are you on?
4 - What are the growth options on the database?
5 - How full is the current database?
6 - Is instant file initialization on?
Given the answers to the above questions, the insert process can be optimized.
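In the meantime, a quick way to see what each blocked insert is actually waiting on (a sketch using sys.dm_exec_requests, available from SQL Server 2005 onward):

-- Sessions that are currently blocked, who is blocking them, and the wait type.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,      -- milliseconds
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;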

Slow SQL performance

I have a query as follows;
SELECT COUNT(Id) FROM Table
The table contains 33 million records - it has a primary key on Id and no other indexes.
The query takes 30 seconds.
The actual execution plan shows it uses a clustered index scan.
We have analysed the table and found it isn't fragmented using the first query shown in this link: http://sqlserverpedia.com/wiki/Index_Maintenance.
Any ideas as to why this query is so slow and how to fix it?
The table definition:
CREATE TABLE [dbo].[DbConversation](
[ConversationID] [int] IDENTITY(1,1) NOT NULL,
[ConversationGroupID] [int] NOT NULL,
[InsideIP] [uniqueidentifier] NOT NULL,
[OutsideIP] [uniqueidentifier] NOT NULL,
[ServerPort] [int] NOT NULL,
[BytesOutbound] [bigint] NOT NULL,
[BytesInbound] [bigint] NOT NULL,
[ServerOutside] [bit] NOT NULL,
[LastFlowTime] [datetime] NOT NULL,
[LastClientPort] [int] NOT NULL,
[Protocol] [tinyint] NOT NULL,
[TypeOfService] [tinyint] NOT NULL,
CONSTRAINT [PK_Conversation_1] PRIMARY KEY CLUSTERED
(
[ConversationID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
One thing I have noticed is that the database is set to grow in 1 MB chunks.
It's a live system, so we're restricted in what we can play with - any ideas?
UPDATE:
OK - we've improved performance of the actual query of interest by adding new non-clustered indexes on appropriate columns, so it's not a critical issue anymore.
SELECT COUNT is still slow though - we tried it with NOLOCK hints - no difference.
We're all thinking it's something to do with the autogrowth being set to 1 MB rather than a larger number, but we're surprised it has this effect. Could MDF fragmentation on disk be a possible cause?
Is this a frequently read/inserted/updated table? Is there update/insert activity concurrent with your select?
My guess is the delay is due to contention.
I'm able to run a count on 189m rows in 17 seconds on my dev server, but there's nothing else hitting that table.
If you aren't too worried about contention or absolute accuracy, you can run:
exec sp_spaceused 'MyTableName'
which will give a count based on metadata.
If you want a more exact count but don't necessarily care whether it reflects concurrent DELETE or INSERT activity, you can run your current query with a NOLOCK hint:
SELECT COUNT(id) FROM MyTable WITH (NOLOCK)
which will not take row-level locks and should run faster.
Thoughts:
Use SELECT COUNT(*), which is the correct way to ask "how many rows" (as per ANSI SQL). Even though ID is the PK and thus not nullable, with COUNT(ID) SQL Server will count ID values, not rows.
If you can live with approximate counts, use sys.dm_db_partition_stats (a sketch follows this list). See my answer here: Fastest way to count exact number of rows in a very large table?
If you can live with dirty reads, use WITH (NOLOCK).
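The metadata-based approximate count mentioned above, as a sketch:

-- Sums row counts from the heap or clustered index metadata; approximate,
-- but it returns instantly and takes no locks on the table.
SELECT SUM(ps.row_count) AS approx_rows
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.DbConversation')
  AND ps.index_id IN (0, 1);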
use [DatabaseName]
select tbl.name, dd.rows
from sysindexes dd
inner join sysobjects tbl on dd.id = tbl.id
where dd.indid < 2 and tbl.xtype = 'U'
select sum(dd.rows)
from sysindexes dd
inner join sysobjects tbl on dd.id = tbl.id
where dd.indid < 2 and tbl.xtype = 'U'
These queries fetch the row counts for all user tables within 0-5 seconds; add a WHERE clause according to your requirements.
Another idea: when the files grow in 1 MB parts, the data file may become fragmented on the file system. You can't see this from SQL; you can see it with a disk defragmentation tool.
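If the 1 MB autogrowth does turn out to be the culprit, bumping the increment is a one-liner; a sketch with placeholder database and logical file names (find the real ones in sys.database_files):

-- MyDb / MyDb_Data are placeholders for the actual names.
ALTER DATABASE [MyDb]
MODIFY FILE (NAME = N'MyDb_Data', FILEGROWTH = 256MB);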

Massive CROSS JOIN in SQL Server 2005

I'm porting a process which creates a MASSIVE CROSS JOIN of two tables. The resulting table contains 15 million records (it looks like the process makes a 30 million-row cross join of a 2,600-row table and a 12,000-row table and then does some grouping which must cut it in half). The rows are relatively narrow - just 6 columns. It's been running for 5 hours with no sign of completion. I only just noticed the count discrepancy between the known-good table and what I would expect from the cross join, so my output doesn't have the grouping or deduping that will halve the final table - but this still seems like it's not going to complete any time soon.
First I'm going to look at eliminating this table from the process if at all possible - obviously it could be replaced by joining to both tables individually, but right now I don't have visibility into everywhere else it is used.
But given that the existing process does it (in less time, on a less powerful machine, using the FOCUS language), are there any options for improving the performance of large CROSS JOINs in SQL Server 2005? (Hardware is not really an option; this box is a 64-bit 8-way with 32 GB of RAM.)
Details:
It's written this way in FOCUS (I'm trying to produce the same output, which is a CROSS JOIN in SQL):
JOIN CLEAR *
DEFINE FILE COSTCENT
WBLANK/A1 = ' ';
END
TABLE FILE COSTCENT
BY WBLANK BY CC_COSTCENT
ON TABLE HOLD AS TEMPCC FORMAT FOCUS
END
DEFINE FILE JOINGLAC
WBLANK/A1 = ' ';
END
TABLE FILE JOINGLAC
BY WBLANK BY ACCOUNT_NO BY LI_LNTM
ON TABLE HOLD AS TEMPAC FORMAT FOCUS INDEX WBLANK
JOIN CLEAR *
JOIN WBLANK IN TEMPCC TO ALL WBLANK IN TEMPAC
DEFINE FILE TEMPCC
CA_JCCAC/A16=EDIT(CC_COSTCENT)|EDIT(ACCOUNT_NO);
END
TABLE FILE TEMPCC
BY CA_JCCAC BY CC_COSTCENT AS COST CENTER BY ACCOUNT_NO
BY LI_LNTM
ON TABLE HOLD AS TEMPCCAC
END
So the required output really is a CROSS JOIN (it's joining a blank column from each side).
In SQL:
CREATE TABLE [COSTCENT](
[COST_CTR_NUM] [int] NOT NULL,
[CC_CNM] [varchar](40) NULL,
[CC_DEPT] [varchar](7) NULL,
[CC_ALSRC] [varchar](6) NULL,
[CC_HIER_CODE] [varchar](20) NULL,
CONSTRAINT [PK_LOOKUP_GL_COST_CTR] PRIMARY KEY NONCLUSTERED
(
[COST_CTR_NUM] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [JOINGLAC](
[ACCOUNT_NO] [int] NULL,
[LI_LNTM] [int] NULL,
[PR_PRODUCT] [varchar](5) NULL,
[PR_GROUP] [varchar](1) NULL,
[AC_NAME_LONG] [varchar](40) NULL,
[LI_NM_LONG] [varchar](30) NULL,
[LI_INC] [int] NULL,
[LI_MULT] [int] NULL,
[LI_ANLZ] [int] NULL,
[LI_TYPE] [varchar](2) NULL,
[PR_SORT] [varchar](2) NULL,
[PR_NM] [varchar](26) NULL,
[PZ_SORT] [varchar](2) NULL,
[PZNAME] [varchar](26) NULL,
[WANLZ] [varchar](3) NULL,
[OPMLNTM] [int] NULL,
[PS_GROUP] [varchar](5) NULL,
[PS_SORT] [varchar](2) NULL,
[PS_NAME] [varchar](26) NULL,
[PT_GROUP] [varchar](5) NULL,
[PT_SORT] [varchar](2) NULL,
[PT_NAME] [varchar](26) NULL
) ON [PRIMARY]
CREATE TABLE [JOINCCAC](
[CA_JCCAC] [varchar](16) NOT NULL,
[CA_COSTCENT] [int] NOT NULL,
[CA_GLACCOUNT] [int] NOT NULL,
[CA_LNTM] [int] NOT NULL,
[CA_UNIT] [varchar](6) NOT NULL,
CONSTRAINT [PK_JOINCCAC_KNOWN_GOOD] PRIMARY KEY CLUSTERED
(
[CA_JCCAC] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
With the SQL Code:
INSERT INTO [JOINCCAC]
(
[CA_JCCAC]
,[CA_COSTCENT]
,[CA_GLACCOUNT]
,[CA_LNTM]
,[CA_UNIT]
)
SELECT Util.PADLEFT(CONVERT(varchar, CC.COST_CTR_NUM), '0', 7)
+ Util.PADLEFT(CONVERT(varchar, GL.ACCOUNT_NO), '0', 9) AS CC_JCCAC
,CC.COST_CTR_NUM AS CA_COSTCENT
,GL.ACCOUNT_NO % 900000000 AS CA_GLACCOUNT
,GL.LI_LNTM AS CA_LNTM
,udf_BUPDEF(GL.ACCOUNT_NO, CC.COST_CTR_NUM, GL.LI_LNTM, 'N') AS CA_UNIT
FROM JOINGLAC AS GL
CROSS JOIN COSTCENT AS CC
Depending on how this table is subsequently used, it should be possible to eliminate it from the process by simply joining to both of the original tables used to build it. However, this is an extremely large porting effort, and I might not find the usage of the table for some time, so I was wondering if there are any tricks to CROSS JOINing big tables like that in a timely fashion (especially given that the existing FOCUS process is able to do it more speedily). That way I could validate the correctness of my replacement query and later factor it out with views or whatever.
I am also considering factoring out the UDFs and string manipulation and performing the CROSS JOIN first, to break the process up a bit.
RESULTS SO FAR:
It turns out that the UDFs contribute a lot (negatively) to the performance. But there also appears to be a big difference between a 15 million-row cross join and a 30 million-row cross join. I do not have SHOWPLAN rights (boo hoo), so I can't tell whether the plan it is using is better or worse after changing indexes. I have not refactored it yet, but I am expecting the entire table to go away shortly.
Examining that query shows only one column used from one table, and only two columns used from the other. Given the very low number of columns used, this query can easily be enhanced with covering indexes:
CREATE INDEX COSTCENTCoverCross ON COSTCENT(COST_CTR_NUM)
CREATE INDEX JOINGLACCoverCross ON JOINGLAC(ACCOUNT_NO, LI_LNTM)
Here are my questions for further optimization:
When you put the query in query analyzer and whack the "show estimated execution plan" button, it will show a graphical representation of what it's going to do.
Join type: there should be a nested loop join in there (the other options are merge join and hash join). If you see a nested loop, then OK. If you see a merge join or a hash join, let us know.
Order of table access: go all the way to the top and scroll all the way to the right. The first step should be accessing a table. Which table is that, and what method is used (index scan, clustered index scan)? What method is used to access the other table?
Parallelism: you should see the little jagged arrows on almost all icons in the plan, indicating that parallelism is being used. If you don't see them, there is a major problem!
That udf_BUPDEF concerns me. Does it read from additional tables? Util.PADLEFT concerns me less, but still... what is it? If it isn't a database object, then consider using this instead:
RIGHT('0000000' + CONVERT(varchar(7), columnName), 7)
Are there any triggers on JOINCCAC? How about indexes? With an insert this large, you'll want to drop all triggers and indexes on that table.
Continuing on what others are saying: DB functions that contain queries, when used in a SELECT, have always made my queries extremely slow. Off the top of my head, I believe I had a query that ran in 45 seconds; I removed the function and the result came back in 0 seconds :)
So check that udf_BUPDEF is not doing any queries.
Break the query down into a plain, simple cross join:
SELECT CC.COST_CTR_NUM, GL.ACCOUNT_NO
,CC.COST_CTR_NUM AS CA_COSTCENT
,GL.ACCOUNT_NO AS CA_GLACCOUNT
,GL.LI_LNTM AS CA_LNTM
-- I don't know what is BUPDEF doing? but remove it from the query for time being
-- ,udf_BUPDEF(GL.ACCOUNT_NO, CC.COST_CTR_NUM, GL.LI_LNTM, 'N') AS CA_UNIT
FROM JOINGLAC AS GL
CROSS JOIN COSTCENT AS CC
See how fast the plain cross join is on its own (without any functions applied to it).