I am using SQL Server 2005 for an application. In my case, a number of requests are generated by different processes, each inserting records into one table. But when I examine the processes running in the database with sp_who2 'active', I find that the inserts are being blocked by other insert statements, which slows the process down. Is there any way to avoid blocking/deadlocks in concurrent inserts to one table? Below is the structure of my table.
CREATE TABLE [dbo].[Tbl_Meta_JS_Syn_Details](
[ID] [int] IDENTITY(1,1) NOT NULL,
[EID] [int] NULL,
[Syn_Points_ID] [int] NULL,
[Syn_ID] [int] NULL,
[Syn_Word_ID] [int] NULL,
[Created_Date_Time] [datetime] NULL CONSTRAINT [DF_Tbl_JS_Syn_Details_Created_Date_Time] DEFAULT (getdate()),
CONSTRAINT [PK_Tbl_JS_Syn_Details] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
There will always be some blocking if many processes are trying to insert into one table. However, some settings can be used to limit how long a session waits.
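For example, SET LOCK_TIMEOUT caps how long a session waits on a lock before the statement fails with error 1222; a minimal sketch (the 5-second value and the inserted values are for illustration only):

-- Abort any statement that waits on a lock for longer than 5 seconds
-- (error 1222, "Lock request time out period exceeded").
SET LOCK_TIMEOUT 5000;

INSERT INTO [dbo].[Tbl_Meta_JS_Syn_Details] (EID, Syn_Points_ID, Syn_ID, Syn_Word_ID)
VALUES (1, 2, 3, 4);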
What isolation level are you using? The default? http://technet.microsoft.com/en-us/library/ms173763.aspx
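A session's current level can be checked with a query along these lines against sys.dm_exec_sessions (DBCC USEROPTIONS shows the same information):

SELECT session_id,
       CASE transaction_isolation_level
            WHEN 0 THEN 'Unspecified'
            WHEN 1 THEN 'Read Uncommitted'
            WHEN 2 THEN 'Read Committed'   -- the default
            WHEN 3 THEN 'Repeatable Read'
            WHEN 4 THEN 'Serializable'
            WHEN 5 THEN 'Snapshot'
       END AS isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;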
Can you include the following information in this post:
1 - How many processes are running (inserting) at the same time?
2 - What type of disk sub-system are you using? RAID 5 or just a simple disk.
3 - What version of SQL Server are you on?
4 - What are the growth options on the database?
5 - How full is the current database?
6 - Is instant file initialization on?
Given answers to the above questions, you can optimize the insert process.
I have a table where anywhere between 1 and 5 million records come in as a batch, and then a bunch of stored procedures are run on them that update and delete records in the batch.
All of these stored procedures are using two fields for selectivity so they only run on the records in that batch.
Both of these fields are in a nonclustered index.
There are times when multiple batches run at the same time, and I am continuously getting deadlocks between batches, which I assume are due to lock escalations.
I'm trying to figure out whether there is a way to solve this without a complete redesign to use a dedicated table for each batch. Is disabling page locks asking for more trouble?
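For reference, disabling page locks would be a one-line index option change; a sketch of what I mean, not a recommendation:

-- Removes the page-lock granularity on this index; row locks can still
-- escalate directly to a table lock.
ALTER INDEX [IX_TempImport_Main] ON [dbo].[TempImport] SET (ALLOW_PAGE_LOCKS = OFF);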
Additional information:
Example of the table structure and index (the real one has a lot more columns than this):
CREATE TABLE [dbo].[TempImport](
[UID] [int] IDENTITY(1,1) NOT NULL,
[EID] [int] NULL,
[EXTID] [int] NULL,
[COL1] [varchar](50) NULL,
[COL2] [varchar](50) NULL,
CONSTRAINT [PK_TempImport] PRIMARY KEY CLUSTERED
(
[UID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [IX_TempImport_Main] ON [dbo].[TempImport]
(
[EID] ASC,
[EXTID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
GO
And the types of queries in the stored procedures look something like this:
update TempImport set COL1 = 'foo' where EID = @EID and EXTID = @EXT and COL2='bar'
And the last thing that happens when the batch completes is something like this:
Delete from TempImport where EID = @EID and EXTID = @EXT
It is typically the delete and the updates in the stored procedures that are involved in the deadlock.
Please let me know if any other info would be useful
Just some potential suggestions:
Yeah, you could split them up into (say) batches of 2000, to stop the row locks escalating to table locks, but you'd have a gazillion running. Probably not a good solution.
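For illustration, chunking the final delete might look like this sketch (2,000 is comfortably below the roughly 5,000-lock threshold at which escalation is attempted):

-- Delete the batch in chunks small enough that lock escalation is never triggered.
WHILE 1 = 1
BEGIN
    DELETE TOP (2000) FROM TempImport
    WHERE EID = @EID AND EXTID = @EXT;

    IF @@ROWCOUNT = 0 BREAK;
END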
You could modify the update processes (as per Brent Ozar's video re deadlocks I recommended in the comments) to update each table once and once only. As a suggestion, you could also try encapsulating them in transactions. These could remove deadlocks, but add blocking (where the second operation has to wait for the first to finish).
A structural method is to make a 'loading' or 'scratch' table which has IDs and relevant data to be operated on (you could see this as a queue). When calls to do updates come in, they simply insert their requests into that queue. Then you have an asynchronous process (that can only be called one at a time) that gets all outstanding data from the queue (and flags it as such), does the relevant processing, then cleans out the processed data from the scratch table.
Note that if you have other things accessing/using this table, then they could get blocked or deadlocked on it too, so this approach needs to be handled very carefully.
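A minimal sketch of the queue pattern, with all names hypothetical:

-- Requests are only ever inserted here; no process touches rows it didn't claim.
CREATE TABLE dbo.WorkQueue (
    QueueID   INT IDENTITY(1,1) PRIMARY KEY,
    EID       INT NOT NULL,
    EXTID     INT NOT NULL,
    Claimed   BIT NOT NULL DEFAULT 0
);

-- The single asynchronous worker flags everything outstanding in one statement
-- (in practice you'd OUTPUT INTO a table variable to know what was claimed)...
UPDATE dbo.WorkQueue
SET Claimed = 1
OUTPUT inserted.QueueID, inserted.EID, inserted.EXTID
WHERE Claimed = 0;

-- ...processes those rows, then cleans out what it claimed.
DELETE FROM dbo.WorkQueue WHERE Claimed = 1;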
We have an S1 (20 DTU, 250 GB) SQL Azure database with the following table:
CREATE TABLE [dbo].[Reads]
(
[ReadId] [INT] IDENTITY(1,1) NOT NULL,
[LicenseNumber] [VARCHAR](50) NULL,
[Name] [VARCHAR](50) NULL,
[Serial] [VARCHAR](20) NULL,
[FirstSeenUtc] [DATETIME] NULL,
[LastSeenUtc] [DATETIME] NULL,
[Count] [INT] NOT NULL,
[Model] [VARCHAR](100) NULL,
[Make] [VARCHAR](100) NULL,
[TimestampUtc] [DATETIME] NOT NULL,
[PortNumber] [INT] NOT NULL,
[Code] [VARCHAR](50) NULL,
[Peak] [FLOAT] NULL,
CONSTRAINT [PK_Reads]
PRIMARY KEY CLUSTERED ([ReadId] ASC)
WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
This table has more than 80 million rows, and a simple query such as
select count(1) from dbo.Reads
took 1 hour and 30 minutes to run. The load on the database is minimal, with a process adding maybe around 1,000 rows every minute. Currently nothing is reading from this table, and overall there is pretty much no load on the database.
I upgraded the database to S2: 50 DTU and the above query took 18 minutes to run.
I updated stats, but it didn't help much. I ran Brent Ozar's BlitzFirst stored procedure while the above query was running, and it said the database is maxing out data IO. The same database restored on my Surface laptop returns the row count in a second. The database performance tab does not have any recommendations.
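As an aside, when only the row count is needed, it can be read from metadata instead of scanning the table; a sketch using sys.dm_db_partition_stats:

-- Near-instant, and accurate except for in-flight transactions.
SELECT SUM(row_count) AS total_rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.Reads')
  AND index_id IN (0, 1);  -- heap or clustered index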
S2: 50 DTU costs $75 per month, and the next option is S3: 100 DTU at $150 per month.
My plan was to create a database for every customer I sign up but at $150 per database per month I will go out of business pretty quick!
Is this SQL Azure's expected level of performance? Shouldn't this sort of basic query yield an instantaneous result? Would moving to SQL Server on a VM be better?
[Update 2019-03-10 11:35AM EST]
The table does have the following nonclustered index:
CREATE NONCLUSTERED INDEX [IX_Read_License_Code_TimeStamp] ON [dbo].[Reads]
(
[LicenseNumber] ASC,
[Code] ASC,
[TimestampUtc] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
I see now that some of the columns can safely be changed to NOT NULL, which could help improve things.
[Update: 2019-03-10 8:40PM EST]
I altered the table to make LicenseNumber and Code NOT NULL, which took more than 6 hours. After that, the count query ran in 1 minute and 32 seconds.
The following query returned results in 40 seconds:
select Code, LicenseNumber, TimeStampUtc from dbo.Reads Where TimestampUtc >= '2019-03-10'
Dropping the index and creating it again did it for me. Before this, even queries that were completely covered by the index were taking several minutes to execute. After re-creating the index, the same queries run in under a second.
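For reference, the drop-and-recreate can usually be done in place with a rebuild, which rewrites the index and clears fragmentation:

ALTER INDEX [IX_Read_License_Code_TimeStamp] ON [dbo].[Reads] REBUILD;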
Thanks to everyone who commented on this question. I learned new things.
I have the following table structure:
CREATE TABLE [dbo].[TableABC](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[FieldA] [nvarchar](36) NULL,
[FieldB] [int] NULL,
[FieldC] [datetime] NULL,
[FieldD] [nvarchar](255) NULL,
[FieldE] [decimal](19, 5) NULL,
PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
I do two types of CRUD operations on this table.
SELECT * FROM [dbo].[TableABC] WHERE FieldA = @FieldA
INSERT INTO [dbo].[TableABC](FieldA,FieldB,FieldC,FieldD,FieldE) VALUES (@FieldA,@FieldB,@FieldC,@FieldD,@FieldE)
FieldA has a unique value, but there is no constraint in the table.
Currently there are 6,070,755 rows in the table. As the data grows, performance is getting slower.
Any suggestions on how to improve performance? How can I make the CREATE and READ operations faster?
The problem I face now is that the select and the insert take too long, sometimes more than 60 seconds.
Read up on SQL basics - and indices DEFINITELY are one. If you have a unique value and no index on the field (the constraint is irrelevant; a unique index is good enough) - yes, that will get slower. SQL Server has to check the whole table.
So:
Add a unique index to FieldA.
Given your two statements and the little "FieldA has a unique value, but there is no constraint in the table.", I assume you are trying to enforce unique values by selecting first. This will slow you down.
Instead, create the index, and then try/catch the duplicate-key SQL errors - WAY faster. WAY faster. The index will make the insert a LITTLE slower, but you more than make up for it by dropping the very slow select entirely.
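A sketch of that approach; the index name is hypothetical, 2601/2627 are the duplicate-key error numbers, and THROW needs SQL Server 2012 or later:

CREATE UNIQUE NONCLUSTERED INDEX UX_TableABC_FieldA
    ON [dbo].[TableABC] (FieldA);

-- Skip the pre-check SELECT and just insert, swallowing only duplicates:
BEGIN TRY
    INSERT INTO [dbo].[TableABC] (FieldA, FieldB, FieldC, FieldD, FieldE)
    VALUES (@FieldA, @FieldB, @FieldC, @FieldD, @FieldE);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() NOT IN (2601, 2627)
        THROW;  -- 2601/2627 = duplicate key: ignore; anything else gets rethrown
END CATCH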
Using version:
Microsoft SQL Server 2008 R2 (SP3-OD) (KB3144114) - 10.50.6542.0 (Intel X86)
Feb 22 2016 18:12:09
Copyright (c) Microsoft Corporation
Standard Edition on Windows NT 5.2 <X86> (Build : )
I have a heavy table (135K rows) that I moved from another DB.
It transferred with the [id] column as a standard int column instead of being the identity key & seed column.
When I try to edit that field to become an identity specification with a seed value, it errors out and gives me this error:
Execution Timeout Expired.
The timeout period elapsed prior to completion of the operation...
I even tried deleting that column, to recreate it later, but I get the same issue.
Thanks
UPDATE:
Table structure:
CREATE TABLE [dbo].[tblEmailsSent](
[id] [int] IDENTITY(1,1) NOT NULL, -- this is what it should be; currently it's just an [int] NOT NULL
[Sent] [datetime] NULL,
[SentByUser] [nvarchar](50) NULL,
[ToEmail] [nvarchar](150) NULL,
[StudentID] [int] NULL,
[SubjectLine] [nvarchar](200) NULL,
[MessageContent] [nvarchar](max) NULL,
[ReadStatus] [bit] NULL,
[Folder] [nvarchar](50) NULL,
CONSTRAINT [PK_tblMessages] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
I think that your question is a duplicate of Adding an identity to an existing column. That question has an answer that should be perfect for your situation. I'll reproduce its essential part below.
But before that, let's clarify why you see the timeout error.
You are trying to add the IDENTITY property to an existing column, and you are using the SSMS GUI for it. A simple ALTER COLUMN statement can't do it, and even if it could, SSMS generates a script that creates a new table, copies over the data into the new table, drops the old table, and renames the new table to the old name. When you do this operation via the SSMS GUI, it runs its scripts with a predefined timeout of 30 seconds.
Of course, you can change this setting in SSMS and increase the timeout, but there is a much better way.
Simple/lazy way
Use SSMS GUI to change the column definition, but then instead of clicking "Save", click "Generate Change Script" in the table designer.
Then save this script to a file and review the generated T-SQL code that GUI runs behind the scene.
You'll see that it creates a temp table with the required schema, copies data over, re-creates foreign keys and indexes, drops the old table and renames the new table.
The script itself is usually correct, but pay close attention to the transactions in it. For some reason SSMS often doesn't use a single transaction for the whole operation, but several transactions. I'd recommend manually reviewing the script and making sure that there is only one BEGIN TRANSACTION at the top and one COMMIT at the end. You don't want to end up with a half-done operation, say, a table where all indexes and foreign keys were dropped.
If it is a one-off operation, that could be enough for you. Your table is only 2.4GB, so it may take a few minutes, but it should not be hours.
If you run the T-SQL script yourself in SSMS, then by default there is no timeout. You can stop it yourself if it takes too long.
The smart and fast way to do it is described in detail in this answer by Justin Grant.
The main idea is to use the ALTER TABLE ... SWITCH statement so the change touches only metadata, without touching each page of the table.
BEGIN TRANSACTION;
-- create a new table with required schema
CREATE TABLE [dbo].[NEW_tblEmailsSent](
[id] [int] IDENTITY(1,1) NOT NULL,
[Sent] [datetime] NULL,
[SentByUser] [nvarchar](50) NULL,
[ToEmail] [nvarchar](150) NULL,
[StudentID] [int] NULL,
[SubjectLine] [nvarchar](200) NULL,
[MessageContent] [nvarchar](max) NULL,
[ReadStatus] [bit] NULL,
[Folder] [nvarchar](50) NULL,
CONSTRAINT [PK_tblEmailsSent] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
-- switch the tables
ALTER TABLE [dbo].[tblEmailsSent] SWITCH TO [dbo].[NEW_tblEmailsSent];
-- drop the original (now empty) table
DROP TABLE [dbo].[tblEmailsSent];
-- rename new table to old table's name
EXEC sp_rename 'NEW_tblEmailsSent','tblEmailsSent';
COMMIT;
After the new table has the IDENTITY property, you normally should set the current identity value to the maximum of the actual values in your table. If you don't, new rows inserted into the table would start from 1.
One way to do it is to run DBCC CHECKIDENT after you switch the tables:
DBCC CHECKIDENT('dbo.tblEmailsSent')
Alternatively, you can specify the new seed in the table definition:
CREATE TABLE [dbo].[NEW_tblEmailsSent](
[id] [int] IDENTITY(<max value of id + 1>, 1) NOT NULL,
I have a SQL Server database with a table containing too many records. It was working fine before, but now the SQL statement takes a long time to execute when I run it.
Sometimes it causes the SQL database to use too much CPU.
This is the definition of the table:
CREATE TABLE [dbo].[tblPAnswer1](
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[AttrID] [int] NULL,
[Kidato] [int] NULL,
[Wav] [int] NULL,
[Was] [int] NULL,
[ShuleID] [int] NULL,
[Mwaka] [int] NULL,
[Swali] [float] NULL,
[Wilaya] [int] NULL,
CONSTRAINT [PK_tblPAnswer1] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
And the following is the stored procedure for the statement:
ALTER PROC [dbo].[uspGetPAnswer1](@ShuleID int, @Mwaka int, @Swali float, @Wilaya int)
as
SELECT ID,
AttrID,
Kidato,
Wav,
Was,
ShuleID,
Mwaka,
Swali,
Wilaya
FROM dbo.tblPAnswer1
WHERE [ShuleID] = @ShuleID
AND [Mwaka] = @Mwaka
AND [Swali] = @Swali
AND Wilaya = @Wilaya
What is wrong with my SQL statement? Need help.
Just add an index on the ShuleID, Mwaka, Swali and Wilaya columns. The order of columns in the index should depend on the distribution of the data (the column with the most diverse values should come first in the index, and so on).
And if you need it super-fast, also include all the remaining columns used in the query, to have a covering index for this particular query.
EDIT: You should probably move the float column (Swali) from the indexed columns to the included columns.
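For example, something along these lines (index name hypothetical, key order pending the distribution check described above; the INCLUDE list makes it covering for this query):

CREATE NONCLUSTERED INDEX IX_tblPAnswer1_ShuleID_Mwaka_Wilaya
    ON [dbo].[tblPAnswer1] (ShuleID, Mwaka, Wilaya)
    INCLUDE (Swali, AttrID, Kidato, Wav, Was);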
Add an Index on the ID column and include ShuleID, Mwaka, Swali and Wilaya columns. That should help improve the speed of the query.
CREATE NONCLUSTERED INDEX IX_ID_ShuleID_Mwaka_Swali_Wilaya
ON tblPAnswer1 (ID)
INCLUDE (ShuleID, Mwaka, Swali, Wilaya);
What is the size of the table? You may need additional indices as you are not using the primary key to query the data. This article by Pinal Dave provides a script to identify missing indices.
http://blog.sqlauthority.com/2011/01/03/sql-server-2008-missing-index-script-download/
It provides a good starting point for index optimization.