Simulate a deadlock using stored procedure - sql

Does anyone know how to simulate a deadlock using a stored procedure inserting or updating values? I could only do so in Sybase using individual commands.
Thanks,
Ver

Create two stored procedures.
The first should start a transaction, modify table 1 (and take a long time) and then modify table 2.
The second should start a transaction, modify table 2 (and take a long time) and then modify table 1.
Ideally, the modifications should affect the same rows, or create table locks.
Then, in a client application, start SP1 and immediately afterwards start SP2 (before SP1 has finished).
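A minimal sketch of that pattern, assuming two placeholder tables t1 and t2 that each already contain a row with id = 1 (all names here are illustrative, not from the original question):
-- Sketch only: replace t1/t2 and the ids with your own tables and keys.
CREATE PROCEDURE dbo.usp_Deadlock_A
AS
BEGIN
BEGIN TRANSACTION
UPDATE t1 SET val = 'A' WHERE id = 1 -- lock table 1 first
WAITFOR DELAY '00:00:10' -- hold the lock "a long time"
UPDATE t2 SET val = 'A' WHERE id = 1 -- then try to lock table 2
COMMIT
END
GO
CREATE PROCEDURE dbo.usp_Deadlock_B
AS
BEGIN
BEGIN TRANSACTION
UPDATE t2 SET val = 'B' WHERE id = 1 -- lock table 2 first
WAITFOR DELAY '00:00:10'
UPDATE t1 SET val = 'B' WHERE id = 1 -- then try to lock table 1: deadlock
COMMIT
END
GO
Run EXEC dbo.usp_Deadlock_A in one session and EXEC dbo.usp_Deadlock_B in a second session within the ten-second window; one of the two will be chosen as the deadlock victim.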

The simple and short way to get a deadlock is to access the tables' data in reverse order from two connections, which introduces a cyclic dependency between them. Let me show you the code:
Create table vin_deadlock (id int, Name Varchar(30))
GO
Insert into vin_deadlock values (1, 'Vinod')
Insert into vin_deadlock values (2, 'Kumar')
Insert into vin_deadlock values (3, 'Saravana')
Insert into vin_deadlock values (4, 'Srinivas')
Insert into vin_deadlock values (5, 'Sampath')
Insert into vin_deadlock values (6, 'Manoj')
GO
Now, with the table ready, just update the rows in reverse order from two connections, like this:
-- Connection 1
Begin Tran
Update vin_deadlock
SET Name = 'Manoj'
Where id = 6
WAITFOR DELAY '00:00:10'
Update vin_deadlock
SET Name = 'Vinod'
Where id = 1
and from connection 2
-- Connection 2
Begin Tran
Update vin_deadlock
SET Name = 'Vinod'
Where id = 1
WAITFOR DELAY '00:00:10'
Update vin_deadlock
SET Name = 'Manoj'
Where id = 6
This will result in a deadlock. You can see the deadlock graph in Profiler.

Start a process which continuously inserts into or updates a table using a WHILE loop in a script, and then run your desired SP.
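A rough sketch of such a loop, reusing the vin_deadlock table from the previous answer (run it in one session while your SP runs in another):
-- keep touching rows in a loop so locks are constantly being taken and released
WHILE 1 = 1
BEGIN
UPDATE vin_deadlock SET Name = Name WHERE id = 1
WAITFOR DELAY '00:00:00.100' -- short pause between iterations
END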

Related

tsql rowcount without executing update [duplicate]

This question already has answers here:
how to know how many rows will be affected before running a query in microsoft sql server 2008
I have a T-SQL UPDATE statement and I'd like to know the number of rows that would be affected, or the @@ROWCOUNT, before really executing the update.
I think I could replace the UPDATE part of the query with a SELECT COUNT(1), but it seems to me there should be an easier way.
Is there an easier way?
If there is no easy solution in SQL, a solution in .NET/C# would be ok for me too; I'm using System.Data.SqlClient.SqlCommand to execute the command anyway.
I'm using MSSQL 2008.
Example:
create table T (
A char(5),
B int
)
insert into T values ('a', 1)
insert into T values ('A', 1)
insert into T values ('B', 2)
insert into T values ('X', 1)
insert into T values ('D', 4)
-- I do not want to execute this query
-- update T set A = 'X' where B = 1
-- but I want to get the @@ROWCOUNT of it; in this case
-- the wanted result would be 3.
One method would be to use transactions:
begin transaction;
declare @rc int;
update T set A = 'X' where B = 1;
set @rc = @@rowcount;
. . .
commit; -- or rollback
Just about any other method would have race conditions, if other threads might be updating the table.
However, I am suspicious that this solves a real problem. I suspect that your real problem might have a better solution. Perhaps you should ask another question explaining what you are really trying to do.
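For completeness, the SELECT COUNT rewrite mentioned in the question would look like the sketch below; it simply counts the rows the UPDATE's WHERE clause matches, and it is subject to the same race conditions noted above.
-- same filter as the UPDATE, counting instead of modifying
SELECT COUNT(*) AS rows_that_would_change
FROM T
WHERE B = 1;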
You could wrap your script in a transaction and use ROLLBACK at the end to forgo saving your changes.
create table T (
A char(5),
B int
)
insert into T values ('a', 1)
insert into T values ('A', 1)
insert into T values ('B', 2)
insert into T values ('X', 1)
insert into T values ('D', 4)
BEGIN TRANSACTION;
update T set A = 'X' where B = 1
SELECT @@ROWCOUNT;
ROLLBACK TRANSACTION;
When you are ready to save your changes, switch ROLLBACK to COMMIT and execute, or you can simply remove those lines. You can also name your transactions. Ref: https://learn.microsoft.com/en-us/sql/t-sql/language-elements/transactions-transact-sql
You could add a SELECT * FROM T; after the rest of the script to confirm that your object was not actually updated.
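For reference, a named version of the same rollback pattern (the transaction name CountOnly is made up):
BEGIN TRANSACTION CountOnly;
update T set A = 'X' where B = 1
SELECT @@ROWCOUNT;
ROLLBACK TRANSACTION CountOnly; -- switch to COMMIT TRANSACTION CountOnly when you want to keep the change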

Difference in inserting values into SQL Server

I am using SQL Server 2012
The query is:
drop table x
create table x(id int primary key)
insert into x values(5)
insert into x values(6)
begin tran
insert into x values(1),(2),(3),(3),(4)--Primary key violation
commit tran
select * from x
This returns
5
6
and another query
drop table x
create table x(id int primary key)
insert into x values(5)
insert into x values(6)
begin tran
insert into x values(1)
insert into x values(2)
insert into x values(3)
insert into x values(3) --Primary key violation
insert into x values (4)
commit tran
select * from x
This returns
1
2
3
4
5
6
So what is the difference in inserting values in SQL Server between those two queries, and why the different result sets?
Sample 1 has a single insert statement for 1, 2, 3, 3, 4. This is a "bulk" insert statement (however, SQL Server uses the term bulk insert in a different fashion). Essentially it means all the inserts in this line are executed as one single action.
Sample 2 has separate insert statements. Since there is no exception handling in place, there is no reason for the transaction to abort. The error is ignored, the other records are added, and the result is then what you see.
SQL Server executes the queries as batches. So if any error occurs in the batch, according to MSDN, one of the following is possible:
No statements in the batch are executed.
No statements in the batch are executed and the transaction is rolled back.
All of the statements before the error statement are executed.
All of the statements except the error statement are executed.
In your first case, "No statements in the batch are executed." And in your second case, "All of the statements except the error statement are executed."
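If you want the second script to behave like the first one (nothing kept when any insert fails), one option is SET XACT_ABORT ON, which turns a run-time error such as the primary key violation into a batch-terminating error that rolls the whole transaction back; a sketch against the same table:
SET XACT_ABORT ON;
begin tran
insert into x values(1)
insert into x values(2)
insert into x values(3)
insert into x values(3) -- PK violation: the batch stops and the transaction is rolled back
insert into x values(4)
commit tran
GO
select * from x -- only 5 and 6 remain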
For more about SQL batches, please refer to the following MSDN articles:
Batches of SQL Statements
Executing Batches
Errors and Batches

Create a Trigger to insert rows in another table

After creating a stored procedure for the table dbo.terms to insert data into it, using this code:
CREATE PROCEDURE dbo.terms
@Term_en NVARCHAR(50) = NULL ,
@Createdate DATETIME = NULL ,
@Writer NVARCHAR(50) = NULL ,
@Term_Subdomain NVARCHAR(50) = NULL
AS
BEGIN
SET NOCOUNT ON
INSERT INTO dbo.terms
(
Term_en ,
Createdate ,
Writer ,
Term_Subdomain
)
VALUES
(
@Term_en = 'Cat' ,
@Createdate = '2013-12-12' ,
@Writer = 'Fadi' ,
@Term_Subdomain = 'English'
)
END
GO
I want to create a trigger on it to add more rows to a table dbo.term_prop.
I used this code:
CREATE TRIGGER triggerdata
AFTER INSERT
ON dbo.terms
FOR EACH ROW
BEGIN
INSERT INTO dbo.term_prop VALUES
('قطة', term_ar, upper(:new.term_ar) , null , 'chat', term_fr, upper(:new.term_fr) , null ,'Animal', Def_en, upper(:new.Def_en) , null ,'حيوان', Def_ar, upper(:new.Def_ar) , null ,'Animal', Def_fr, upper(:new.Def_fr) , null);
END;
and it shows me an error.
To add more rows you can use the inserted pseudo-table.
This is a special table populated with the rows inserted by the triggering statement.
An example is:
INSERT INTO dbo.term_prop
SELECT * FROM inserted
So you mustn't use FOR EACH ROW.
The correct definition of your trigger will be
CREATE TRIGGER triggername ON table AFTER INSERT
AS BEGIN
END
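Putting the two pieces together, a sketch of the complete trigger (this assumes dbo.term_prop has columns matching dbo.terms; adjust both column lists to your real schema):
CREATE TRIGGER triggerdata ON dbo.terms
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON
-- copy every row just inserted into dbo.terms over to dbo.term_prop
INSERT INTO dbo.term_prop (Term_en, Createdate, Writer, Term_Subdomain)
SELECT Term_en, Createdate, Writer, Term_Subdomain
FROM inserted
END
GO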
Joe's answer is a good one, and this is more a piece of advice.
Avoid triggers where you can: they can cause maintenance nightmares and are tricky to maintain and debug.
If you want to insert rows into another table after inserting into the first one, just put that code in the same SP, as sketched below.
If you need an auto-generated identity value, you can get it using @@IDENTITY, SCOPE_IDENTITY(), or IDENT_CURRENT().
Try to keep things simple.
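A sketch of that approach for the tables in this question (the procedure name is made up, and it assumes dbo.term_prop mirrors the columns of dbo.terms):
CREATE PROCEDURE dbo.terms_insert
@Term_en NVARCHAR(50),
@Createdate DATETIME,
@Writer NVARCHAR(50),
@Term_Subdomain NVARCHAR(50)
AS
BEGIN
SET NOCOUNT ON
BEGIN TRANSACTION
INSERT INTO dbo.terms (Term_en, Createdate, Writer, Term_Subdomain)
VALUES (@Term_en, @Createdate, @Writer, @Term_Subdomain)
-- the second insert lives in the same procedure, so no trigger is needed;
-- SCOPE_IDENTITY() would be available here if dbo.terms had an identity column to carry over
INSERT INTO dbo.term_prop (Term_en, Createdate, Writer, Term_Subdomain)
VALUES (@Term_en, @Createdate, @Writer, @Term_Subdomain)
COMMIT
END
GO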
Wow, I am still surprised that triggers are given a bad rap! I wrote a dozen articles on them a long time ago ...
Like anything in life, the use of triggers depends on the situation.
1 - Triggers are great for tracking DDL changes. Who changed that table?
http://craftydba.com/?p=2015
2 - Triggers can track DML changes (insert, update, delete). However, on large tables with high transaction volumes, they can slow down processing.
http://craftydba.com/?p=2060
However, with today's hardware, what is slow for me might not be slow for you.
3 - Triggers are great at tracking logins and/or server changes.
http://craftydba.com/?p=1909
So, let's get back on track and talk about your situation.
Why are you trying to make a duplicate entry on just an insert action?
Other options, right out of the SQL Server engine, to solve this problem are:
1 - Move data from table 1 to table 2 via a custom job: "Insert Into table2 select * from table1 where etl_flag = 0;". Of course, make it transactional and update the flag after the insert is complete. I am just considering inserts, without deletes or updates.
2 - If you want to track just changes, check out change data capture. It reads from the transaction log. It is not as instant as a trigger, i.e. it does not fire for every record; it just runs as a SQL Agent job in the background to load the cdc tables (a minimal enable script is sketched after this list).
3 - Replicate the data from one server1.database1.table1 to server2.database2.table2.
ETC ...
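For option 2, enabling change data capture is roughly the following (these are the real system procedures; the schema and table name are placeholders):
-- enable CDC for the current database, then for the table you want tracked;
-- the capture and cleanup jobs run under SQL Server Agent
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@role_name = NULL; -- NULL means no gating role is required to read the change data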
I hope my post reminds everyone that the situation determines the solution.
Triggers are good in certain situations, otherwise, they would have been removed from the product a long time ago.
And if the situation changes, then the solution might have to change ...

Issue with SQL Server trigger event firing

I have a trigger on a table that is something like this:
ALTER TRIGGER [shoot_sms]
ON [dbo].[MyTable]
AFTER INSERT
AS
begin
declare @number bigint
declare @body varchar(50)
declare @flag int
select @number = number, @body = body, @flag = flag from inserted
if (@flag = 0)
begin
insert into temptable (number, body, status)
select @number, @body, 'P'
end
end
Now I am making two entries in mytable as below:
insert into mytable(number, body, flag)
values(3018440225, 'This is test', 0)
insert into mytable(number, body, flag)
values(3018440225, 'This is test', 0)
I execute these queries together, but for both queries the trigger fires only once and performs the task for the first query only.
How can I make it work for both insert statements?
Just an idea, but put a GO statement between those two insert statements; that might cause the trigger to fire twice.
You should probably rewrite your trigger to handle multiple-row inserts, I think.
Here is your query converted. You should get two rows now.
ALTER TRIGGER [shoot_sms]
ON [dbo].[MyTable]
AFTER INSERT
AS
begin
insert into temptable (number,body,status)
select number,body,'P'
from inserted
where flag = 0
end
Also notice your trigger is much simpler now.
Since those two statements are in one SQL batch, the trigger will (by design) only fire once.
Triggers don't fire once per row - they fire once per statement! So if you have an INSERT or UPDATE statement that affects more than one row, your trigger will have more than one row in the Inserted (and possibly Deleted) pseudo tables.
The way you wrote this trigger is really not taking into account that Inserted could contain multiple rows - what row do you select from the Inserted table if you're inserting 20 rows at once?
select @number = number, @body = body, @flag = flag from inserted
You need to change your trigger to take that into account!

SQL Server: Is it possible to insert into two tables at the same time?

My database contains three tables called Object_Table, Data_Table and Link_Table. The link table just contains two columns, the identity of an object record and an identity of a data record.
I want to copy the data from DATA_TABLE where it is linked to one given object identity and insert corresponding records into Data_Table and Link_Table for a different given object identity.
I can do this by selecting into a table variable and then looping through, doing two inserts for each iteration.
Is this the best way to do it?
Edit: I want to avoid a loop for two reasons. The first is that I'm lazy, and a loop/temp table requires more code; more code means more places to make a mistake. The second reason is a concern about performance.
I can copy all the data in one insert, but how do I get the link table to link to the new data records when each record has a new id?
In one statement: No.
In one transaction: Yes
BEGIN TRANSACTION
DECLARE @DataID int;
INSERT INTO DataTable (Column1 ...) VALUES (....);
SELECT @DataID = scope_identity();
INSERT INTO LinkTable VALUES (@ObjectID, @DataID);
COMMIT
The good news is that the above code is also guaranteed to be atomic, and can be sent to the server from a client application with one sql string in a single function call as if it were one statement. You could also apply a trigger to one table to get the effect of a single insert. However, it's ultimately still two statements and you probably don't want to run the trigger for every insert.
You still need two INSERT statements, but it sounds like you want to get the IDENTITY from the first insert and use it in the second, in which case, you might want to look into OUTPUT or OUTPUT INTO: http://msdn.microsoft.com/en-us/library/ms177564.aspx
The following sets up the situation I had, using table variables.
DECLARE @Object_Table TABLE
(
Id INT NOT NULL PRIMARY KEY
)
DECLARE @Link_Table TABLE
(
ObjectId INT NOT NULL,
DataId INT NOT NULL
)
DECLARE @Data_Table TABLE
(
Id INT NOT NULL Identity(1,1),
Data VARCHAR(50) NOT NULL
)
-- create two objects '1' and '2'
INSERT INTO @Object_Table (Id) VALUES (1)
INSERT INTO @Object_Table (Id) VALUES (2)
-- create some data
INSERT INTO @Data_Table (Data) VALUES ('Data One')
INSERT INTO @Data_Table (Data) VALUES ('Data Two')
-- link all data to first object
INSERT INTO @Link_Table (ObjectId, DataId)
SELECT Objects.Id, Data.Id
FROM @Object_Table AS Objects, @Data_Table AS Data
WHERE Objects.Id = 1
Thanks to another answer that pointed me towards the OUTPUT clause I can demonstrate a solution:
-- now I want to copy the data from from object 1 to object 2 without looping
INSERT INTO @Data_Table (Data)
OUTPUT 2, INSERTED.Id INTO @Link_Table (ObjectId, DataId)
SELECT Data.Data
FROM @Data_Table AS Data INNER JOIN @Link_Table AS Link ON Data.Id = Link.DataId
INNER JOIN @Object_Table AS Objects ON Link.ObjectId = Objects.Id
WHERE Objects.Id = 1
It turns out, however, that it is not that simple in real life, because of the following error:
the OUTPUT INTO clause cannot be on either side of a (primary key, foreign key) relationship
I can still OUTPUT INTO a temp table and then finish with a normal insert. So I can avoid my loop, but I cannot avoid the temp table.
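A sketch of that workaround, reusing the table variables above plus a hypothetical temp table #NewDataIds:
-- capture the new identity values in a temp table first
CREATE TABLE #NewDataIds (DataId INT NOT NULL)
INSERT INTO @Data_Table (Data)
OUTPUT INSERTED.Id INTO #NewDataIds (DataId)
SELECT Data.Data
FROM @Data_Table AS Data INNER JOIN @Link_Table AS Link ON Data.Id = Link.DataId
INNER JOIN @Object_Table AS Objects ON Link.ObjectId = Objects.Id
WHERE Objects.Id = 1
-- then finish with a normal insert into the link table
INSERT INTO @Link_Table (ObjectId, DataId)
SELECT 2, DataId FROM #NewDataIds
DROP TABLE #NewDataIds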
I want to stress using
SET XACT_ABORT ON;
for an MSSQL transaction with multiple SQL statements.
See: https://msdn.microsoft.com/en-us/library/ms188792.aspx
They provide a very good example.
So, the final code should look like the following:
SET XACT_ABORT ON;
BEGIN TRANSACTION
DECLARE @DataID int;
INSERT INTO DataTable (Column1 ...) VALUES (....);
SELECT @DataID = scope_identity();
INSERT INTO LinkTable VALUES (@ObjectID, @DataID);
COMMIT
It sounds like the Link table captures the many:many relationship between the Object table and Data table.
My suggestion is to use a stored procedure to manage the transactions. When you want to insert into the Object or Data table, perform your inserts, get the new IDs, and insert them into the Link table.
This allows all of your logic to remain encapsulated in one easy-to-call sproc.
If you want the actions to be more or less atomic, I would make sure to wrap them in a transaction. That way you can be sure both happened or both didn't happen as needed.
You might create a View selecting the column names required by your insert statement, add an INSTEAD OF INSERT Trigger, and insert into this view.
Before being able to do a multitable insert in Oracle, you could use a trick involving an insert into a view that had an INSTEAD OF trigger defined on it to perform the inserts. Can this be done in SQL Server?
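It can; here is a sketch using the DataTable/LinkTable names from the earlier answer (the view name, trigger name, and the Column1/DataID/ObjectID columns are assumptions about the schema):
-- a view exposing the columns a caller would like to insert in one statement
CREATE VIEW dbo.DataWithLink
AS
SELECT d.Column1, l.ObjectID
FROM DataTable AS d
INNER JOIN LinkTable AS l ON l.DataID = d.DataID
GO
-- the INSTEAD OF trigger intercepts the insert and performs the two real inserts
CREATE TRIGGER dbo.DataWithLink_Insert
ON dbo.DataWithLink
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON
INSERT INTO DataTable (Column1)
SELECT Column1 FROM inserted
-- this sketch assumes one row per insert so SCOPE_IDENTITY() is unambiguous;
-- for multi-row inserts you would need the OUTPUT clause instead
INSERT INTO LinkTable (ObjectID, DataID)
SELECT ObjectID, SCOPE_IDENTITY() FROM inserted
END
GO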
Insert can only operate on one table at a time. Multiple Inserts have to have multiple statements.
I don't know that you need to do the looping through a table variable - can't you just use a mass insert into one table, then the mass insert into the other?
By the way - I am guessing you mean copy the data from Object_Table; otherwise the question does not make sense.
//if you want to insert the same as the first table
$qry = "INSERT INTO table (one, two, three) VALUES('$one','$two','$three')";
$result = @mysql_query($qry);
$qry2 = "INSERT INTO table2 (one, two, three) VALUES('$one','$two','$three')";
$result = @mysql_query($qry2);
//or if you want to insert certain parts of table one
$qry = "INSERT INTO table (one, two, three) VALUES('$one','$two','$three')";
$result = @mysql_query($qry);
$qry2 = "INSERT INTO table2 (two) VALUES('$two')";
$result = @mysql_query($qry2);
//I know it looks too good to be right, but it works, and you can keep adding queries; just change the "$qry" number and the number in @mysql_query($qry...)
I have 17 tables this has worked in.
-- ================================================
-- Template generated from Template Explorer using:
-- Create Procedure (New Menu).SQL
--
-- Use the Specify Values for Template Parameters
-- command (Ctrl-Shift-M) to fill in the parameter
-- values below.
--
-- This block of comments will not be included in
-- the definition of the procedure.
-- ================================================
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE InsertIntoTwoTable
(
@name nvarchar(50),
@Email nvarchar(50)
)
AS
BEGIN
SET NOCOUNT ON;
insert into dbo.info(name) values (@name)
insert into dbo.login(Email) values (@Email)
END
GO