SQL Server store value during implicit transaction - sql

I'm having to restructure how an existing program deals with the database.
Before, I was executing statements using odbc_php consecutively.
E.g.:
SELECT [Value1] FROM TABLE -- save the result into $value1
INSERT INTO TABLE2 (VALUE2) VALUES ('$value1')
UPDATE TABLE SET [Value1] = '" . ($value1 + 1) . "'
You get the idea.
However, I believe this way of running statements is causing conflicts with 'other users' of the database.
My solution is to run the statements as a single transaction; however, I need values to be saved and reused during that transaction.
So, how do I save values from SELECT statements in MSSQL?
(My skills are not ideal for MSSQL, so any good tutorials or help documents are appreciated.)

You have some issues with dealing with concurrent attempts to hit this sequence of commands. Given that:
You want Table2 to have an entry for every incremented value1
and
You want Table1 to have value1 incremented by 1 every time this is executed.
In which case, I think you need to update Table1 first, and then insert the value as it was prior to the update into Table2, ensuring the sequence remains consecutive:
DECLARE @UpdatedVal TABLE (InsertVal INT)

UPDATE Table1
SET Value1 = (Value1 + 1)
OUTPUT INSERTED.Value1
INTO @UpdatedVal (InsertVal)

INSERT INTO Table2 (VALUE2)
SELECT InsertVal - 1 FROM @UpdatedVal
GO

One example:
declare @value int -- or any datatype; create a variable
select @value = value1
from table -- set the value
insert into table2 (value2) values (@value) -- insert it
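To cover the transaction part of the question, here is a minimal sketch (reusing the question's TABLE/TABLE2 placeholder names, bracketed since TABLE is a reserved word) that saves the value and reuses it inside one explicit transaction. SET IMPLICIT_TRANSACTIONS ON would behave similarly, but an explicit BEGIN TRAN/COMMIT is easier to reason about:
BEGIN TRAN;

DECLARE @value1 INT;

SELECT @value1 = Value1 FROM [TABLE];         -- save the value
INSERT INTO TABLE2 (VALUE2) VALUES (@value1); -- reuse it
UPDATE [TABLE] SET Value1 = @value1 + 1;      -- increment it

COMMIT TRAN;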

Related

Enumerate the multiple rows in a multi-update Trigger

I have something like the table below:
CREATE TABLE updates (
id INT PRIMARY KEY IDENTITY (1, 1),
name VARCHAR (50) NOT NULL,
updated DATETIME
);
And I'm updating it like so:
INSERT INTO updates (name, updated)
VALUES
('fred', '2020-11-11'),
('fred', '2020-11-11'),
...
('bert', '2020-11-11');
I need to write an after-update trigger that enumerates all the name(s) that were added and adds each one to another table, but I can't work out how to enumerate them.
EDIT: Thanks to those who pointed me in the right direction; I know very little SQL.
What I need to do is something like this:
foreach name in inserted
look it up in another table and
retrieve a count of the updates a 'name' has done
add 1 to the count
and update it back into the other table
I can't get to my laptop at the moment, but presumably I can do something like:
BEGIN
SET @count = (SELECT UCount from OTHERTAB WHERE name = ins.name)
SET @count = @count + 1
UPDATE OTHERTAB SET UCount = @count WHERE name = ins.name
SELECT ins.name
FROM inserted ins;
END
and that would work for each name in the update?
Obviously I'll have to read up on set based SQL processing.
Thanks all for the help and pointers.
Based on your edits, you would do something like the following. Set-based is a mindset: you don't need to compute the count in advance (in fact, you can't). It's not clear whether you are counting in the same table or another table, but I'm sure you can work it out.
Points:
Use the Inserted table to determine what rows to update
Use a sub-query to calculate the new value if it's a second table, taking into account the possibility of NULL
If you are really using the same table, then this should work:
BEGIN
UPDATE OTHERTAB SET
UCount = COALESCE(UCount,0) + 1
WHERE [name] in (
SELECT I.[name]
FROM Inserted I
);
END;
If, however, you are using a second table, then this should work:
BEGIN
UPDATE OTHERTAB SET
UCount = COALESCE((SELECT UCount+1 from OTHERTAB T2 WHERE T2.[name] = OTHERTAB.[name]),0)
WHERE [name] in (
SELECT I.[name]
FROM Inserted I
);
END;
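A quick multi-row test of the second-table variant, assuming the body above is wrapped in an AFTER INSERT trigger on updates and that OTHERTAB has name and UCount columns:
INSERT INTO OTHERTAB ([name], UCount) VALUES ('fred', 0), ('bert', 0);

INSERT INTO updates (name, updated)
VALUES ('fred', '2020-11-11'),
       ('fred', '2020-11-11'),
       ('bert', '2020-11-11');

-- both names now show UCount = 1: the trigger fires once per statement,
-- so each name is incremented once, not once per inserted row
SELECT [name], UCount FROM OTHERTAB;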
Using inserted and a set-based approach (no need for a loop):
CREATE TRIGGER trg
ON updates
AFTER INSERT
AS
BEGIN
INSERT INTO tab2(name)
SELECT name
FROM inserted;
END

How to loop over table values when the column value is not incremental in SQL Server

I need to insert values from table T1 into another table T2. T1 is truncate-and-load, so different values can appear after each load. How do I use a loop to insert the data into T2? It should happen automatically, with no manual intervention required, so I can't use a table-valued parameter.
Suppose table1 has column Id
Id
---
4
7
15
I have to insert the data into table 2.
I have used this code:
DECLARE @counter INT = (SELECT MIN(CAST(ID AS INT)) FROM Table1);
WHILE @counter <= (SELECT COUNT(CAST(ID AS INT)) FROM Table1)
BEGIN
INSERT INTO TABLE2 (ID)
VALUES (@counter)
SET @counter = (SELECT ID FROM table1 WHERE @counter = ID)
END
How do I set the counter, or pick the value from Table1.Id, when the values can be different every time?
Please help.
In the absence of any further detail, it seems you could simply rewrite your query as the below:
INSERT INTO Table2 (ID)
SELECT ID
FROM Table1;
There is no need for a loop (WHILE/CURSOR) for what you have here. SQL is a query language and excels at set-based operations. What SQL isn't good at is iterative ones; whenever a CURSOR or WHILE is used, I would suggest it is almost always being misused, and this certainly appears to be one of those times. A WHILE or CURSOR, for even a slightly larger dataset, would be significantly slower, possibly thousands of times slower, than the simple statement above.
Not sure of your logic, and it's almost always better to use set-based solutions, but here is a T-SQL loop solution. I left out the casting, which you may have to add:
declare @curid int
declare @previd int
select @curid = min([ID]) from Table1;
while @curid is not null
begin
INSERT INTO TABLE2 (ID) VALUES (@curid)
set @previd = @curid
select @curid = min([ID])
from Table1
where [ID] > @previd;
end

SQL Server Instead Of Insert trigger on View causes Cannot Insert Null

We have an Instead-Of-Insert trigger on a view which copies all values from the INSERTED virtual-table to another table.
One of the fields in the list is non-nullable for the target table, and has a default value specified.
What we are experiencing is: some application code sends an INSERT command without specifying the non-nullable field, which (if the INSERT were executed against the actual table) would normally result in SQL Server inserting the column's default value. But the trigger lists every field explicitly, so it tries to insert NULL for that field... resulting in an error.
What I DON'T want is code like this...
INSERT INTO XXXX (col1, col2, col3)
SELECT
ISNULL(col1, 0), ISNULL(COL2, 0), ISNULL(COL3, 0)
FROM INSERTED
I don't want the trigger to need to know what the actual default values of each column should be (from a maintainability perspective)...
Does anyone have a better solution?
Thanks
When your application is sending NULL values to a non-nullable column, there are not too many options, especially when you don't want to use input validation with ISNULL.
We use default values in this case. If it is possible, you can alter your table:
ALTER TABLE xxxx ADD CONSTRAINT DF_col1 DEFAULT N'default' FOR col1;
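Note that a default constraint only fires when the column is omitted from the INSERT column list entirely; an explicit NULL overrides it. A minimal sketch of a trigger that leaves the defaulted column out (the view, table, and column names here are hypothetical):
CREATE TRIGGER trg_MyView_Insert ON dbo.MyView
INSTEAD OF INSERT
AS
BEGIN
    -- col3 is deliberately omitted, so the target table's
    -- default constraint applies instead of an explicit NULL
    INSERT INTO dbo.TargetTable (col1, col2)
    SELECT col1, col2
    FROM inserted;
END
The trade-off is that any value the application does supply for col3 is discarded; inside a trigger there is no way to distinguish an omitted column from an explicit NULL.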
I can think of an ugly and inefficient way of doing this. The idea is to insert a default row and then update the columns one at a time, using try/catch to ignore errors.
declare @id int;
insert into XXX DEFAULT VALUES;
set @id = @@IDENTITY;
begin try
update XXX set col1 = val1 where id = @id;
end try
begin catch
end catch;
begin try
update XXX set col2 = val2 where id = @id;
end try
begin catch
end catch;
. . .
If you have to do this on 100 columns, then that could be a bad idea. If you only have two or three columns causing the problems, then this might solve your problem.

Setting multiple scalar variables from a single row in SQL Server 2008?

In a trigger, I have code like:
SET @var1 = (SELECT col1 FROM Inserted);
SET @var2 = (SELECT col2 FROM Inserted);
Is it possible to write the above in a single line? Something conceptually like:
SET (@var1, @var2) = (SELECT col1, col2 FROM Inserted);
Obviously I tried the above, without success; am I just stuck with the first method?
Even if possible, is that a good idea?
Thanks!
Yes, use the first method.
Or...
SELECT
@var1 = col1
,@var2 = col2
FROM
Inserted;
However, it is a major red flag if you are expecting to set variable values like that in a trigger. It generally means the trigger is poorly designed and needs revision. This code expects there to be only one record in inserted, and that is not going to be true in all cases: a multi-record insert or update will put multiple records in inserted, and the trigger must account for that (please, without using a cursor or loop!!!). Triggers should under no circumstances be written to handle only single-record inserts, updates, or deletes. They must be written to handle sets of data.
An example that inserts the values from inserted into another table, where the trigger is on table1:
CREATE TRIGGER mytrigger on table1
AFTER INSERT
AS
INSERT table2 (field1, field2, field3)
SELECT field1, 'test', CASE WHEN field3 >10 THEN field3 ELSE 0 END
FROM inserted
No, it is not possible. SET accepts a single target and value. AFAIK.

Possible to implement a manual increment with just simple SQL INSERT?

I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors, saying:
"Subqueries are not allowed in this context. Only scalar expressions are allowed."
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like the following, and it might cause deadlocks, so be very sure what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
INSERT INTO Table1(id, data_field)
VALUES (@id, '[blob of data]')
COMMIT TRAN
To explain the collision issue, I have provided some code.
First, create this table and insert one row:
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1
while @i < 10000
begin
BEGIN TRAN
INSERT INTO Table1(id, data_field)
SELECT MAX(id) + 1, '[blob of data]' FROM Table1
COMMIT TRAN;
set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery is returning NULL... try:
INSERT INTO tblTest(RecordID, Text)
VALUES ((SELECT ISNULL(MAX(RecordID), 0) + 1 FROM tblTest), 'asdf')
I don't know if somebody is still looking for an answer, but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
--increment the number
Update dbo.NumberTable
set number = number + 1
--find out what the incremented number is
select @number = number
from dbo.NumberTable
--use the number
insert into dbo.MyTable using the @number
commit or rollback
This causes simultaneous transactions to process in single file, as each concurrent transaction will wait because NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used, and if a transaction is rolled back, the NumberTable update is also rolled back, so there are no gaps.
Hope that helps.
Another way to force single-file execution is to use a SQL Server application lock. We have used that approach for longer-running processes, like synchronizing data between systems, so only one synchronizing process can run at a time.
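For reference, a minimal sketch of the application-lock idea applied to this question (the resource name 'Table1_NextId' is arbitrary):
BEGIN TRAN;

-- serialize all callers on a named application lock
EXEC sp_getapplock @Resource = 'Table1_NextId',
     @LockMode = 'Exclusive',
     @LockOwner = 'Transaction';

INSERT INTO Table1 (id, data_field)
SELECT ISNULL(MAX(id), 0) + 1, '[blob of data]'
FROM Table1;

COMMIT TRAN; -- the application lock is released with the transaction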
If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT MAX(id) + 1 FROM Table1)
INSERT INTO Table1 (id, datablob)
SELECT @next, datablob FROM inserted
The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for #next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used. This is only beneficial if you might need to do database merging (see the sketch at the end of this answer).
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
Sorry about rambling. Hope that helps.
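Regarding the sequential-GUID option above, a minimal sketch reusing the question's placeholder names (the constraint names are made up):
CREATE TABLE Table1 (
    id UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Table1_id DEFAULT NEWSEQUENTIALID(),
    data_field VARCHAR(MAX),
    CONSTRAINT PK_Table1 PRIMARY KEY (id)
);

-- the default generates the id; no MAX(id) + 1 logic or locking is needed
INSERT INTO Table1 (data_field) VALUES ('[blob of data]');
NEWSEQUENTIALID() can only appear in a DEFAULT constraint, which is why it is declared on the column rather than called in the INSERT.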
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it will be O(1), while MAX(id) is at best O(lg n), depending on the implementation.
Then, when you need to insert, simply lock the counter table to read the counter and increment it. Then you can release the lock and insert into your table with the incremented counter value.
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT
BEGIN TRAN
UPDATE [TABLE] SET ID = ID + 1
SELECT @NEWID = ID FROM [TABLE]
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);
-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
val = val + 1;
INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
declare @nextId int
set @nextId = (select MAX(id)+1 from Table1)
insert into Table1(id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1')
Any critiques of this? Works for me.
DECLARE @m_NewRequestID INT
, @m_IsError BIT = 1
, @m_CatchEndless INT = 0;
WHILE @m_IsError = 1
BEGIN TRY
SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests);
INSERT INTO Requests ( RequestID
, RequestName
, Customer
, Comment
, CreatedFromApplication)
SELECT RequestID = @m_NewRequestID
, RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
, Customer = @Customer
, Comment = [Description]
, CreatedFromApplication = @CreatedFromApplication
FROM RequestPatterns
WHERE PatternID = @PatternID;
SET @m_IsError = 0;
END TRY
BEGIN CATCH
SET @m_IsError = 1;
SET @m_CatchEndless = @m_CatchEndless + 1;
IF @m_CatchEndless > 1000
THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1;
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
id + 1, '[blob of data]'
FROM
Table1
ORDER BY
[id] DESC;