I have the following function which returns a bit:
Declare @Ret bit
SET @Ret = 0
IF EXISTS ( Select * from tblExclusion where StatusID = 1 and AccountID = @AccountID )
Begin
SET @Ret = 1
End
Return @Ret
Now there can be multiple entries for the same AccountID in the table, or none at all, but only one entry will ever have a StatusID of 1 if it exists.
I have to be honest, I'm not very knowledgeable when it comes to SQL, but the function seems to take a long time to return when called. I'm wondering if there is a more efficient way of writing the above.
Thanks in advance.
An index may be necessary; reviewing the actual execution plan will reveal which index would help.
If you were to modify your query to:
Declare @Ret bit
SET @Ret = 0
IF EXISTS ( Select 1 from tblExclusion where StatusID = 1 and AccountID = @AccountID )
Begin
SET @Ret = 1
End
Return @Ret
A NONCLUSTERED INDEX would be of the format:
USE [DatabaseName]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[tblExclusion] ([StatusID],[AccountID])
<optional, INCLUDE ([columns within the select,]) >
GO
Types of indexes and how to create them: Create Index
If it takes a long time to run, then I would suspect that there is no index on the column "AccountID". Adding an index on that column will probably significantly improve performance. However, without knowing how tblExclusion is defined, there is no way to be certain of this answer. Also, adding an index to StatusID will help as well, assuming there are a large number of entries for different StatusIDs.
Also, since you only need to test the existence of the record, you don't need to select every column in tblExclusion. You could change "*" to "1" or something, though this will not improve performance significantly.
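As a concrete sketch of the indexing suggestion (the index name is hypothetical, and this assumes StatusID = 1 is the only status value the function ever checks), a filtered index on AccountID makes the EXISTS test a seek on a very small structure:

```sql
-- Hypothetical index name; adjust to your own naming convention.
-- The WHERE filter keeps only StatusID = 1 rows in the index,
-- so the EXISTS check becomes a seek against very few rows.
CREATE NONCLUSTERED INDEX IX_tblExclusion_AccountID_Status1
ON dbo.tblExclusion (AccountID)
WHERE StatusID = 1;
```

Filtered indexes require SQL Server 2008 or later; on older versions, a plain composite index on (StatusID, AccountID) as shown above is the equivalent.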
Try this form
Declare @Ret bit
SET @Ret = 0
IF EXISTS ( Select top 1 * from tblExclusion WITH (NOLOCK) where StatusID = 1 and AccountID = @AccountID )
Begin
SET @Ret = 1
End
Remember that missing indexes and lack of maintenance can make this run slowly.
I suggest using select top 1 1 from instead of select * from as in:
Declare @Ret bit
SET @Ret = 0
IF EXISTS (Select top 1 1 from tblExclusion where StatusID = 1 and AccountID = @AccountID)
SET @Ret = 1
Return @Ret
This way you avoid fetching unneeded and possibly large columns.
Related
I have a very simple sql update statement in postgres.
UPDATE p2sa.observation SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
The observation table has 1513128 rows. The query so far has been running for around 18 hours with no end in sight.
The file_path column is not indexed, so I guess it is doing a full table scan, but the time still seems excessive. Perhaps replace is also a slow operation.
Is there some alternative or better approach for doing this one-off kind of update that affects all rows? It is essentially updating an old file path to a new location. It only needs to be updated once, or maybe again in the future.
Thanks.
In SQL Server you could do a WHILE loop to update in batches.
Try this to see how it performs.
Declare @RowsEffected int
Declare @RowsCnt int
Declare @Err int
DECLARE @MaxNumber int = (select COUNT(*) from p2sa.observation)
SELECT @RowsEffected = 0
WHILE ( @RowsEffected < @MaxNumber)
BEGIN
SET ROWCOUNT 10000 -- limit each batch; on newer versions prefer UPDATE TOP (10000)
UPDATE p2sa.observation
SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
where file_path LIKE '%path/sps%' -- only rows not yet updated
SELECT @RowsCnt = @@ROWCOUNT, @Err = @@error
IF @Err <> 0
BEGIN
Print 'Problem updating the records'
BREAK
END
ELSE
SELECT @RowsEffected = @RowsEffected + @RowsCnt
IF @RowsCnt = 0
BREAK -- nothing left to update
PRINT 'The total number of rows affected: ' + convert(varchar, @RowsEffected)
/* delay the loop for 10 secs so the transaction log can keep up */
WAITFOR DELAY '00:00:10'
END
SET ROWCOUNT 0
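The loop above is T-SQL, but the question is about Postgres. A batched equivalent there might look like the following sketch (the id primary-key column is an assumption; substitute your table's actual key):

```sql
-- Postgres: update in batches of 10000 rows, selected by primary key.
-- Assumes an "id" primary key on p2sa.observation (an assumption here).
-- Run repeatedly, committing between runs, until 0 rows are updated.
UPDATE p2sa.observation
SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
WHERE id IN (
    SELECT id
    FROM p2sa.observation
    WHERE file_path LIKE '%path/sps%'
    LIMIT 10000
);
```

Committing between batches keeps row locks and transactions short, and lets autovacuum reclaim the dead tuples each batch leaves behind.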
I have the following sql:
UPDATE Customer SET Count=1 WHERE ID=1 AND Count=0
SELECT @@ROWCOUNT
I need to know if this is guaranteed to be atomic.
If 2 users try this simultaneously, will only one succeed and get a return value of 1? Do I need to use a transaction or something else in order to guarantee this?
The goal is to get a unique 'Count' for the customer. Collisions in this system will almost never happen, so I am not concerned with the performance if a user has to query again (and again) to get a unique Count.
EDIT:
The goal is to not use a transaction if it is not needed. Also this logic is ran very infrequently (up to 100 per day), so I wanted to keep it as simple as possible.
It may depend on the SQL engine you are using, but for most the answer is yes: a single UPDATE statement is atomic. It sounds like you are implementing a kind of optimistic lock.
Using SQL Server (v 11.0.6020), this is indeed an atomic operation, as best as I can determine.
I wrote some test stored procedures to try to test this logic:
-- Attempt to update a Customer row with a new Count, returns
-- The current count (used as customer order number) and a bit
-- which determines success or failure. If #Success is 0, re-run
-- the query and try again.
CREATE PROCEDURE [dbo].[sp_TestUpdate]
(
@Count INT OUTPUT,
@Success BIT OUTPUT
)
AS
BEGIN
DECLARE @NextCount INT
SELECT @Count = Count FROM Customer WHERE ID = 1
SET @NextCount = @Count + 1
UPDATE Customer SET Count = @NextCount WHERE ID = 1 AND Count = @Count
SET @Success = @@ROWCOUNT
END
And:
-- Loop (many times) trying to get a number and insert in into another
-- table. Execute this loop concurrently in several different windows
-- using SMSS.
CREATE PROCEDURE [dbo].[sp_TestLoop]
AS
BEGIN
DECLARE @Iterations INT
DECLARE @Counter INT
DECLARE @Count INT
DECLARE @Success BIT
SET @Iterations = 40000
SET @Counter = 0
WHILE (@Counter < @Iterations)
BEGIN
SET @Counter = @Counter + 1
EXEC sp_TestUpdate @Count = @Count OUTPUT, @Success = @Success OUTPUT
IF (@Success = 1)
BEGIN
INSERT INTO TestImage (ImageNumber) VALUES (@Count)
END
END
END
This code ran, creating unique sequential ImageNumber values in the TestImage table, which demonstrates that the UPDATE above is indeed atomic. Neither procedure guaranteed that every update succeeded, but they did guarantee that no duplicates were created and no numbers were skipped.
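As a side note not in the original answer: on SQL Server, the increment and the read of the new value can also be collapsed into a single atomic statement using the OUTPUT clause, which avoids the read-then-update race entirely (this assumes the same Customer(ID, Count) table as above):

```sql
-- Atomically increment Count and return the new value in one statement.
-- No separate SELECT is needed, so no retry loop is required.
UPDATE Customer
SET Count = Count + 1
OUTPUT inserted.Count
WHERE ID = 1;
```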
I have a pending-order table with a check constraint to prevent people from ordering an item we don't have in stock. This required me to create a counter function to decide whether an insert can happen. It works until there is 1 item left in inventory; then I get a message that we are out of stock of the item. I thought it was a dirty-read issue, but even after introducing a READPAST hint I still see this behavior. Is there some other factor causing this problem? Or do I need to set up the isolation level differently?
I have tried calling this function directly with the SprokID and it returns true, which is why I think a dirty read is taking place during the insert.
ALTER TABLE [dbo].[PendingSprokOrders] WITH CHECK ADD CONSTRAINT [CK_SprokInStock] CHECK (([dbo].[SprokInStockCount]([SprokID])=(1)))
CREATE FUNCTION [dbo].[SprokInStockCount] ( @SprokId INT )
RETURNS INT
AS
BEGIN
DECLARE @Active INT
SET @Active = ( SELECT COUNT(*)
FROM [PendingSprokOrders] AS uac WITH(READPAST)
WHERE uac.SprokID = @SprokId
)
DECLARE @Total INT
SET @Total = ( SELECT
ISNULL(InStock, 0)
FROM SprokInvetory
WHERE id = @SprokId
)
DECLARE @Result INT
IF @Total - @Active > 0
SET @Result = 1
ELSE
SET @Result = 0
RETURN @Result;
END;
The math is off. Instead of:
IF @Total - @Active > 0
SET @Result = 1
ELSE
SET @Result = 0
it should be:
IF @Total - @Active > -1
SET @Result = 1
ELSE
SET @Result = 0
That's because your constraint function can see the row that you are attempting to add and is counting it.
Yes it does, but your SET @Total statements are contradictory, and there are a couple of breaks in your code.
I have this stored procedure to look up UK postcodes:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[sp_postcode_UK]
-- Add the parameters for the stored procedure here
@post_code varchar(10)
AS
DECLARE @intFlag INT
SET @intFlag = 4
WHILE (@intFlag >= 1)
BEGIN
SET NOCOUNT ON;
SELECT top 1 id, lat, lng from [postcodes].[dbo].UKREGIONS
where postcode = left(@post_code, @intFlag)
order by newid()
IF @@rowcount > 0
BREAK;
SET @intFlag = @intFlag - 1
END
GO
Basically, I have a database with the main regions and their geo positions, so a postcode of w140df will belong to w14 in the database. Sometimes it goes back to just one letter. How do I make the stored procedure stop returning blank result sets for the first couple of searches?
You can do it with (assuming you really want the longest match as it's more precise):
SELECT top 1 id,lat,lng from [postcodes].[dbo].UKREGIONS
where postcode IN (
left(@post_code,1), left(@post_code,2),
left(@post_code,3), left(@post_code,4)
)
ORDER BY LEN(postcode) DESC
without any need for looping.
You can also use LIKE, but I think it would perform worse.
SELECT top 1 id,lat,lng from [postcodes].[dbo].UKREGIONS
where @post_code LIKE postcode+'%'
ORDER BY LEN(postcode) DESC
Note the inverted order of parameter and column in the LIKE clause.
I need to run a stored procedure on a bunch of records. The code I have now iterates through the record stored in a temp table. The stored procedure returns a table of records.
I was wondering what I can do to avoid the iteration if anything.
set @counter = 1
set @empnum = null
set @lname = null
set @fname = null
-- get all punches for employees
while exists(select emp_num, lname, fname from #tt_employees where id = @counter)
begin
set @empnum = 0
select @empnum = emp_num, @lname = lname, @fname = fname from #tt_employees where id = @counter
INSERT #tt_hrs
exec PCT_GetEmpTimeSp
@empnum
,@d_start_dt
,@d_end_dt
,@pMode = 0
,@pLunchMode = 3
,@pShowdetail = 0
,@pGetAll = 1
set @counter = @counter + 1
end
One way to avoid this kind of iteration is to analyze the code within the stored procedure and revise it so that, rather than processing one set of inputs at a time, it processes all sets of inputs at once. Often enough, this is not possible, which is why iteration loops are not all that uncommon.
A possible alternative is to use APPLY functionality (cross apply, outer apply). To do this, you'd rewrite the procedure as one of the table-type functions, and work that function into the query something like so:
INSERT #tt_hrs
select [columnList]
from #tt_employees
cross apply dbo.PCT_GetEmpTimeFunc(emp_num, @d_start_dt, @d_end_dt, 0, 3, 0, 1)
(It was not clear where all your inputs to the procedure were coming from.)
Note that you still are iterating over calls to the function, but now it's "packed" into one query.
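A minimal skeleton of such a rewrite might look like the following (this is a sketch: PCT_GetEmpTimeFunc is the hypothetical function name used above, and the body is a placeholder, since the original procedure's query was not shown):

```sql
-- Hypothetical inline table-valued function wrapping the procedure's logic.
-- Parameter names mirror the procedure call in the question.
CREATE FUNCTION dbo.PCT_GetEmpTimeFunc
(
    @empnum int,
    @d_start_dt datetime,
    @d_end_dt datetime,
    @pMode int,
    @pLunchMode int,
    @pShowdetail int,
    @pGetAll int
)
RETURNS TABLE
AS
RETURN
(
    -- Replace this placeholder with the SELECT from the body of
    -- PCT_GetEmpTimeSp, filtered by the parameters above.
    SELECT CAST(0 AS int) AS placeholder_column
);
```

An inline function like this lets the optimizer expand the query into the outer CROSS APPLY plan, which is usually much faster than a multi-statement function or a per-row procedure call.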
I think you are on the right track.
You can use a temp table with an identity column:
CREATE TABLE #A (ID INT IDENTITY(1,1) NOT NULL, Name VARCHAR(50))
After records are inserted in to this temp table, find the total number of records in the table.
DECLARE @TableLength INTEGER
SELECT @TableLength = MAX(ID) FROM #A
DECLARE @Index INT
SET @Index = 1
WHILE (@Index <= @TableLength)
BEGIN
-- Do your work here
SET @Index = @Index + 1
END
Similar to what you have already proposed.
An alternative way to iterate over records is to use a CURSOR, but cursors should be avoided whenever possible.