SQL Maximum Recursion Causes Failure

I can't remember where I got this function (probably Stack Overflow, so apologies to the original poster), but I have been using it for quite some time with perfect results. Today, however, I ran into a problem. I am running it through SSRS with a list of around 1,500 items. Naturally someone is going to click "ALL" even though they never really want all of them, and when they do, it errors out:
An error occurred during local report processing. An error has
occurred during report processing. Cannot read the next data row for
the data set Accounts. The statement terminated. The maximum recursion
100 has been exhausted before statement completion.
Here is the code. Is this a SQL thing or a code thing?
ALTER FUNCTION [dbo].[ParseCSV] (@CSV_STR VARCHAR(8000), @Delimiter VARCHAR(20))
RETURNS @splittable TABLE (ID INT IDENTITY(1,1), CSVvalues VARCHAR(256))
AS
BEGIN
    -- Check for NULL string or empty string
    IF (LEN(@CSV_STR) < 1 OR @CSV_STR IS NULL)
    BEGIN
        RETURN
    END

    ;WITH csvtbl(i, j)
    AS
    (
        SELECT i = 1, j = CHARINDEX(@Delimiter, @CSV_STR + @Delimiter)
        UNION ALL
        SELECT i = j + 1, j = CHARINDEX(@Delimiter, @CSV_STR + @Delimiter, j + 1)
        FROM csvtbl
        WHERE CHARINDEX(@Delimiter, @CSV_STR + @Delimiter, j + 1) <> 0
    )
    INSERT INTO @splittable (CSVvalues)
    SELECT LTRIM(RTRIM(SUBSTRING(@CSV_STR, i, j - i)))
    FROM csvtbl

    RETURN
END

Save everyone some time. It's not exactly a duplicate, but the same response would fix it. It is a SQL thing:
The maximum recursion 100 has been exhausted before statement completion error showing in SQL Query

The default maximum number of recursion levels for a recursive CTE is 100. You need to add OPTION (MAXRECURSION n) to your SELECT statement, where n is anything between 0 and 32767, and 0 means no limit. Obviously, be careful setting it to 0, as you could potentially get an infinite loop.
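As a minimal sketch of where the hint goes (the CTE here is illustrative, not the one from the question):

```sql
-- Recursive CTE that counts to 1500; without the hint this would
-- fail with the "maximum recursion 100" error.
WITH nums (n) AS
(
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM nums WHERE n < 1500
)
SELECT n
FROM nums
OPTION (MAXRECURSION 0);  -- 0 = unlimited; prefer a finite cap in production
```

Note that the OPTION clause is not allowed inside a user-defined function body, so for a multi-statement function like ParseCSV above the hint cannot simply be added to the INSERT; you would need to cap the input size or rewrite the split without recursion.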

Related

Same query, different result after removing USE DatabaseName GO

My function GetProductDesc (when called) returns a different result after commenting out USE DatabaseName GO. I don't even know where to start debugging this. The pictures tell the story: I had to blur out a lot, but you can see that the results are clearly different. Keep in mind that the pictures are not the function code; they are calling the function GetProductDesc.
So strange. Any suggestions? I have an expert helping me later today but I had to share.
EDIT:
The function uses another lookup table in the same database. There is no Top or Order By clause. It calculates the product description based on the input components (numbers). It will return a different result if the input numbers are different, but here the input numbers are the same!
The function has been in place and working for over 5 years. I believe the problem started at about the time the version of SQL Server was updated recently.
EDIT 2 with partial answer:
The problem is caused by @@ROWCOUNT. It appears to be a breaking change introduced by our recent migration to SQL Server 2019, although I haven't found it documented. The function returns a different product description based on @@ROWCOUNT following a SELECT statement. Internally the function does something like this:
SELECT Fields FROM Table WHERE Field = @Variable
IF @@ROWCOUNT = 1
    RETURN ProdDesc1
ELSE
    RETURN ProdDesc2
After the SQL Server migration, @@ROWCOUNT here was different depending on whether
USE DatabaseName
GO
was present.
The solution was to replace @@ROWCOUNT with a variable @RowCount. This new code works:
DECLARE @RowCount INT = 0
SELECT Fields, @RowCount = @RowCount + 1
FROM Table WHERE Field = @Variable
IF @RowCount = 1
    RETURN ProdDesc1
ELSE
    RETURN ProdDesc2
If you have SQL Server 2019 installed try this to recreate the problem:
USE Master
GO
SELECT @@ROWCOUNT
The result here is @@ROWCOUNT = 0.
Now comment out the two top lines:
--USE Master
--GO
SELECT @@ROWCOUNT
The result is now @@ROWCOUNT = 1.
Anybody know why?
There is a SQL Server 2019 cumulative update from Microsoft that fixes this problem.

Stored Procedure for batch delete in Firebird

I need to delete a bunch of records (literally millions), but I don't want to do it in a single statement because of performance issues. So I created a view:
CREATE VIEW V1
AS
SELECT FIRST 500000 *
FROM TABLE
WHERE W_ID = 14
After that I do batch deletes, for example:
DELETE FROM V1 WHERE TS < '2021-01-01'
What I want is to put this logic in a WHILE loop inside a stored procedure. I tried a SELECT COUNT query like this:
SELECT COUNT(*)
FROM TABLE
WHERE W_ID = 14 AND TS < '2021-01-01';
Can I use this number in the same procedure as a condition and how can I manage that?
This is what I have tried and I get an error
ERROR: Dynamic SQL Error; SQL error code = -104; Token unknown; WHILE
Code:
CREATE PROCEDURE DeleteBatch
AS
DECLARE VARIABLE CNT INT;
BEGIN
SELECT COUNT(*) FROM TABLE WHERE W_ID = 14 AND TS < 2021-01-01 INTO :cnt;
WHILE cnt > 0 do
BEGIN
IF (cnt > 0) THEN
DELETE FROM V1 WHERE TS < 2021-01-01;
END
ELSE break;
END
I just can't wrap my head around this.
To clarify: in my previous question I wanted to know how to manage garbage collection after many deleted records, and I did what was suggested (SELECT * FROM TABLE; or gfix -sweep), and that worked very well. As mentioned in the comments, the correct statement is SELECT COUNT(*) FROM TABLE;
After that, another even bigger database was given to me - over 50 million records. The problem was that the DB was very slow to operate with, and I managed to get the server it was on killed with a DELETE statement meant to clean the database.
That's why I wanted to try deleting in batches. The slow-down problem there turned out to be purely hardware - the HDD had gone, and we replaced it. After that there was no problem with executing statements and doing a backup and restore to reclaim disk space.
Provided the data that you need to delete doesn't ever need to be rolled back once the stored procedure is kicked off, there is another way to handle massive DELETEs in a stored procedure.
The example stored procedure deletes the rows 500,000 at a time, looping until there aren't any more rows to delete. The AUTONOMOUS TRANSACTION block puts each DELETE statement in its own transaction, which commits immediately after the statement completes. This effectively issues an implicit commit inside a stored procedure, which you normally can't do.
CREATE OR ALTER PROCEDURE DELETE_TABLEXYZ_ROWS
AS
DECLARE VARIABLE RC INTEGER;
BEGIN
    RC = 9999;
    WHILE (RC > 0) DO
    BEGIN
        IN AUTONOMOUS TRANSACTION DO
        BEGIN
            DELETE FROM TABLEXYZ ROWS 500000;
            RC = ROW_COUNT;
        END
    END

    /* full scan to trigger garbage collection on the deleted rows */
    SELECT COUNT(*)
    FROM TABLEXYZ
    INTO :RC;
END
because of performance issues
What are those, exactly? I do not think you are actually improving performance by just running the delete in loops within the same transaction, or even in different transactions but within the same timespan. You seem to be solving the wrong problem. The issue is not how you create "garbage", but how and when Firebird collects it.
For example, SELECT COUNT(*) in the InterBase/Firebird engines means a natural scan over the whole table, and garbage collection is often triggered by it, which can itself take a long time if a lot of garbage was created (and a massive delete surely creates it, no matter whether it is done by one million-row statement or a million one-row statements).
How to delete large data from Firebird SQL database
If you really want to slow deletion down, you have to spread that activity around the clock and make your client application call a deleting SP, for example, once every 15 minutes. You would have to add a column to the table flagging rows marked for deletion, and then do the job like this:
CREATE PROCEDURE DeleteBatch(CNT INT)
AS
DECLARE VARIABLE ROW_ID INTEGER;
BEGIN
    FOR SELECT ID FROM TABLENAME WHERE MARKED_TO_DEL > 0 INTO :ROW_ID
    DO BEGIN
        CNT = CNT - 1;
        DELETE FROM TABLENAME WHERE ID = :ROW_ID;
        IF (CNT <= 0) THEN LEAVE;
    END
    SELECT COUNT(1) FROM TABLENAME INTO :ROW_ID; /* force GC now */
END
...and every 15 minutes you do EXECUTE PROCEDURE DeleteBatch(1000).
Overall this probably would only be slower, because of single-row "precision targeting" - but at least it would spread the delays.
Use DELETE...ROWS.
https://firebirdsql.org/file/documentation/html/en/refdocs/fblangref25/firebird-25-language-reference.html#fblangref25-dml-delete-orderby
But as I already said in the answer to the previous question, it is better to spend time investigating the source of the slowdown instead of working around it by deleting data.

Why does my stored procedure execute all selects regardless of condition logic?

I'm using SQL Server 2008.
I have an interesting scenario where a stored procedure (written by a "power user") has an okay runtime (around 4 seconds) if there's data in the primary table. If the search value doesn't exist, the runtime averages out at about 3 minutes. Because of how the process works, and the web application that uses the procedure, it must return an empty result set in the case of no data.
I've tested the logic below with values that have data and values that don't, and the flow seems to work; however, when I put my actual query in the ELSE branch, it seems like that part is always evaluated, despite my knowing that branch shouldn't execute.
DECLARE @spId int
SELECT @spId = td.mainId
FROM dbo.PRIMARYTABLE td
WHERE td.longId = @searchVal

IF @spId < 1 OR @spId IS NULL
BEGIN
    SELECT 'RETURN EMPTY RESULT SET' AS test
END
ELSE
BEGIN
    SELECT 'DO ACTUAL QUERY' AS test
END
When I test this with a dummy value, such as 1111, the SELECT 'RETURN EMPTY RESULT SET' AS test is returned. When I use a value that I know exists, the SELECT 'DO ACTUAL QUERY' AS test is returned. When I replace SELECT 'DO ACTUAL QUERY' AS test with the actual heavy-duty query and use the same non-existent dummy value, it still looks like the ELSE clause is reached.
What am I missing here?
Perhaps you are not showing everything. There is a counter-intuitive thing about assignment in a SELECT that returns no rows: the value of the variable will not be cleared. Paste this into SSMS:
declare @searchVal as int
set @searchVal = 111

DECLARE @spId int
set @spId = 2134

SELECT @spId = td.mainId
FROM (select 839 as mainId, 0 as longId) td
where td.longId = @searchVal

print @spId
@spId will be 2134. This is why you should always test using @@ROWCOUNT; in your case:
IF @@ROWCOUNT = 0 OR @spId < 1 OR @spId IS NULL
BEGIN
    SELECT 'RETURN EMPTY RESULT SET' AS test
END
ELSE
BEGIN
    SELECT 'DO ACTUAL QUERY' AS test
END
There is also the possibility of data duplicated by longId, returning a random mainId from the rows that satisfy the @searchVal condition.
Other than that, I would not know.
Thank you all for your suggestions. I apologize for not posting the entire stored procedure, but I'm not allowed to share that exact code. The snippet I began with was pseudo code (well, real code with tables and fields renamed).
I think Nikola Markovinović may be onto something with his answer and article link. This entire ordeal has been sort of maddening. I googled, debugged, and did the whole thing again, then searched on Stack Overflow. After a few changes from your suggestions, the procedure magically started responding with the runtime I thought it should. I don't think some of the initial changes took, or maybe they weren't being cached by SQL Server correctly; I've got nothing but guesses.
It's very strange because, for a good hour or more, it was running as if it had never been changed (performance-wise)... then it just kicked into gear. I wonder if this isn't my fault and maybe I didn't alter the one on Staging like I did the one on Test... that seems the most feasible explanation.
Anyhow, thank you for your suggestions. I've learned a few things so that's always good.

getting number of records updated or inserted in sql server stored procedure

I have an SP that inserts some records, updates others, and deletes some. What I want is to return the counts of what was inserted, updated, and deleted. I thought I could use @@ROWCOUNT, but that is always giving me a 1.
After my INSERT I run:
PRINT @@ROWCOUNT
But my message console shows what really happened and this number:
(36 row(s) affected)
1
So I can see that 36 records were actually updated, but @@ROWCOUNT returned a 1.
I am trying to do the same thing after the UPDATE and DELETE parts of the SP runs with the same result.
@@ROWCOUNT shows the number of rows affected by the most recent statement; if you have any statements between the INSERT and the PRINT, it will give you the wrong number.
Can you show us a little more code so we can see the order of execution?
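As a sketch of that advice (table and column names here are hypothetical, not from the original procedure), capture @@ROWCOUNT into a variable immediately after each statement, before anything else can reset it:

```sql
DECLARE @Inserted INT, @Updated INT, @Deleted INT;

INSERT INTO dbo.Target (Col1)
SELECT Col1 FROM dbo.Source;
SET @Inserted = @@ROWCOUNT;   -- must be the very next statement after the INSERT

UPDATE dbo.Target SET Col1 = Col1 + 1 WHERE Col1 < 10;
SET @Updated = @@ROWCOUNT;

DELETE FROM dbo.Target WHERE Col1 > 100;
SET @Deleted = @@ROWCOUNT;

-- Return all three counts from the procedure
SELECT @Inserted AS Inserted, @Updated AS Updated, @Deleted AS Deleted;
```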
Depending on how @ninesided's answer works for you, you could also use the OUTPUT clause on each UPDATE/INSERT/DELETE and get the counts from there.
Example:
declare @count table
(
    id int
)

update mytable
set oldVal = newVal
output inserted.field1 into @count

select count(*) from @count
You could reuse the count table throughout, and set variables as needed to hold the values.

The object name 'FacetsXrefStaging.Facets.Facets.FacetsXrefImport' contains more than the maximum number of prefixes. The maximum is 2

Hi, I have created a proc which truncates the tables and reseeds their identity values, but I am getting the error: The object name 'FacetsXrefStaging.Facets.Facets.FacetsXrefImport' contains more than the maximum number of prefixes. The maximum is 2.
Create proc TruncateAndReseedFacetsXrefStagingTables
'
'
Declare variables
'
'
SET @iSeed = ( SELECT CASE WHEN MAX(FacetsXrefId) IS NULL
                           THEN -2147483648
                           ELSE MAX(FacetsXrefId) + 1
                      END
               FROM FacetsXref.Facets.Facets.FacetsXrefCertified
             )

TRUNCATE TABLE FacetsXrefStaging.Facets.Facets.FacetsXrefImport

DBCC CHECKIDENT ('FacetsXrefStaging.Facets.FacetsXrefImport', RESEED, @iSeed)

TRUNCATE TABLE FacetsXrefStaging.Facets.FacetsXrefImport
Can anybody help me with that? I am using SQL Server 2005.
I am actually having the problem that the OP had, and there's no typo involved in my situation. :-)
This is a table that exists on a different server from the server I'm on. The servers are linked.
The queries above and below the TRUNCATE statement work just fine.
The TRUNCATE does not work.
...Anonymized to protect the innocent...
select count(*) as mc from servername.databasename.dbo.tablename -- works
truncate TABLE [servername].[databasename].[dbo].[tablename] -- error
select count(*) as mc from servername.databasename.dbo.tablename -- works
Error message:
The object name 'servername.databasename.dbo.'
contains more than the maximum number of prefixes. The maximum is 2.
Yes, the TRUNCATE is commented out in the image; I noticed that only after I did all the blur effects and wasn't going to go back and re-make it, sorry :-( Ignore the BEGIN/END TRAN and the comment markers; the TRUNCATE does not work - see the error above.
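One workaround (a sketch, not from this thread; names are placeholders): since TRUNCATE TABLE only accepts up to a two-part prefix, run the statement on the remote server itself via EXEC ... AT, which requires the linked server to have the "RPC Out" option enabled:

```sql
-- Runs the TRUNCATE in the remote server's own context,
-- so the table name needs only database.schema.table.
EXEC ('TRUNCATE TABLE databasename.dbo.tablename') AT [servername];
```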