I am trying to execute a WHILE loop in DBeaver against SQL Server, but the statement keeps loading and never finishes. I just want to print the word 'ok!' 3 times.
I'm not sure whether it's a loop problem, a DBeaver problem, or something else.
Can anyone help please?
My code:
DECLARE @cnt INT = 0;
WHILE @cnt < 3
BEGIN
PRINT 'ok!';
END;
Screenshot from DBeaver
@cnt never increments, so this loop will never finish. It never yields control back to the client, so you won't even see the first 'ok!' printed. You need to add an increment:
DECLARE @cnt INT = 0;
WHILE @cnt < 3
BEGIN
PRINT 'ok!';
SET @cnt = @cnt + 1;
END;
Anyway, 99 times out of 100, if you're writing a loop in SQL at all, you're doing something very wrong. SQL really wants to operate on whole sets at a time, not on individual items via loops.
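For instance, the loop above can be replaced with one set-based statement (a minimal sketch; note that SELECT returns a result set rather than PRINT messages):

```sql
-- Set-based equivalent: emit 'ok!' once per row of an inline three-row set
SELECT 'ok!' AS msg
FROM (VALUES (1), (2), (3)) AS t(n);
```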
Related
Is there a way to observe the value of a variable inside a running query? Let's say I have a loop that has been running for hours, and I want to see the current value of a variable that was explicitly set at the start of the loop (@start).
Yes, I can determine this by deductive reasoning, by looking at what the procedure is doing (first value inserted/updated, etc.), but I'm looking for a way to actually dig into a running query.
Try RAISERROR(...) WITH NOWAIT with a severity <= 10.
DECLARE @i INT = 0;
DECLARE @out VARCHAR(4);
WHILE @i < 100
BEGIN
SET @i = @i + 1;
SET @out = CAST(@i AS VARCHAR(4));
RAISERROR (@out, 1, 0) WITH NOWAIT;
WAITFOR DELAY '00:00:00.250'; -- wait 250 milliseconds
END
Severity levels from 0 through 18 can be specified by any user.
When RAISERROR is run with a severity of 11 or higher in a TRY block, it transfers control to the associated CATCH block. The error is returned to the caller if RAISERROR is run outside the scope of any TRY block, or with a severity of 10 or lower in a TRY block.
@@ERROR is set to 0 by default for messages with a severity from 1 through 10.
I have the following sql:
UPDATE Customer SET Count=1 WHERE ID=1 AND Count=0
SELECT @@ROWCOUNT
I need to know if this is guaranteed to be atomic.
If 2 users try this simultaneously, will only one succeed and get a return value of 1? Do I need to use a transaction or something else in order to guarantee this?
The goal is to get a unique 'Count' for the customer. Collisions in this system will almost never happen, so I am not concerned with the performance if a user has to query again (and again) to get a unique Count.
EDIT:
The goal is to not use a transaction if it is not needed. Also, this logic runs very infrequently (up to 100 times per day), so I wanted to keep it as simple as possible.
It may depend on the SQL server you are using. However, for most, the answer is yes. I guess you are implementing a lock.
Using SQL Server (v 11.0.6020), this is indeed an atomic operation, as best as I can determine.
I wrote some test stored procedures to try to test this logic:
-- Attempt to update a Customer row with a new Count. Returns
-- the current count (used as customer order number) and a bit
-- which determines success or failure. If @Success is 0, re-run
-- the query and try again.
CREATE PROCEDURE [dbo].[sp_TestUpdate]
(
@Count INT OUTPUT,
@Success BIT OUTPUT
)
AS
BEGIN
DECLARE @NextCount INT
SELECT @Count=Count FROM Customer WHERE ID=1
SET @NextCount = @Count + 1
UPDATE Customer SET Count=@NextCount WHERE ID=1 AND Count=@Count
SET @Success=@@ROWCOUNT
END
And:
-- Loop (many times) trying to get a number and insert it into another
-- table. Execute this loop concurrently in several different windows
-- using SSMS.
CREATE PROCEDURE [dbo].[sp_TestLoop]
AS
BEGIN
DECLARE @Iterations INT
DECLARE @Counter INT
DECLARE @Count INT
DECLARE @Success BIT
SET @Iterations = 40000
SET @Counter = 0
WHILE (@Counter < @Iterations)
BEGIN
SET @Counter = @Counter + 1
EXEC sp_TestUpdate @Count = @Count OUTPUT, @Success = @Success OUTPUT
IF (@Success=1)
BEGIN
INSERT INTO TestImage (ImageNumber) VALUES (@Count)
END
END
END
This code ran, creating unique sequential ImageNumber values in the TestImage table, which demonstrates that the UPDATE above is indeed atomic. The procedures don't guarantee that every update succeeds, but they do guarantee that no duplicates are created and no numbers are skipped.
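As an aside, the read-then-update round trip can be collapsed into a single atomic statement using the OUTPUT clause (a sketch, reusing the Customer table and columns from the question):

```sql
-- Increment and read back the new value in one atomic statement;
-- there is no separate SELECT, so there is no window for a race.
DECLARE @NewCount TABLE (Count INT);

UPDATE Customer
SET Count = Count + 1
OUTPUT inserted.Count INTO @NewCount
WHERE ID = 1;

SELECT Count FROM @NewCount;
```

With this shape there is no retry loop at all, since two concurrent callers simply serialize on the row and each gets a distinct Count.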
I want to optimize the insertion of millions of records in SQL server.
while @i <= 2000000 begin
set @sql = 'select @max_c = isnull(max(c), 0) from nptb_data_v7'
Execute sp_executesql @sql, N'@max_c int output', @max_c output
set @max_c = @max_c + 1
while @j <= 10 begin
set @sql_insert = @sql_insert + '(' + cast(@i as varchar) + ',' + cast(@b as varchar) + ',' + cast(@max_c as varchar) + '),'
if len(ltrim(rtrim(@sql_insert))) >= 3800
begin
set @sql_insert = SUBSTRING(@sql_insert, 1, len(ltrim(rtrim(@sql_insert))) - 1)
Execute sp_executesql @sql_insert
set @sql_insert = 'insert into dbo.nptb_data_v7 values '
end
set @b = @b + 1
if @b > 100000
begin
set @b = 1
end
set @j = @j + 1
end
set @i = @i + 1
set @j = 1
end
set @sql_insert = SUBSTRING(@sql_insert, 1, len(ltrim(rtrim(@sql_insert))) - 1)
Execute sp_executesql @sql_insert
I want to optimize the above code as it is taking hours to complete this.
There are quite a few critical things I want to hit on. First, iteratively doing just about anything (especially inserting millions of rows) in SQL is almost never the right way to go. So right off the bat, we need to look at throwing that model out the window.
Second, your approach appears to be to loop over a set of numbers millions of times and append a new set of parentheses to the VALUES clause of the insert, winding up with a ridiculously long string of tuples that looks like (1,1,2),(1,2,2)... etc. If you WERE going to do this iteratively, you'd want every loop iteration to just do an insert, rather than building an unwieldy insert string.
Third, nothing here needs dynamic SQL; not the first assignment of @max_c, nor the insertion into nptb_data_v7. You could write all of these statements statically; the dynamic SQL only a) obfuscates your code and b) opens you up to injection attacks.
With those out of the way, now we can get down to brass tacks. All this appears to be doing is creating combinations of an auto-incrementing number between 1 and 2 million and, based on some rules, values derived from @i, @b, and the current iteration.
The first thing you need here is a tally table (just a big table of integers). There are tons of ways to do this, but here's a succinct script which should get you started.
http://www.sqlservercentral.com/scripts/Advanced+SQL/62486/
Once you have this table made, your script becomes a matter of joining your newly created numbers/tally table to itself so that the rules you define are satisfied for your insert. Since it's not 100% clear what your code is trying to get at, nor can I run it from the information provided, this is where I have to leave it up to you. If you can provide a better summary of what your tables look like, your variable declarations, and your objective, I may be able to help you write some code. But hopefully this should get you started.
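As a sketch of the shape this takes (assuming a tally table dbo.Tally(N) built from the script above; the target columns and the exact derivation rules are guesses, since the table definition isn't shown):

```sql
-- One INSERT...SELECT replaces the nested loops: generate every (i, j)
-- combination as a set, then derive the remaining columns from it.
INSERT INTO dbo.nptb_data_v7
SELECT i.N,                               -- stands in for the outer counter @i
       ((i.N - 1) * 10 + j.N) % 100000 + 1, -- stand-in for the @b roll-over rule
       j.N                                -- stand-in for the @max_c-derived value
FROM dbo.Tally AS i                       -- 1 .. 2,000,000
CROSS JOIN dbo.Tally AS j                 -- 1 .. 10, like the inner loop
WHERE i.N <= 2000000
  AND j.N <= 10;
```

The point is the structure, not the exact expressions: one set-based statement the optimizer can plan, instead of two million string concatenations and dynamic executions.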
There is a SQL script with some declared variables. I want to run this script for various sets of values of these variables and see the outputs. How do I do this?
Just a note: this answer is copied from here but is a great resource for what you are asking.
More examples for set-based vs. procedural can be found here, here and here.
And here is an actual example in SQL code:
DECLARE @someFlag INT
SET @someFlag = 0
WHILE (@someFlag <= 5)
BEGIN
PRINT @someFlag
SET @someFlag = @someFlag + 1
END
GO
If you have the appropriate permissions to do it, you could set up the script as a stored procedure and then run the procedure multiple times. Reference on how to do it: http://msdn.microsoft.com/en-us/library/ms187926(v=sql.100).aspx
You don't have to make a permanent proc either; if you don't need it or want it in whatever database you're running it in, you can set it up as a temp proc instead.
So instead of CREATE PROCEDURE dbo.usp_SomeProcedure AS ....
you would do CREATE PROCEDURE #usp_SomeProcedure
Your other option is to put your script into an nvarchar(max) variable and use that along with your other variables to run sp_executesql (http://msdn.microsoft.com/en-us/library/ms188001.aspx).
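A minimal sketch of the sp_executesql route (the script body and the @x parameter here are placeholders, not from the question):

```sql
-- Hold the script in a variable, then run it with different parameter values
DECLARE @script NVARCHAR(MAX) = N'SELECT @x * 2 AS doubled;';

EXEC sp_executesql @script, N'@x INT', @x = 5;
EXEC sp_executesql @script, N'@x INT', @x = 10;
```

Because the values are passed as real parameters rather than concatenated into the string, the plan is reused across runs and there is no injection risk.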
I would use either a cursor or a WHILE loop (my preference would be the WHILE). It would be something like this:
DECLARE @i INT
SET @i = 1
WHILE (@i <= 10)
BEGIN
-- do whatever you need to do
SET @i = @i + 1
END
I am trying to run a stored procedure with a while loop in it using Aqua Data Studio 6.5, and as soon as the SP starts, Aqua Data starts consuming an increasing amount of my machine's memory, which makes absolutely no sense to me because everything should be happening on the Sybase server I am working with. I have commented out and tested every piece of the SP and narrowed the issue down to the while loop. Can anyone explain to me what is going on?
create procedure sp_check_stuff as
begin
declare
@counter numeric(9),
@max_id numeric(9),
@exists numeric(1),
@rows numeric(1)
select @max_id = max(id)
from my_table
set @counter = 0
set @exists = 0
set @rows = 0
while @counter <= @max_id
begin
-- More logic which doesn't affect memory usage, based
-- on commenting it out and running the SP
set @counter = @counter + 1
set @exists = 0
set @rows = 0
end
return
end
How many times does the while loop iterate? I suspect Aqua Data Studio is building up data structures as the query runs and for every iteration of the loop, a further block of memory is needed to catalogue the plan/stats of that iteration.
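If per-iteration chatter is what the client is buffering, one low-cost experiment (an assumption, not a confirmed diagnosis) is to suppress the row-count message that each statement sends back:

```sql
-- At the top of the procedure body: stop sending a rowcount/DONE
-- message for every statement, so the client has less to accumulate.
set nocount on
```

If memory growth stops with this in place, the culprit was the client cataloguing those per-statement results rather than the loop itself.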