I have a feeling this is an extremely newbie question, but it's hard to find the answer, as anything to do with logging points me to SQL errors and issues. If not that, then the answer is to query the entire log and sift through it.
When I insert data into an existing table via T-SQL, how can I save or reference the query message for that specific statement? That way I can take the query message and insert the result into a log table that records how many rows were inserted, how long the statement took, and so on.
I'm using SQL Server 2008 R2 and these SQL statements are stored procedures inserting data and updating data. I want to ensure every step of the process is logged and inserted into a specific log table with details about that step of the process.
Thanks for your help on this (I'm assuming) newbie question. I'm still learning MSSQL.
DECLARE @dt DATETIME2(7), @duration INT, @rowcount INT;
SET @dt = SYSDATETIME();
INSERT dbo.foo(bar) VALUES('x');
SELECT @rowcount = @@ROWCOUNT, @duration = DATEDIFF(MICROSECOND, @dt, SYSDATETIME());
INSERT dbo.LoggingTable(duration, row_count) SELECT @duration, @rowcount;
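For completeness, dbo.LoggingTable can be as simple as the following sketch (the column names here are just illustrative; add step/description columns as you need them):

CREATE TABLE dbo.LoggingTable
(
    log_id    INT IDENTITY(1,1) PRIMARY KEY,
    duration  INT NOT NULL,       -- microseconds, per the snippet above
    row_count INT NOT NULL,
    logged_at DATETIME2(7) NOT NULL DEFAULT SYSDATETIME()
);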
In 2005 or lower, you can't get quite that precise, e.g.
DECLARE @dt DATETIME, ...
SET @dt = GETDATE();
...
... , @duration = DATEDIFF(MILLISECOND, @dt, GETDATE());
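Putting those pieces together, a pre-2008 version of the same pattern would look roughly like this (same idea, just millisecond precision):

DECLARE @dt DATETIME, @duration INT, @rowcount INT;
SET @dt = GETDATE();
INSERT dbo.foo(bar) VALUES('x');
SELECT @rowcount = @@ROWCOUNT, @duration = DATEDIFF(MILLISECOND, @dt, GETDATE());
INSERT dbo.LoggingTable(duration, row_count) SELECT @duration, @rowcount;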
I'm having a problem that I don't know how to solve. I have searched the web and found good advice, but I can't work it out.
This is the problem: I have a SQL Server instance running on my PC, and I linked one of the main servers, SRVOLD\SQLDESA, to it. I want to execute the main server's stored procedures from my PC's SQL Server instance and insert the results into a new table. I found the perfect way to do it using the following:
SELECT *
INTO Bank
FROM OPENQUERY([SRVOLD\SQLDESA],
'EXEC Bank_Database.Bank.usp_GetTDcodes 1, 5')
GO
There is an important detail about this server: its SQL Server version is 2008. Keep this in mind for later.
OK, so I managed to execute this stored procedure, but I found out something: it turns out that inside this stored procedure there's a call to another stored procedure. Check this out:
1st stored procedure:
CREATE PROCEDURE Bank.usp_GetTDcodes
(@code TINYINT = NULL, @qty TINYINT = NULL)
WITH ENCRYPTION
AS
DECLARE @@msg VARCHAR(100)
DECLARE @@OK INT
DECLARE @@today CHAR(30)
SELECT @@today = CONVERT(VARCHAR(30), GETDATE(), 112) + ' ' +
CONVERT(VARCHAR(30), GETDATE(), 8)
SELECT bnk_code, bnk_descr
FROM CODBNK
WHERE bnk_code < 50
EXECUTE @@OK = Bank.usp_WriteEvent @qty, @code, @@today, 500
IF @@OK <> 0
RETURN @@OK
RETURN 0
GO
Now let's look inside the 2nd stored procedure:
CREATE PROCEDURE Bank.usp_WriteEvent
(@code TINYINT = NULL,
@qty TINYINT = NULL,
@date DATETIME = NULL,
@number SMALLINT = NULL,
@ideve INT = 0 OUTPUT)
WITH ENCRYPTION
AS
DECLARE @@sdate VARCHAR(30)
DECLARE @@ret SMALLINT
INSERT INTO Event (eve_code, eve_qty, eve_date, eve_number)
VALUES (@code, @qty, @date, @number)
SET @@ret = @@error
IF @@ret = 0
BEGIN
SELECT @ideve = @@IDENTITY
SELECT @@sdate = CONVERT(VARCHAR(30), @date, 112) + ' ' +
CONVERT(VARCHAR(30), @date, 8)
END
ELSE
RETURN @@ret
GO
When I executed the 1st stored procedure, I was able to insert its result into a new table, but I was also hoping to find a new row inserted in the Event table, because that is the expected result of executing the 2nd stored procedure.
So I started to search online and managed to achieve this by doing the following:
SELECT *
INTO Bank
FROM OPENQUERY([SRVTEST\SQLDESA],
'SET FMTONLY OFF;SET NOCOUNT ON;EXEC Bank_Database.Bank.usp_GetTDcodes 1, 5')
GO
So, the SET FMTONLY OFF;SET NOCOUNT ON worked and I was happy. But something happened...
I needed to execute the same stored procedure, but this time against a new linked server, SRVNEW\SQLDESA. That server's version is 2012, and the previous solution didn't work there. I kept trying different approaches; there's just one way I could make it work, which is the following:
EXEC [SRVNEW\SQLDESA].[Bank_Database].Bank.usp_GetTDcodes 1,5
But it doesn't work for me, because I need the 1st stored procedure's result in a new table, and I don't know its schema, which is why SELECT ... INTO works best for me.
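For what it's worth, one idea I've been considering, though I haven't verified it (and I believe INSERT ... EXEC against a linked server may require distributed transactions/MSDTC to be configured), is to create the table's shape from the old 2008 server and then fill it from the new one:

SELECT *
INTO Bank
FROM OPENQUERY([SRVOLD\SQLDESA],
'SET FMTONLY OFF;SET NOCOUNT ON;EXEC Bank_Database.Bank.usp_GetTDcodes 1, 5')
WHERE 1 = 0 -- keep the columns, skip the rows

INSERT INTO Bank
EXEC [SRVNEW\SQLDESA].[Bank_Database].Bank.usp_GetTDcodes 1, 5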
I don't know what else I can do. Maybe it's OPENQUERY that doesn't work? Do I need to change something else?
PS: I also tried using OPENROWSET; that didn't work either.
Thanks in advance, and have a nice day!
Peace!
Some references: http://www.sommarskog.se/share_data.html#OPENQUERY
I have a procedure which inserts data from one table into another, and one time it takes, for example, 5 minutes and the next time, for example, 15 minutes.
I want to write code that creates an entry in my log table when the procedure takes more than 10 minutes. Is there any function or timer in MS SQL that I can use?
Add the following lines into your SP and it should work:
ALTER PROCEDURE YourSP
AS
BEGIN
DECLARE @StartTime AS DATETIME = GETDATE();
... <Your current lines>
IF DATEDIFF(mi, @StartTime, GETDATE()) > 10
INSERT INTO LogTable (<YourFields>, MinutesSpent)
VALUES (<YourValues>, DATEDIFF(mi, @StartTime, GETDATE()))
END
Why would you only log particular calls to the stored procedure? You should log all calls and filter out the ones that you want. This week you might be interested in timings longer than 10 minutes. Next week, the data might grow and it might be 12 minutes.
Or you might change the code to make it more efficient, and it should finish in 2 minutes.
If you are only interested in timing, I would write a rather generic log table, something like this:
create table spTimeLog (
procedureName varchar(255),
startDateTime datetime,
endDateTime datetime,
createdAt datetime default getdate()
);
create procedure usp_proc . . .
begin
declare @StartTime datetime = getdate();
. . .
insert into spTimeLog (procedureName, startDateTime, endDateTime)
values ('usp_proc', @StartTime, getdate());
end;
Then you can get the information you want when you query the table:
select count(*)
from spTimeLog tl
where tl.procedureName = 'usp_proc' and
endDateTime > dateadd(minute, 10, startDateTime);
In general, when I write stored procedures for a real application, the stored procedures generate audit logs when they enter and exit -- both successfully and when they fail.
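To illustrate that pattern, here is a rough sketch (the table and column names are just placeholders) of a procedure that logs entry, successful exit, and failure:

create table spAuditLog (
    procedureName varchar(255),
    eventType varchar(20),        -- 'enter', 'success' or 'error'
    errorMessage nvarchar(4000) null,
    createdAt datetime default getdate()
);

create procedure usp_proc_audited
as
begin
    insert into spAuditLog (procedureName, eventType) values ('usp_proc_audited', 'enter');

    begin try
        -- ... the real work goes here ...
        insert into spAuditLog (procedureName, eventType) values ('usp_proc_audited', 'success');
    end try
    begin catch
        insert into spAuditLog (procedureName, eventType, errorMessage)
        values ('usp_proc_audited', 'error', ERROR_MESSAGE());
        -- optionally re-raise the error here
    end catch
end;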
You can try it this way:
declare @start datetime = getdate()
-- your SQL statements
exec dbo.MyStoredProcedure
declare @executionTimeInMilliseconds int = datediff(ms, @start, getdate())
I have a task to test stored procedure performance in SQL Server. My goal is to report the average time and standard deviation of the stored procedure's execution time to the stakeholders. Realistic data input is a must here :)
My question: as I was trying to stage the test realistically, I created a simple script that supposedly should measure the time it takes to run the stored procedure:
DECLARE @ValidCharacters varchar(20),
@DataLength tinyint, @LocalPart smallint, @DomainPart smallint
SET @ValidCharacters = 'abcdefghijklmnopqrstuvwxy'
SET @DataLength = DATALENGTH (@ValidCharacters) - 1
CREATE TABLE #LocalTempTable(EmailID int PRIMARY KEY IDENTITY(1,1), email varchar(30));
CREATE TABLE #LocalTempTableTimesOfInserting(TimesOfInsertingID int PRIMARY KEY IDENTITY(1,1), TimesOfInserting int);
DECLARE @counter int, @boundary int, @email varchar(25), @start DateTime, @end DateTime
SET @counter=0
SET @boundary=25
WHILE (@counter < @boundary)
BEGIN
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;
SET @email = SUBSTRING(@ValidCharacters, ABS(CHECKSUM(NewId())) % @DataLength + 1, ABS(CHECKSUM(NewId())) % @DataLength + 1) +
'@' + SUBSTRING(@ValidCharacters, ABS(CHECKSUM(NewId())) % @DataLength + 1, ABS(CHECKSUM(NewId())) % @DataLength + 1) + '.com'
SET @start = SYSDATETIME()
INSERT INTO #LocalTempTable VALUES (@email);
SET @end = SYSDATETIME()
INSERT INTO #LocalTempTableTimesOfInserting
VALUES (DATEDIFF(ns, @start, @end));
SELECT DATEDIFF(ms, @start, @end)
SET @counter = @counter + 1;
END
You can see that I'm doing a micro-benchmark on an insert and recording the results to a local temporary table (my idea was to export it later to Excel, do my calculations there, and share them with colleagues) :)
My questions:
Do you have any advice on how to improve the performance test? The real stored procedure is much heavier than the one in the example (I have read many posts and tried tools like SQLQueryStress; I'm really interested in doing the test this way, mainly for the interesting questions I get here);
Why do I get useless results in this case, like the ones below (measuring in nanoseconds)? Maybe, since the operations are so simple and fast, I should expect this kind of fluctuating, unstable result? Or maybe this is the result of SQL Server returning a cached result (even though I'm using DBCC; how do I turn caching off, then?). Another explanation could be threading and parallel execution (different threads executing the time functions in parallel, which would explain the zeros):
To answer your second question: like you said, it's probably the result of query execution plan caching.
Execution Plan Caching and Reuse
When you manually run the sp and it runs quickly, you can try executing another query and then running your sp test again. If it runs more slowly the second time around, your quick speeds are probably due to the caching.
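If you want to verify whether the plan is actually being reused, one option is to look at the plan cache DMVs, along these lines (a sketch; adjust the filter to match your procedure or batch text):

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourProcedureName%';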
I have a stored procedure which updates a table based on a calculation: (CalendarDate) - (current system date/time). It stores the result in a column (TimeSpent) and displays the value in HH:MM:SS:Msec format.
The query is working fine, but I want to change it so that the time spent is in HH:MM:SS format only. Please help me figure out how to remove the milliseconds from the time spent.
CREATE procedure St_Proc_UpdateTimeSpent
@timeEntryID int,
@status int output
as begin
set nocount on;
declare @Date dateTime;
set @Date=GETDATE();
update Production set TimeSpent=(SELECT CONVERT(VARCHAR(20),DateAdd(SS,Datediff(ss,CalendarDate, @Date)%(60*60*24),0),114)),
IsTaskCompleted=1
where productionTimeEntryID=@timeEntryID
set @status=1;
return @status;
end
You can just use style 108 instead of 114 in the CONVERT function to get only the hh:mm:ss:
CREATE PROCEDURE dbo.St_Proc_UpdateTimeSpent
@timeEntryID int,
@status int output
AS BEGIN
SET NOCOUNT ON;
DECLARE @Date DATETIME;
SET @Date = GETDATE();
UPDATE dbo.Production
SET TimeSpent = CONVERT(VARCHAR(20), DATEADD(SS, DATEDIFF(ss, CalendarDate, @Date)%(60*60*24),0), 108),
IsTaskCompleted = 1
WHERE
productionTimeEntryID = @timeEntryID
SET @status = 1;
RETURN @status;
END
See the excellent MSDN documentation on CAST and CONVERT for a comprehensive list of all supported styles when converting DATETIME to VARCHAR (and back)
BTW: SQL Server 2008 also introduced a TIME datatype which would probably be a better fit than a VARCHAR to store your TimeSpent values ... check it out!
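For example (just a sketch, assuming you change the TimeSpent column to time(0) and stay inside the same procedure), the UPDATE could become:

UPDATE dbo.Production
SET TimeSpent = CONVERT(time(0), DATEADD(SS, DATEDIFF(ss, CalendarDate, @Date) % (60*60*24), 0)),
    IsTaskCompleted = 1
WHERE productionTimeEntryID = @timeEntryID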
The following stored proc has been written some time ago and now requires modification.
Unable to contact the original developer, I had a look. To me this proc seems over-complicated. Couldn't it be done with a straightforward UPDATE? Can anyone justify the use of CURSOR here?
ALTER PROCEDURE [settle_Stage1]
@settleBatch int
AS
DECLARE @refDate datetime;
DECLARE @dd int;
DECLARE @uid int;
DECLARE trans_cursor CURSOR FOR
SELECT uid, refDate FROM tblTransactions WHERE (settle IS NULL ) AND (state IN ( 21, 31, 98, 99 ))
OPEN trans_cursor
FETCH FROM trans_cursor INTO @uid, @refDate
WHILE @@FETCH_STATUS = 0
BEGIN
SET @dd = DATEDIFF( day, @refDate, getDate())
IF ( @dd >= '1' )
BEGIN
UPDATE tblTransactions
SET settle = @settleBatch WHERE uid = @uid
END
FETCH FROM trans_cursor INTO @uid, @refDate
END
CLOSE trans_cursor
DEALLOCATE trans_cursor
You are right - this looks like "procedural SQL", from someone who probably doesn't get SQL and set operations.
And converting this to a set based query should help performance.
A cursor is not needed and is indeed over complicating the stored procedure.
If there are triggers involved that would blow up on multiple updated rows, then you would want to iterate. But that would still not justify using an actual CURSOR.
Doing single-row updates takes row locks rather than the page or table locks that a set-based update could take. Since the transactions are smaller, the programmer may have been attempting to avoid deadlocks caused by a large update.
NOTE: I am not advocating this method, I am only suggesting reasons.
Simply looking at it, I don't see any reason at all why this isn't done in a single UPDATE. Maybe (and it's a maaaaaybe) if there are too many records to update, then that could be a reason. In any case, I would simply replace it with:
UPDATE tblTransactions
SET settle = @settleBatch
WHERE settle IS NULL
AND [state] IN (21, 31, 98, 99)
AND DATEDIFF( day, refDate, getDate()) >= 1
Edited following @Martin Smith's comment.
If running one record at a time is too slow, and a single update causes blocking and too much growth of the transaction log, the third alternative is batch processing: use a set-based query, but run it in a loop of, say, 1000 records at a time (you may have to experiment to find the optimum batch size), as in the sketch below.
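A rough sketch of that batching idea (assuming it runs inside the procedure, where @settleBatch is defined; the batch size is something you would tune):

DECLARE @batchSize int = 1000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batchSize) tblTransactions
    SET settle = @settleBatch
    WHERE settle IS NULL
      AND [state] IN (21, 31, 98, 99)
      AND DATEDIFF(day, refDate, getDate()) >= 1;

    IF @@ROWCOUNT < @batchSize BREAK;
END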