I am using JMeter to load test a SQL Server database, and I tried the following (in a single JDBC Sampler):
IF NOT OBJECT_ID('tempdb.dbo.#tmp') IS NULL
BEGIN DROP TABLE #tmp END
CREATE TABLE #tmp ( entity numeric(4, 0) NOT NULL, entityCode numeric(18, 0) NOT NULL, entityTraceabilityDeclared numeric(18, 0) NOT NULL )
CREATE NONCLUSTERED INDEX IX_#tmp_entityCode_entity ON #tmp ( entityCode, entity )
{CALL entityTraceability_UpdatePieceLastMovement }
But I got an error saying "Invalid object name '#tmp'".
If I do
SELECT * FROM #tmp
instead of calling the stored procedure, it works.
Thanks.
You are creating a temporary table by putting # at the front of the name. This means it will only be available in the exact session it was created in. It will not be visible to any sessions you create outside of JMeter, or in fact to any sessions in JMeter other than the one that issued the CREATE (in case you are running multiple threads).
I noticed you are using .dbo. in tempdb.dbo.#tmp, which could be an issue if dbo is not the default schema. You might want to use tempdb..#tmp instead, since temp tables go there.
Also, you could print out @@SPID to make sure that you are actually doing things in the session you think you are. Just a thought.
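For example, a quick sanity check (a minimal sketch; run it inside the same JDBC Sampler as the CREATE TABLE and the call):
-- if this value differs between the sampler that creates #tmp and the one
-- that calls the procedure, they are on different connections/sessions and
-- the temp table will not be visible to the procedure
SELECT @@SPID AS session_id;
If the SPID changes between requests, the JDBC connection pool is handing out different connections, so keep the whole sequence in one sampler or configure the pool so each thread reuses a single connection.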
QUESTION:
What approach should I use to notify one database about changes made to a table in another database? Note: I need one notification per statement-level event, and this includes the MERGE statement, which can perform an insert, update and delete in one statement.
BACKGROUND:
We're working with a third party to transfer some data from one system to another. There are two databases of interest here, one which the third party populates with normalised staging data and a second database which will be populated with the de-normalised post processed data. I've created MERGE scripts which do the heavy lifting of processing and transferral of the data from these staging tables into our shiny denormalised version, and I've written a framework which manages the data dependencies such that look-up tables are populated prior to the main data etc.
I need a reliable way to be notified of when the staging tables are updated so that my import scripts are run autonomously.
METHODS CONSIDERED:
SQL DML Triggers
I initially created a generic trigger which sends change information to the denormalised database via Service Broker. However, this trigger fires three times, once each for insert, update and delete, and thus sends three distinct messages, which causes the import process to run three times for a single data change. It should be noted that these staging tables are updated using the MERGE functionality within SQL Server, so the changes arrive in a single statement.
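For illustration, a single MERGE like the following (table and column names are hypothetical) performs all three actions in one statement, and an AFTER INSERT, UPDATE, DELETE trigger fires once for each action performed:
MERGE dbo.StagingTable AS target
USING dbo.IncomingData AS source
ON target.id = source.id
WHEN MATCHED THEN UPDATE SET value = source.value
WHEN NOT MATCHED BY TARGET THEN INSERT (id, value) VALUES (source.id, source.value)
WHEN NOT MATCHED BY SOURCE THEN DELETE;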
SQL Query Notification
This appears to be perfect for what I need; however, there doesn't appear to be any way to subscribe to notifications from within SQL Server, so this can only be used to notify of change at an application layer written in .NET. I guess I may be able to manage this via CLR integration; however, I'd still need to drive the notification down to the processing database to trigger the import process. This appears to be my best option, although it will be long-winded, difficult to debug and monitor, and probably over-complicates an otherwise simple issue.
SQL Event Notification
This would be perfect, although it doesn't appear to function for DML, regardless of what you might find in the MS documentation. The CREATE EVENT NOTIFICATION command takes a single parameter for event_type, so it can be thought of as operating at the database level. DML operates at an entity level, and there doesn't appear to be any way to target a specific entity using the defined syntax.
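For example, the syntax only accepts server- and database-scoped event types such as DDL events; there is no DML event type to target a table with (the service name below is an assumption):
-- DDL changes can be captured like this, but no equivalent exists for DML
CREATE EVENT NOTIFICATION NotifyTableChanges
ON DATABASE
FOR DDL_TABLE_EVENTS
TO SERVICE 'ChangeNotificationService', 'current database';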
SQL Change Tracking
This appears to capture changes to a database table, but at a row level, which seems too heavy-handed for what I require. I simply need to know that a change has happened; I'm not really interested in which rows or how many. Besides, I'd still need to convert this into an event to trigger the import process.
SQL Change Data Capture
This is an extension of Change Tracking that records both the change and the history of the change at the row level. This is again far too detailed, and it still leaves me with the issue of turning the captured changes into a notification of some kind so that the import process can be kicked off.
SQL Server Default Trace / Audit
This appears to require a target, which must be either a Windows Application/Security event log or a file on disk, either of which I'd struggle to monitor and hook into for changes.
ADDITIONAL
My trigger-based method would work wonderfully if only the trigger fired once. I have considered creating a table to record the first of the three DML commands, which could then be used to suspend the posting of information within the other two trigger operations; however, I'm reasonably sure that all three DML triggers (insert, update, delete) will fire in parallel, rendering this method futile.
Can anyone please advise on a suitable approach, ideally one that doesn't use a scheduled job to check for changes? Any suggestions gratefully received.
The simplest approach has been to create a secondary table to record when the trigger code is run.
CREATE TABLE [service].[SuspendTrigger]
(
[Index] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](200) NOT NULL,
[DateTime] [datetime] NOT NULL,
[SPID] [int] NOT NULL,
CONSTRAINT [pk_suspendtrigger_index] PRIMARY KEY CLUSTERED
(
[Index] ASC
) ON [PRIMARY]
) ON [PRIMARY]
Triggers run sequentially, so even when a MERGE statement is applied to an existing table, the insert, update and delete trigger code runs one after the other.
The first time we enter the trigger, we can therefore write to this suspension table to record the event and then execute whatever code needs to run.
The second time we enter the trigger, we can check whether a record already exists and thereby prevent execution of any further statements.
alter trigger [dbo].[trg_ADDRESSES]
on [dbo].[ADDRESSES]
after insert, update, delete
as
begin
set nocount on;
-- determine the trigger action - note the trigger may fire
-- when there is nothing in either the inserted or deleted table
------------------------------------------------------
declare @action as nvarchar(6) = (case when ( exists ( select top 1 1 from inserted )
and exists ( select top 1 1 from deleted )) then N'UPDATE'
when exists ( select top 1 1 from inserted ) then N'INSERT'
when exists ( select top 1 1 from deleted ) then N'DELETE'
end)
-- check for valid action
-------------------------
if @action is not null
begin
if not exists ( select *
from [service].[SuspendTrigger] as [suspend]
where [suspend].[SPID] = @@SPID
and [suspend].[DateTime] >= dateadd(millisecond, -300, getdate())
)
begin
-- insert a suspension event
-----------------------------
insert into [service].[SuspendTrigger]
(
[Name] ,
[DateTime] ,
[SPID]
)
select object_name(@@procid) as [Name] ,
getdate() as [DateTime] ,
@@SPID as [SPID]
-- determine the message content to send
----------------------------------------
declare @content xml = (
select getdate() as [datetime] ,
db_name() as [source/catelogue] ,
'dbo' as [source/schema] ,
'ADDRESSES' as [source/table] ,
(select [sessions].[session_id] as [@id] ,
[sessions].[login_time] as [login_time] ,
case when ([sessions].[total_elapsed_time] >= 864000000000) then
formatmessage('%02i DAYS %02i:%02i:%02i.%04i',
(([sessions].[total_elapsed_time] / 10000 / 1000 / 60 / 60 / 24)),
(([sessions].[total_elapsed_time] / (1000*60*60)) % 24),
(([sessions].[total_elapsed_time] / (1000*60)) % 60),
(([sessions].[total_elapsed_time] / (1000*01)) % 60),
(([sessions].[total_elapsed_time] ) % 1000))
else
formatmessage('%02i:%02i:%02i.%i',
(([sessions].[total_elapsed_time] / (1000*60*60)) % 24),
(([sessions].[total_elapsed_time] / (1000*60)) % 60),
(([sessions].[total_elapsed_time] / (1000*01)) % 60),
(([sessions].[total_elapsed_time] ) % 1000))
end as [duration] ,
[sessions].[row_count] as [row_count] ,
[sessions].[reads] as [reads] ,
[sessions].[writes] as [writes] ,
[sessions].[program_name] as [identity/program_name] ,
[sessions].[host_name] as [identity/host_name] ,
[sessions].[nt_user_name] as [identity/nt_user_name] ,
[sessions].[login_name] as [identity/login_name] ,
[sessions].[original_login_name] as [identity/original_name]
from [sys].[dm_exec_sessions] as [sessions]
where [sessions].[session_id] = @@SPID
for xml path('session'), type)
for xml path('persistence_change'), root('change_tracking'))
-- holds the current procedure name
-----------------------------------
declare @procedure_name nvarchar(200) = object_name(@@procid)
-- send a message to any remote listeners
-----------------------------------------
exec [service].[usp_post_content_via_service_broker] @MessageContentType = 'Source Data Change', @MessageContent = @content, @CallOriginator = @procedure_name
end
end
end
GO
All we need to do now is create an index on the [DateTime] field within the suspension table so that it is used during the check. I'll probably also create a job to clear down any entries older than a couple of minutes to keep the table small.
Either way, this provides a way of ensuring that only one notification is generated per table-level modification.
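For reference, a minimal sketch of the supporting index and cleanup described above (index name and retention window are assumptions):
-- keyed to match the trigger's WHERE clause (SPID equality, DateTime range)
create nonclustered index [ix_suspendtrigger_spid_datetime]
on [service].[SuspendTrigger] ([SPID], [DateTime])
go
-- run from a scheduled job every couple of minutes
delete from [service].[SuspendTrigger]
where [DateTime] < dateadd(minute, -2, getdate())
go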
If you're interested, the message contents will look something like this ...
<change_tracking>
<persistence_change>
<datetime>2016-08-01T16:08:10.880</datetime>
<source>
<catelogue>[MY DATABASE NAME]</catelogue>
<schema>dbo</schema>
<table>ADDRESSES</table>
</source>
<session id="1014">
<login_time>2016-08-01T15:03:01.993</login_time>
<duration>00:00:01.337</duration>
<row_count>1</row_count>
<reads>37</reads>
<writes>68</writes>
<identity>
<program_name>Microsoft SQL Server Management Studio - Query</program_name>
<host_name>[COMPUTER NAME]</host_name>
<nt_user_name>[MY ACCOUNT]</nt_user_name>
<login_name>[MY DOMAIN]\[MY ACCOUNT]</login_name>
<original_name>[MY DOMAIN]\[MY ACCOUNT]</original_name>
</identity>
</session>
</persistence_change>
</change_tracking>
I could send over the action that triggered the notification but I'm only really interested in the fact that some data has changed in this table.
I have a number of stored procedures that create a temporary table #TempData with various fields. Within these procedures I call a processing procedure that operates on #TempData. The temp data processing depends on the procedure's input parameters. The procedure code is:
CREATE PROCEDURE [dbo].[tempdata_proc]
@ID int,
@NeedAvg tinyint = 0
AS
BEGIN
SET NOCOUNT ON;
if @NeedAvg = 1
Update #TempData set AvgValue = 1
Update #TempData set Value = -1;
END
Then this procedure is called in an outer procedure with the following code:
USE [BN]
--GO
--DBCC FREEPROCCACHE;
GO
Create table #TempData
(
tele_time datetime
, Value float
--, AvgValue float
)
Create clustered index IXTemp on #TempData(tele_time);
insert into #TempData(tele_time, Value ) values( GETDATE(), 50 ); --sample data
declare
@ID int,
@UpdAvg int;
select
@ID = 1000,
@UpdAvg = 1
;
Exec dbo.tempdata_proc @ID, @UpdAvg;
select * from #TempData;
drop table #TempData
This code throws an error: Msg 207, Level 16, State 1, Procedure tempdata_proc, Line 8: Invalid column name "AvgValue".
But if I simply uncomment the AvgValue float declaration, everything works OK.
The question: is there any workaround that lets the stored procedure code remain the same while telling the compiler to skip the statement, because the AvgValue column will not be used given the parameters passed?
Dynamic SQL is not a welcome solution, BTW. Using an alternative table name to #TempData is also undesirable, because the existing T-SQL code would need huge modifications.
I have tried SET FMTONLY, checking tempdb.sys.columns, and try-catch wrapping, without any success.
The way stored procedures are processed is split into two parts: one part, checking for syntactical correctness, is performed at the time the stored procedure is created or altered. The remaining part of compilation is deferred until the point at which the stored procedure is executed. This is referred to as Deferred Name Resolution, and it allows a stored procedure to include references to tables (not just temp tables) that do not exist at the point the procedure is created.
Unfortunately, when it comes to the point in time that the procedure is executed, it needs to be able to compile all of the individual statements, and it's at this time that it will discover that the table exists but that the column doesn't - and so at this time, it will generate an error and refuse to run the procedure.
The T-SQL compiler is unfortunately very simplistic and doesn't take runtime control flow into account when compiling. It doesn't analyse the control flow or attempt to defer compilation in conditional paths - it just fails the compilation because the column doesn't (at this time) exist.
Unfortunately, there aren't any mechanisms built in to SQL Server to control this behaviour - this is the behaviour you get, and anything that addresses it is going to be perceived as a workaround - as evidenced already by the (valid) suggestions in the comments - the two main ways to deal with it are to use dynamic SQL or to ensure that the temp table always contains all columns required.
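A minimal demonstration of deferred name resolution (object names are hypothetical):
-- this compiles without error even though the table does not exist...
create procedure dbo.DemoDeferredResolution
as
select SomeColumn from dbo.TableThatDoesNotExist
go
-- ...but fails at execution time with "Invalid object name"
exec dbo.DemoDeferredResolution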
One way to work around your maintenance concerns, if you go down the "all uses of the temp table should have all columns" route, is to move the column definitions into a separate stored procedure that can augment the temporary table with all of the required columns - something like:
create procedure S_TT_Init
as
alter table #TT add Column1 int not null
alter table #TT add Column2 varchar(9) null
go
create procedure S_TT_Consumer
as
insert into #TT(Column1,Column2) values (9,'abc')
go
create procedure S_TT_User
as
create table #TT (tmp int null)
exec S_TT_Init
insert into #TT(Column1) values (8)
exec S_TT_Consumer
select Column1 from #TT
go
exec S_TT_User
This produces the output 8 and 9. You'd put your temp table definition in S_TT_Init, S_TT_Consumer is the inner procedure that multiple stored procedures call, and S_TT_User is an example of one such stored procedure.
Create the table with the column initially. If you're populating the temp table with stored procedure output, just make it an INT IDENTITY(1,1) so the columns line up with your output.
Then drop the column and re-add it with the appropriate data type later on in the procedure.
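A sketch of that approach using the names from the question (the placeholder trick assumes the table is populated via INSERT ... EXEC, where identity columns are excluded from column matching):
Create table #TempData
(
tele_time datetime
, Value float
, AvgValue int identity(1,1) -- placeholder so any INSERT ... EXEC output lines up
)
-- later in the procedure, swap the placeholder for the real column
alter table #TempData drop column AvgValue
alter table #TempData add AvgValue float null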
The only (or maybe the best) way I can think of, beyond dynamic SQL, is to check the database structure:
if exists (Select 1 From tempdb.sys.columns Where object_id=OBJECT_ID('tempdb.dbo.#TTT') and name = 'AvgValue')
begin
--do something AvgValue related
end
Maybe create a simple function that takes a table name and column (or only a column, if it's always #TempTable) and returns 1/0 depending on whether the column exists - that would be useful in the long run, I think:
if dbo.TempTableHasField('AvgValue')=1
begin
-- do something AvgValue related
end
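A sketch of such a helper, hard-wired to #TempData (untested, and note the edit below - this check cannot stop the failing statement from being compiled):
create function dbo.TempTableHasField (@column sysname)
returns bit
as
begin
declare @found bit = 0
if exists ( select 1
from tempdb.sys.columns
where object_id = object_id('tempdb..#TempData')
and name = @column )
set @found = 1
return @found
end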
EDIT1: Dang, you are right, sorry about that - I was sure I had ... this.... :( Let me think a bit more.
We consume a web service that decided to increase the max length of a field beyond 255. We have a legacy vendor table on our end that is still capped at 255, and we are hoping to use a trigger to address this issue temporarily until we can implement a more business-friendly solution in our next iteration.
Here's what I started with:
CREATE TRIGGER [mySchema].[TruncDescription]
ON [mySchema].[myTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [mySchema].[myTable]
SELECT SubType, type, substring(description, 1, 255)
FROM inserted
END
However, when I try to insert on myTable, I get the error:
String or binary data would be truncated. The statement has been terminated.
I tried experimenting with SET ANSI_WARNINGS OFF, which allowed the query to work but then simply didn't insert any data into the description column.
Is there any way to use a trigger to truncate the too-long data, or is there another alternative I can use until a more elegant solution can be designed? We are fairly limited in table modifications (i.e. we can't make any) because it's a vendor table, and we don't control the web service we're consuming, so we can't ask them to fix it either. Any help would be appreciated.
The error cannot be avoided because the error is happening when the inserted table is populated.
From the documentation:
http://msdn.microsoft.com/en-us/library/ms191300.aspx
"The format of the inserted and deleted tables is the same as the format of the table on which the INSTEAD OF trigger is defined. Each column in the inserted and deleted tables maps directly to a column in the base table."
The only really "clever" idea I can think of is to take advantage of schemas and the default schema used by a login. If you can get the login that the web service is using to reference another table, you can increase the column size on that table and use the INSTEAD OF INSERT trigger to perform the INSERT into the vendor table. A variation of this is to create the table in a different database and set the default database for the web service login.
(Note that CREATE TRIGGER cannot use a database-qualified name, so this is created while connected to myDB:)
CREATE TRIGGER [mySchema].[TruncDescription]
ON [mySchema].[myTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [VendorDB].[VendorSchema].[VendorTable]
SELECT SubType, type, substring(description, 1, 255)
FROM inserted
END
With this setup everything works OK for me.
Not to state the obvious, but are you sure there is data in the description field when you are testing? It is possible they changed one of the other fields you are inserting as well, and maybe one of those is throwing the error. I tested with a table like this:
CREATE TABLE [dbo].[DataPlay](
[Data] [nvarchar](255) NULL
) ON [PRIMARY]
GO
and a trigger like this
Create TRIGGER updT ON DataPlay
Instead of Insert
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [tempdb].[dbo].[DataPlay]
([Data])
(Select substring(Data, 1, 255) from inserted)
END
GO
then inserting with
Declare @d as nvarchar(max)
Select @d = REPLICATE('a', 500)
SET ANSI_WARNINGS OFF
INSERT INTO [tempdb].[dbo].[DataPlay]
([Data])
VALUES
(@d)
GO
I am unable to reproduce this issue on SQL 2008 R2 using:
Declare @table table ( fielda varchar(10) )
Insert Into @table ( fielda )
Values ( Substring('12345678901234567890', 1, 10) )
Please make sure that your field is really defined as varchar(255).
I also strongly suggest you use an Insert statement with an explicit field list. While your Insert is syntactically correct, you really should be using an explicit field list (like in my sample). The problem is when you don't specify a field list you are at the mercy of SQL and the table definition for the field order. When you do use a field list you can change the order of the fields in the table (or add new fields in the middle) and not care about your insert statements.
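Applied to the trigger from the question, that advice would look like this (column names assumed from the original post):
INSERT INTO [mySchema].[myTable] (SubType, type, description)
SELECT SubType, type, substring(description, 1, 255)
FROM inserted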
I am working with an insert trigger within a Sybase database. I know I can access @@nestlevel to determine whether I am being called directly or as a result of another trigger or procedure.
Is there any way to determine, when the nesting level is deeper than 1, who performed the action causing the trigger to fire?
For example, was the table inserted into directly, or was it inserted into by another trigger - and if so, which one?
As far as I know, this is not possible. Your best bet is to include it as a parameter to your stored procedure(s). As explained here, this will also make your code more portable, since any method used would likely rely on some database-specific call. The link there was specific to SQL Server 2005, not Sybase, but I think you're pretty much in the same boat.
I've not tested this myself, but assuming you are using Sybase ASE 15.0.3 or later, have the monitoring tables monProcessStatement and monSysStatement enabled, and have appropriate permissions set so they can be accessed from your trigger, you could try...
declare @parent_proc_id int
if @@nestlevel > 1
begin
create table #temp_parent_proc (
procId int,
nestLevel int,
contextId int
)
insert into #temp_parent_proc
select mss.ProcedureID,
mss.ProcNestLevel,
mss.ContextID
from monSysStatement mss
join monProcessStatement mps
on mss.KPID = mps.KPID
and mss.BatchID = mps.BatchID
and mss.SPID = mps.SPID
where mps.ProcedureID = @@procid
and mps.SPID = @@spid
select @parent_proc_id = (select tpp.procId
from #temp_parent_proc tpp,
#temp_parent_proc tpp2 -- self-join to find the procedure one nest level up
where tpp.nestLevel = tpp2.nestLevel - 1
and tpp.contextId < tpp2.contextId
and tpp2.procId = @@procid
and tpp2.nestLevel = @@nestlevel
group by tpp.procId, tpp.contextId
having tpp.contextId = max(tpp.contextId))
drop table #temp_parent_proc
end
The temp table is required because of the nature of monProcessStatement and monSysStatement.
monProcessStatement is transient, so if you reference it more than once, it may no longer hold the same rows.
monSysStatement is a historic table and is guaranteed to return an individual row only once to any process accessing it.
If you do not have, or do not want to set, permissions to access the monitoring tables, you could put this into a stored procedure that you pass @@procid, @@spid, and @@nestlevel to as parameters.
If this also isn't an option, since you cannot pass parameters into triggers, another possible workaround would be to use a temporary table.
In each proc that might fire this trigger...
create table #trigger_parent (proc_id int)
insert into #trigger_parent select @@procid
then in your trigger the temp table will be available...
if object_id('#trigger_parent') is not null
select @parent_proc_id = proc_id from #trigger_parent
Then you will know it was triggered from within another proc.
The trouble with this is that it doesn't 'just work' - you have to enforce the temp table setup in every proc that might fire the trigger.
You could do further checking to find cases where there is no #trigger_parent but the nesting level is greater than 1, and combine that with a query against the monitoring tables, similar to the one above, to find candidate procs that still need updating.
Brief history:
I'm writing a stored procedure to support a legacy reporting system (using SQL Server Reporting Services 2000) on a legacy web application.
In keeping with the original implementation style, each report has a dedicated stored procedure in the database that performs all the querying necessary to return a "final" dataset that can be rendered simply by the report server.
Due to the business requirements of this report, the returned dataset has an unknown number of columns (it depends on the user who executes the report, but may have 4-30 columns).
Throughout the stored procedure, I keep a column UserID to track the user's ID to perform additional querying. At the end, however, I do something like this:
UPDATE #result
SET Name = ppl.LastName + ', ' + ppl.FirstName
FROM #result r
LEFT JOIN Users u ON u.id = r.userID
LEFT JOIN People ppl ON ppl.id = u.PersonID
ALTER TABLE #result
DROP COLUMN [UserID]
SELECT * FROM #result r ORDER BY Name
Effectively I set the Name varchar column (that was previously left NULL while I was performing some pivot logic) to the desired name format in plain text.
When finished, I want to drop the UserID column as the report user shouldn't see this.
Finally, the data set returned has one column for the username, and an arbitrary number of INT columns with performance totals. For this reason, I can't simply exclude the UserID column since SQL doesn't support "SELECT * EXCEPT [UserID]" or the like.
With this known (any style pointers are appreciated but not central to this problem), here's the problem:
When I execute this stored procedure, I get an execution error:
Invalid column name 'userID'.
However, if I comment out my DROP COLUMN statement and retain the UserID, the stored procedure performs correctly.
What's going on? It certainly looks like the statements are executing out of order and it's dropping the column before I can use it to set the name strings!
[Edit 1]
I defined UserID previously (the whole stored procedure is about 200 lines of mostly irrelevant logic, so I'll paste snippets):
CREATE TABLE #result ([Name] NVARCHAR(256), [UserID] INT);
Case sensitivity isn't the problem but did point me to the right line - there was one place in which I had userID instead of UserID. Now that I fixed the case, the error message complains about UserID.
My "broken" stored procedure also works properly in SQL Server 2008 - this is either a 2000 bug or I'm severely misunderstanding how SQL Server used to work.
Thanks everyone for chiming in!
For anyone searching this in the future, I've added an extremely crude workaround to be 2000-compatible until we update our production version:
DECLARE @workaroundTableName NVARCHAR(256), @workaroundQuery NVARCHAR(2000)
SET @workaroundQuery = 'SELECT [Name]';
DECLARE cur_workaround CURSOR FOR
SELECT COLUMN_NAME FROM [tempdb].INFORMATION_SCHEMA.Columns WHERE TABLE_NAME LIKE '#result%' AND COLUMN_NAME <> 'UserID'
OPEN cur_workaround;
FETCH NEXT FROM cur_workaround INTO @workaroundTableName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @workaroundQuery = @workaroundQuery + ',[' + @workaroundTableName + ']'
FETCH NEXT FROM cur_workaround INTO @workaroundTableName
END
CLOSE cur_workaround;
DEALLOCATE cur_workaround;
SET @workaroundQuery = @workaroundQuery + ' FROM #result ORDER BY Name ASC'
EXEC(@workaroundQuery);
Thanks everyone!
A much easier solution would be to not drop the column, but simply not return it in the final select.
There are all sorts of reasons why you shouldn't be returning select * from your procedure anyway.
EDIT: I see now that you have to do it this way because of an unknown number of columns.
Based on the error message, is the database case sensitive, and so there's a difference between userID and UserID?
This works for me:
CREATE TABLE #temp_t
(
myInt int,
myUser varchar(100)
)
INSERT INTO #temp_t(myInt, myUser) VALUES(1, 'Jon1')
INSERT INTO #temp_t(myInt, myUser) VALUES(2, 'Jon2')
INSERT INTO #temp_t(myInt, myUser) VALUES(3, 'Jon3')
INSERT INTO #temp_t(myInt, myUser) VALUES(4, 'Jon4')
ALTER TABLE #temp_t
DROP Column myUser
SELECT * FROM #temp_t
DROP TABLE #temp_t
It says invalid column for you. Did you check the spelling and make sure that column even exists in your temp table?
You might try wrapping everything preceding the DROP COLUMN in a BEGIN TRANSACTION ... COMMIT block.
At compile time, SQL Server is probably expanding the * into the full list of columns. Thus, at run time, SQL Server executes "SELECT UserID, Name, LastName, FirstName, ..." instead of "SELECT *". Dynamically assembling the final SELECT into a string and then EXECing it at the end of the stored procedure may be the way to go.
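For example, a sketch of that dynamic approach on SQL Server 2005 or later (on 2000, the cursor version the asker posted above does the same job):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX)
SELECT @cols = COALESCE(@cols + ',', '') + QUOTENAME(COLUMN_NAME)
FROM tempdb.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '#result%' AND COLUMN_NAME <> 'UserID'
SET @sql = 'SELECT ' + @cols + ' FROM #result ORDER BY Name'
EXEC(@sql)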