I am updating a Trades Transactions Log table with a SQL stored procedure, and the same sproc also updates a Current Order table at the same time.
Because I hit a serious problem where the Log table did not update while the Current Order table did, I added a third routine at the bottom that checks whether the Log table was updated (by looking up an ID, the ClientID) and writes a row to an error table if it is not present.
So my question is: how badly written is this sproc? Any help or advice appreciated.
ALTER PROCEDURE dbo.sprocVT4_addTradeLong
@seqno varchar(35) = NULL,
@exctyp varchar(35) = NULL,
@ordstat varchar(35) = NULL,
@clid varchar(35) = NULL,
@exid varchar(35) = NULL,
@type varchar(35) = NULL,
@side varchar(35) = NULL,
@exch varchar(35) = NULL,
@sym varchar(35) = NULL,
@lstqty varchar(35) = NULL,
@lstpri varchar(35) = NULL,
@text varchar(35) = NULL,
@cumqty varchar(35) = NULL,
@lftqty varchar(35) = NULL,
@now varchar(35) = NULL
AS
BEGIN
-- NO EXISTS ------------
Declare @RC int
SELECT [Symbol] FROM TradesLongForex T WHERE T.ExecId = @exid
SELECT @RC = @@ROWCOUNT
IF @RC <= 0
INSERT INTO TradesLongForex ([SeqNo], [ExecType], [Status], [ClientId], [ExecId], [Type], [Side], [Exchange], [Symbol], [LastQty], [LastPrice], [Text], [CummQty], [LeftQty], [Date])
VALUES (@seqno, @exctyp, @ordstat, @clid, @exid, @type, @side, @exch, @sym, @lstqty, @lstpri, @text, @cumqty, @lftqty, @now)
UPDATE OrdersIdHoldForex SET [OrdExcType] = @exctyp, [OrdStatus] = @ordstat, [OrdType] = @type, [OrdSide] = @side, [OrdPrice] = @lstpri, [OrdQty] = @cumqty, [OrdRemain] = @lftqty
WHERE [Ticker] = @sym
DECLARE @RC2 int
SELECT @RC2 = @@ROWCOUNT
SELECT [ClientId] FROM TradesLongForex WHERE [ClientId] = @clid
IF @RC2 <= 0
INSERT INTO ERRLOG ([Date], [Message])
VALUES (GETDATE(), 'ERROR INSERTING TRADESLONGFOREX CLID = ' + CONVERT(varchar(10), @clid))
END
Phil makes a good point about transactions. This concept is called "atomicity" and basically means each transaction/process is atomic and self-contained.
The general syntax for transactions in SQL Server would be something like:
BEGIN TRY
BEGIN TRANSACTION
...
your code here
...
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK
... error reporting code ...
END CATCH
The gist of this is, use TRY/CATCH blocks to trap the errors, and only commit the transaction if you get through the whole TRY block without issues. Any errors send you to the CATCH block, which rolls back the open transaction.
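Applied to the posted procedure, a rough sketch might look like the following. This is untested and not a drop-in replacement: it keeps the parameter list and column names from the question, but swaps the SELECT/@@ROWCOUNT duplicate check for IF NOT EXISTS.
ALTER PROCEDURE dbo.sprocVT4_addTradeLong
    @seqno varchar(35) = NULL, @exctyp varchar(35) = NULL, @ordstat varchar(35) = NULL,
    @clid varchar(35) = NULL, @exid varchar(35) = NULL, @type varchar(35) = NULL,
    @side varchar(35) = NULL, @exch varchar(35) = NULL, @sym varchar(35) = NULL,
    @lstqty varchar(35) = NULL, @lstpri varchar(35) = NULL, @text varchar(35) = NULL,
    @cumqty varchar(35) = NULL, @lftqty varchar(35) = NULL, @now varchar(35) = NULL
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        -- Insert the trade only if this ExecId has not been logged yet
        IF NOT EXISTS (SELECT 1 FROM TradesLongForex WHERE ExecId = @exid)
            INSERT INTO TradesLongForex ([SeqNo], [ExecType], [Status], [ClientId], [ExecId], [Type], [Side],
                                         [Exchange], [Symbol], [LastQty], [LastPrice], [Text], [CummQty], [LeftQty], [Date])
            VALUES (@seqno, @exctyp, @ordstat, @clid, @exid, @type, @side,
                    @exch, @sym, @lstqty, @lstpri, @text, @cumqty, @lftqty, @now);
        -- Keep the current-order row in step with the trade log
        UPDATE OrdersIdHoldForex
        SET [OrdExcType] = @exctyp, [OrdStatus] = @ordstat, [OrdType] = @type, [OrdSide] = @side,
            [OrdPrice] = @lstpri, [OrdQty] = @cumqty, [OrdRemain] = @lftqty
        WHERE [Ticker] = @sym;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Roll back both statements and record the failure
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        INSERT INTO ERRLOG ([Date], [Message])
        VALUES (GETDATE(), 'ERROR IN sprocVT4_addTradeLong, CLID = ' + ISNULL(@clid, 'NULL') + ': ' + ERROR_MESSAGE());
    END CATCH
END
That way the log insert and the order-table update either both commit or both roll back, and the error row is written only when something actually failed.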
Here's a primer on error handling.
I'm not 100% sure what you are asking, but it seems that you need to read up a bit on database transactions. Essentially you can wrap the set of queries in a transaction, and it will ensure that either all of the operations are completed, or none of them are. So if an error occurs, the entire operation will be rolled back.
http://en.wikipedia.org/wiki/Database_transaction
Related
I have a table similar to the following schema in SQL Server 2017.
This is the Sample table in the main database, where the TaxID column is encrypted using the SQL Server "Always Encrypted" feature:
CREATE TABLE [dbo].[Sample]
(
[CreatedDt] [smalldatetime] NOT NULL,
[LastModDt] [smalldatetime] NOT NULL,
[CompanyID] [int] IDENTITY(1,1) NOT NULL,
[CompanyName] [varchar](250) NOT NULL,
[CompanyTaxName] [varchar](250) NULL,
[TaxID] [varchar](15) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY =
[CEK_Auto1], ENCRYPTION_TYPE = Deterministic, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
[Active] [bit] NOT NULL
)
Then we have another table with the same schema in an archive database, for history purposes, with TaxID encrypted.
This is the table Sample in the Main_Archive database:
CREATE TABLE [dbo].[Sample]
(
[CreatedDt] [smalldatetime] NOT NULL,
[LastModDt] [smalldatetime] NOT NULL,
[CompanyArchiveID] [int] IDENTITY(1,1) NOT NULL,
[CompanyID] [int] IDENTITY(1,1) NOT NULL,
[CompanyName] [varchar](250) NOT NULL,
[CompanyTaxName] [varchar](250) NULL,
[TaxID] [varchar](15) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY =
[CEK_Auto1], ENCRYPTION_TYPE = Deterministic, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
[Active] [bit] NOT NULL
)
Now, we want to have a trigger on the main Sample table that inserts a new record into the archive Sample table for every update.
The trigger for the Sample table in the main database is as follows:
CREATE TRIGGER [dbo].[tr_iud_Sample]
ON [dbo].[Sample]
FOR INSERT, UPDATE, DELETE
AS
BEGIN
SET NOCOUNT ON
DECLARE @CurrDt AS SMALLDATETIME
SELECT @CurrDt = GETDATE()
DECLARE @CurrYear AS INT
SELECT @CurrYear = YEAR(@CurrDt)
UPDATE Sample
SET LastModDt = @CurrDt,
CreatedDt = CASE WHEN d.CompanyID IS NULL THEN @CurrDt ELSE Sample.CreatedDt END
FROM inserted i WITH (NOLOCK)
LEFT JOIN deleted d WITH (NOLOCK) ON d.CompanyID= i.CompanyID
WHERE Sample.CompanyID = i.CompanyID
INSERT INTO [Main_Archive].[dbo].Sample
SELECT CreatedDt, LastModDt, CompanyID, CompanyName, CompanyTaxName, TaxID, Active
FROM deleted
END
ALTER TABLE [dbo].[Sample] ENABLE TRIGGER [tr_iud_Sample]
GO
ALTER TABLE [dbo].[Vendor] DISABLE TRIGGER [tr_iud_Sample]
GO
But this fails and I get this error:
Msg 4920, Level 16, State 0, Line 50
Operand type clash: varchar(15) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'NCI_COMMON') collation_name = 'Latin1_General_BIN2' is incompatible with varchar
Is there a way to have a trigger on an encrypted table and, if so, how can I achieve the desired functionality?
Also, if SQL Server currently does not support that, is there any workaround to achieve it?
Thank you in advance
As you are using Always Encrypted, your SQL Server version supports System-Versioned Temporal Tables.
You can make your table system-versioned and leave the work of maintaining the history to the SQL Server engine (also, when you change your table design, the engine will propagate the changes to the history table).
Temporal tables can be queried using special clauses and give you new ways of analyzing historical data.
One disadvantage I have faced is that the history table columns must match the target table's, so if you need a ModifiedBy column in the history, you must change your application to populate that value in the original table.
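For illustration, a minimal sketch of turning the posted Sample table into a system-versioned table might look like the following. Note that system versioning requires a primary key on the table (not shown in the posted definition), and the period column names and history table name here are assumptions to adapt to your own conventions:
ALTER TABLE [dbo].[Sample]
ADD ValidFrom datetime2 GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT DF_Sample_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo datetime2 GENERATED ALWAYS AS ROW END HIDDEN
        CONSTRAINT DF_Sample_ValidTo DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
GO
ALTER TABLE [dbo].[Sample]
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[Sample_History]));
GO
-- Historical rows can then be read with the temporal clauses, e.g.:
SELECT * FROM [dbo].[Sample] FOR SYSTEM_TIME AS OF '2021-01-01';
After this, the engine writes the previous row versions to the history table automatically, so no trigger is needed.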
I have created a trigger which must update the total amount in the Account table. Whenever data is updated in the Sale table, the trigger executes a stored procedure that calculates the current amount and inserts it into Account, but when it is about to update the Account table, a rather strange error occurs:
The data in row 5 was not committed. Error Source: .Net SqlClient Data Provider. Error Message: Error converting data type nvarchar to bigint. The statement has been terminated.
Below is the Sale trigger script:
CREATE TRIGGER [dbo].[Trigger_Sale]
ON [dbo].[Sale]
FOR DELETE, INSERT, UPDATE
AS
BEGIN
exec ComputeAccountAmount ID_Account
END
And the procedure ComputeAccountAmount:
CREATE PROCEDURE [dbo].[ComputeAccountAmount]
@IdAccount bigint
AS
begin transaction
update Account set AccountAmount = (SELECT sum(AmountSold)
from Sale
where @IdAccount = ID_Account)
where @IdAccount = ID_Account
commit
I've already checked all the types the procedure uses; in the tables everything is bigint, as shown below:
CREATE TABLE [dbo].[Account] (
[ID_Account] BIGINT IDENTITY (1, 1) NOT NULL,
[ExpireDate] DATE NOT NULL,
[PurchaseLimit] MONEY NOT NULL,
[OpeningDate] DATE NOT NULL,
[ID_Customer] INT NOT NULL,
[AccountAmount] MONEY NULL,
CONSTRAINT [PK_Account] PRIMARY KEY CLUSTERED ([ID_Account] ASC),
CONSTRAINT [FK_Account_Customer] FOREIGN KEY ([ID_Customer]) REFERENCES [dbo].[Customer] ([ID_Customer]) ON DELETE CASCADE ON UPDATE CASCADE
);
CREATE TABLE [dbo].[Sale] (
[ID_Sale] INT IDENTITY (1, 1) NOT NULL,
[SaleDate] DATE NOT NULL,
[AmountSold] MONEY NOT NULL,
[ID_Account] BIGINT NULL,
PRIMARY KEY CLUSTERED ([ID_Account] ASC)
);
For testing, I'm using Visual Studio to manually verify the trigger. What's going on?
Obviously you have a syntax error in your trigger:
CREATE TRIGGER [dbo].[Trigger_Sale]
ON [dbo].[Sale]
FOR DELETE, INSERT, UPDATE
AS
BEGIN
exec ComputeAccountAmount ID_Account
END
What is ID_Account? It will throw an error.
You need to select the distinct account IDs from the INSERTED and DELETED tables in your trigger and call ComputeAccountAmount for each of those accounts. Something like:
CREATE TRIGGER [dbo].[Trigger_Sale] ON [dbo].[Sale]
FOR DELETE, INSERT, UPDATE
AS
BEGIN
    DECLARE @AccountID BIGINT

    DECLARE trCur CURSOR FAST_FORWARD READ_ONLY
    FOR
        SELECT ID_Account FROM DELETED
        UNION
        SELECT ID_Account FROM INSERTED

    OPEN trCur
    FETCH NEXT FROM trCur INTO @AccountID
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC ComputeAccountAmount @AccountID
        FETCH NEXT FROM trCur INTO @AccountID
    END
    CLOSE trCur
    DEALLOCATE trCur
END
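As a side note (not part of the original answer), a set-based alternative would avoid the cursor entirely by recomputing every affected account in one UPDATE, for example:
CREATE TRIGGER [dbo].[Trigger_Sale] ON [dbo].[Sale]
FOR DELETE, INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Recompute the balance for every account touched by this statement
    UPDATE a
    SET AccountAmount = (SELECT SUM(s.AmountSold)
                         FROM Sale s
                         WHERE s.ID_Account = a.ID_Account)
    FROM Account a
    WHERE a.ID_Account IN (SELECT ID_Account FROM inserted
                           UNION
                           SELECT ID_Account FROM deleted);
END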
I am using a SQL Server function which returns a bigint and using it in a trigger to assign a value to a column of type bigint. However, when I run the trigger, an overflow exception occurs (Arithmetic overflow error converting expression to data type int.), i.e. the value is treated as an int, not a bigint.
The function is:
ALTER FUNCTION [dbo].[longIntDateTime] ()
RETURNS bigint
AS
BEGIN
-- Declare the return variable here
DECLARE @ResultVar bigint;
DECLARE @now Datetime;
set @now = getdate();
SET @ResultVar = DATEPART(YYYY,@now)*100000000 + DATEPART(MM,@now)*1000000 + DATEPART(DD,@now)*10000 + DATEPART(HH,@now)*100;
-- DATEPART(HH,@now)*100 + DATEPART(MI,@now);
-- Return the result of the function
RETURN (@ResultVar);
END
The trigger is:
ALTER TRIGGER [dbo].[employeesInsert]
ON [dbo].[employees]
AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for trigger here
UPDATE employees
SET changeTimeStamp = dbo.longIntDateTime()
FROM inserted INNER JOIN employees On inserted._id = employees._id
END
and the table definition is:
CREATE TABLE [dbo].[employees](
[_id] [int] IDENTITY(1,1) NOT NULL,
[employee_name] [varchar](50) NOT NULL,
[password] [varchar](50) NOT NULL,
[isActive] [int] NOT NULL,
[isDeleted] [int] NOT NULL,
[changeTimeStamp] [bigint] NOT NULL,
CONSTRAINT [PK_employees] PRIMARY KEY CLUSTERED ([_id] ASC)
)
ALTER TABLE [dbo].[employees]
ADD CONSTRAINT [DF_employees_isActive] DEFAULT ((0)) FOR [isActive]
ALTER TABLE [dbo].[employees]
ADD CONSTRAINT [DF_employees_isDeleted] DEFAULT ((0)) FOR [isDeleted]
GO
If I take two '0's off the first yyyy multiplier in the function, the trigger succeeds; as it is, it fails.
Clearly the value produced is well within the bigint range.
Any ideas?
Anton
The problem is this line of code:
SET @ResultVar = DATEPART(YYYY,@now)*100000000 + DATEPART(MM,@now)*1000000 + DATEPART(DD,@now)*10000 + DATEPART(HH,@now)*100;
The constants are interpreted as int, so the entire calculation is done in int. You can fix this easily by casting the first constant to bigint:
SET @ResultVar = DATEPART(YYYY,@now)*cast(100000000 as bigint) + DATEPART(MM,@now)*1000000 + DATEPART(DD,@now)*10000 + DATEPART(HH,@now)*100;
It's because DATEPART returns int. Try to cast to bigint before multiplying:
cast(DATEPART(YYYY,@now) as bigint)*100000000
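To see why the cast is needed: all of the literals fit in int, so the multiplication is carried out in int arithmetic and overflows long before the result is assigned to the bigint variable. A quick check:
SELECT 2013 * 100000000;                  -- fails: arithmetic overflow converting expression to data type int
SELECT 2013 * CAST(100000000 AS bigint);  -- succeeds: 201300000000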
I'm trying to execute a SQL text file to create stored procedures on a SQL Server database. I'm also using this method to create user-defined table types, which the stored procedure(s) will use.
The creation of the table types works perfectly. However, when I go to create the stored procedures I'm getting the error:
'CREATE/ALTER PROCEDURE' must be the first statement in a query batch.
Here is the code which reads the file and executes it against the db:
public static void LoadStoredProcedures()
{
const string procedureLocation = "C:\\StoredProcedures.txt";
var reader = File.ReadAllText(procedureLocation);
var context = new prismEntities();
context.Database.ExecuteSqlCommand(reader);
}
public static void CreateTables()
{
const string tableLocation = "C:\\CreateTables.txt";
var reader = File.ReadAllText(tableLocation);
var context = new prismEntities();
context.Database.ExecuteSqlCommand(reader);
}
An example of a user-defined table type:
if not exists (select * from sys.table_types
where name like 'TextbookTable')
create type [dbo].[TextbookTable] as table (
[TXUID] [int] NOT NULL,
[SKU] [int] NOT NULL,
[UsedSKU] [int] NOT NULL,
[BindingID] [int] NOT NULL,
[TextStatusID] [int] NOT NULL,
[StatusDate] [datetime] NULL,
[Author] [char](45) NOT NULL,
[Title] [char](80) NOT NULL,
[ISBN] [char](30) NULL,
[Imprint] [char](10) NULL,
[Edition] [char](2) NULL,
[Copyright] [char](2) NULL,
[Type] [char](10) NULL,
[Bookkey] [varchar](10) NULL,
[Weight] [decimal](10, 4) NULL,
[ImageURL] [char](128) NULL,
primary key clustered
(
[TXUID] ASC
) with (ignore_dup_key = on)
)
An example of the stored procedure I'm attempting to create:
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[AddTextbook]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[AddTextbook]
create procedure [dbo].[AddTextbook]
(
@textbook TextbookTable readonly
)
as
begin
set nocount on;
set identity_insert textbook on
begin try
merge Textbook txt
using (select * from @textbook) as source
on txt.TXUID = source.TXUID
when not matched then
insert (TXUID, SKU, UsedSKU, BindingID, TextStatusID, StatusDate, Author,
Title, ISBN, Imprint, Edition, Copyright, Type, Bookkey, Weight, ImageURL)
values ( source.TXUID, source.SKU, source.UsedSKU, source.BindingID, source.TextStatusID,
source.StatusDate, source.Author, source.Title, source.ISBN,
source.Imprint, source.Edition,
source.Copyright, source.Type, source.Bookkey, source.Weight, source.ImageURL);
set identity_insert textbook off
end try
begin catch
declare @message varchar(128) = error_message()
select
ERROR_NUMBER() as ErrorNumber,
ERROR_SEVERITY() as ErrorSeverity,
ERROR_STATE() as ErrorState,
ERROR_PROCEDURE() as ErrorProcedure,
ERROR_LINE() as ErrorLine,
ERROR_MESSAGE() as ErrorMessage;
raiserror(@message, 16, 10)
end catch
end
grant execute on [dbo].[AddTextbook] to [public]
Now, about the order of the calls: CreateTables is called first, then LoadStoredProcedures. The tables get created with no problems. The stored procedures do not get created and generate the above-mentioned error. If I remove the 'if exists...' line, the stored procedure will get created; however, if there are others I'm trying to create in the same file, they will error out and not get created. I want to be able to manage this with one file, not a separate file for each stored procedure.
Does anyone know a workaround for this? Hopefully I have provided ample information. Thanks in advance.
Basically you're missing a bunch of GO statements between commands, so CREATE PROCEDURE is not the first statement in its batch.
For example, you have to change
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[AddTextbook]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[AddTextbook]
create procedure [dbo].[AddTextbook]
To be
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[AddTextbook]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[AddTextbook]
GO
create procedure [dbo].[AddTextbook]
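One caveat (my own observation, not stated above): GO is a batch separator understood by client tools such as SSMS and sqlcmd, not by the server itself, so a script sent as a single string through ExecuteSqlCommand will still fail on it. If you want to keep everything in one file and one call, either split the file on GO client-side before executing each piece, or wrap each CREATE PROCEDURE in dynamic SQL so it effectively starts its own batch, roughly like this:
IF EXISTS (SELECT * FROM dbo.sysobjects
           WHERE id = OBJECT_ID(N'[dbo].[AddTextbook]')
             AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
    DROP PROCEDURE [dbo].[AddTextbook];

-- The procedure body runs as its own child batch inside EXEC,
-- so CREATE PROCEDURE is the first statement of that batch.
EXEC (N'create procedure [dbo].[AddTextbook]
(
    @textbook TextbookTable readonly
)
as
begin
    set nocount on;
    -- ... body as posted in the question ...
end');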
I have a set of tables that are used to track bills. These tables are loaded from an SSIS process that runs weekly.
I am in the process of creating a second set of tables to track adjustments to the bills that are made via the web. Some of our clients hand-key their bills, and all of those entries need to be backed up on a more regular schedule (the SSIS-fed data can always be imported again, so it isn't backed up).
Is there a best practice for this type of behavior? I'm looking at implementing a DDL trigger that will parse the ALTER TABLE call and change the table being called. This is somewhat painful, and I'm curious if there is a better way.
I personally would have the SSIS-fed tables in one database (set to simple recovery mode) and the other tables in a separate database on the same server set to full recovery mode. Then I would set up backups on the second database on a regular schedule. A typical backup schedule would be a full backup once a week, differentials nightly, and transaction log backups every 15-30 minutes, depending on how much data is being input. Be sure to periodically test recovering the backups; learning how to do that when the customer is screaming because the database is down isn't a good thing.
I ended up using a DDL trigger to make a copy of changes from one table to the other. The only problem is that if a table or column name is part of a reserved word (for example ARCH, which appears inside VARCHAR), the string replacement will cause problems with the modification script.
Thanks, once again, to Brent Ozar for error checking my thoughts before I blogged them.
-- Create pvt and pvtWeb as test tables
CREATE TABLE [dbo].[pvt](
[VendorID] [int] NULL,
[Emp1] [int] NULL,
[Emp2] [int] NULL,
[Emp3] [int] NULL,
[Emp4] [int] NULL,
[Emp5] [int] NULL
) ON [PRIMARY];
GO
CREATE TABLE [dbo].[pvtWeb](
[VendorID] [int] NULL,
[Emp1] [int] NULL,
[Emp2] [int] NULL,
[Emp3] [int] NULL,
[Emp4] [int] NULL,
[Emp5] [int] NULL
) ON [PRIMARY];
GO
IF EXISTS (SELECT * FROM sys.triggers WHERE name = 'ddl_trigger_pvt_alter')
DROP TRIGGER ddl_trigger_pvt_alter ON DATABASE;
GO
-- Create a trigger that will trap ALTER TABLE events
CREATE TRIGGER ddl_trigger_pvt_alter
ON DATABASE
FOR ALTER_TABLE
AS
DECLARE @data XML;
DECLARE @tableName NVARCHAR(255);
DECLARE @newTableName NVARCHAR(255);
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';
-- Store the event in an XML variable
SET @data = EVENTDATA();
-- Get the name of the table that is being modified
SELECT @tableName = @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(255)');
-- Get the actual SQL that was executed
SELECT @sql = @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)');
-- Figure out the name of the new table
SET @newTableName = @tableName + N'Web';
-- Replace the original table name with the new table name
-- str_replace is from Robyn Page and Phil Factor's delightful post on
-- string arrays in SQL. The other posts on string functions are indispensable
-- to handling string input
--
-- http://www.simple-talk.com/sql/t-sql-programming/tsql-string-array-workbench/
-- http://www.simple-talk.com/sql/t-sql-programming/sql-string-user-function-workbench-part-1/
-- http://www.simple-talk.com/sql/t-sql-programming/sql-string-user-function-workbench-part-2/
SET @sql = dbo.str_replace(@tableName, @newTableName, @sql);
-- Debug the SQL if needed.
--PRINT @sql;
IF OBJECT_ID(@newTableName, N'U') IS NOT NULL
BEGIN
BEGIN TRY
-- Now that the table name has been changed, execute the new SQL
EXEC sp_executesql @sql;
END TRY
BEGIN CATCH
-- Rollback any existing transactions and report the full nasty
-- error back to the user.
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
DECLARE
    @ERROR_SEVERITY INT,
    @ERROR_STATE INT,
    @ERROR_NUMBER INT,
    @ERROR_LINE INT,
    @ERROR_MESSAGE NVARCHAR(4000);
SELECT
    @ERROR_SEVERITY = ERROR_SEVERITY(),
    @ERROR_STATE = ERROR_STATE(),
    @ERROR_NUMBER = ERROR_NUMBER(),
    @ERROR_LINE = ERROR_LINE(),
    @ERROR_MESSAGE = ERROR_MESSAGE();
RAISERROR('Msg %d, Line %d, :%s',
    @ERROR_SEVERITY,
    @ERROR_STATE,
    @ERROR_NUMBER,
    @ERROR_LINE,
    @ERROR_MESSAGE);
END CATCH
END
GO
ALTER TABLE pvt
ADD test INT NULL;
GO
EXEC sp_help pvt;
GO
ALTER TABLE pvt
DROP COLUMN test;
GO
EXEC sp_help pvt;
GO