I'm creating a SQL Server table via a trigger, and I want the table name to be specific each time.
For the end result, I want the table name to be tblTEMP_INA_DATA_12345.
I could obviously just type tblTEMP_INA_DATA_12345, but the @PlanID value will be different each time.
How could I modify the create table statement to do what I want? Is this possible?
I have searched, but I'm not sure what search terms to even use. I appreciate any and all responses even if the answer is no.
DECLARE @PlanID varchar(80)
SET @PlanID = 12345
CREATE TABLE [dbo].[tblTEMP_INA_DATA_]
(
[strQuestion] [varchar](max) NULL,
[strAnswer] [varchar](max) NULL
) ON [PRIMARY]
You can use dynamic SQL to do this, like below:
DECLARE @PlanID varchar(80), @sql nvarchar(max);
SET @PlanID = '123456'
SET @sql = 'CREATE TABLE [dbo].' + QUOTENAME('tblTEMP_INA_DATA_' + @PlanID) + '
([strQuestion] [varchar](max) NULL,
[strAnswer] [varchar](max) NULL
) ON [PRIMARY]'
EXEC (@sql);
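If @PlanID might ever come from user input rather than a literal, a slightly more defensive version (my own sketch, not part of the original answer) would check that the value is all digits before concatenating, and run the statement through sp_executesql:
DECLARE @PlanID varchar(80) = '123456', @sql nvarchar(max);
-- Only build the statement if @PlanID contains nothing but digits
IF LEN(@PlanID) > 0 AND @PlanID NOT LIKE '%[^0-9]%'
BEGIN
    SET @sql = N'CREATE TABLE [dbo].' + QUOTENAME(N'tblTEMP_INA_DATA_' + @PlanID) + N'
    (
        [strQuestion] [varchar](max) NULL,
        [strAnswer] [varchar](max) NULL
    ) ON [PRIMARY];';
    EXEC sp_executesql @sql;
END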
Related
I have a table similar to the following schema in SQL Server 2017:
Table Sample in the main database where TaxID column is encrypted using SQL Server "Always Encrypted" feature:
CREATE TABLE [dbo].[Sample]
(
[CreatedDt] [smalldatetime] NOT NULL,
[LastModDt] [smalldatetime] NOT NULL,
[CompanyID] [int] IDENTITY(1,1) NOT NULL,
[CompanyName] [varchar](250) NOT NULL,
[CompanyTaxName] [varchar](250) NULL,
[TaxID] [varchar](15) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY =
[CEK_Auto1], ENCRYPTION_TYPE = Deterministic, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
[Active] [bit] NOT NULL
)
Then we have another table with the same schema in an archive database for history purposes, with TaxID encrypted.
This is the table Sample in the Main_Archive database:
CREATE TABLE [dbo].[Sample]
(
[CreatedDt] [smalldatetime] NOT NULL,
[LastModDt] [smalldatetime] NOT NULL,
[CompanyArchiveID] [int] IDENTITY(1,1) NOT NULL,
[CompanyID] [int] NOT NULL,
[CompanyName] [varchar](250) NOT NULL,
[CompanyTaxName] [varchar](250) NULL,
[TaxID] [varchar](15) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY =
[CEK_Auto1], ENCRYPTION_TYPE = Deterministic, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
[Active] [bit] NOT NULL
)
Now, we want to have a trigger on the main Sample table that inserts a new record into the archive Sample table for every update.
The trigger for the Sample table in the main database is as follows:
CREATE TRIGGER [dbo].[tr_iud_Sample]
ON [dbo].[Sample]
FOR INSERT, UPDATE, DELETE
AS
BEGIN
SET NOCOUNT ON
DECLARE @CurrDt AS SMALLDATETIME
SELECT @CurrDt = GETDATE()
DECLARE @CurrYear AS INT
SELECT @CurrYear = YEAR(@CurrDt)
UPDATE Sample
SET LastModDt = @CurrDt,
CreatedDt = CASE WHEN d.CompanyID IS NULL THEN @CurrDt ELSE Sample.CreatedDt END
FROM inserted i WITH (NOLOCK)
LEFT JOIN deleted d WITH (NOLOCK) ON d.CompanyID= i.CompanyID
WHERE Sample.CompanyID = i.CompanyID
INSERT INTO [Main_Archive].[dbo].Sample
SELECT CreatedDt, LastModDt, CompanyID, CompanyName, CompanyTaxName, TaxID, Active
FROM deleted
END
ALTER TABLE [dbo].[Sample] ENABLE TRIGGER [tr_iud_Sample]
GO
ALTER TABLE [dbo].[Vendor] DISABLE TRIGGER [tr_iud_Sample]
GO
But this fails and I get this error:
Msg 4920, Level 16, State 0, Line 50
Operand type clash: varchar(15) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'NCI_COMMON') collation_name = 'Latin1_General_BIN2' is incompatible with varchar
Is there a way to have a trigger on an encrypted table and, if so, how can I achieve the desired functionality?
Also, if SQL Server currently does not support that, is there any workaround to achieve it?
Thank you in advance
As you are using Always Encrypted, your SQL Server version (2017) also supports System-Versioned Temporal Tables.
You can make your table system-versioned and leave the work of maintaining the history to the SQL Server Engine (also, when you change your table design, the engine will propagate the changes to the history table).
Temporal tables can be queried using special clauses and give you new ways of analyzing historical data.
One disadvantage I have faced is that the history table columns must match the target table's - so, if you need a ModifiedBy column in the history, you must change your application to populate that value in the original table.
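As a rough sketch of what that looks like (the period column names and the history table name here are my own; this assumes the table already has a primary key, which system versioning requires, and note that the history table must live in the same database):
-- Add the period columns and switch system versioning on (illustrative names)
ALTER TABLE [dbo].[Sample]
    ADD [ValidFrom] datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
            CONSTRAINT [DF_Sample_ValidFrom] DEFAULT SYSUTCDATETIME(),
        [ValidTo] datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
            CONSTRAINT [DF_Sample_ValidTo] DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999'),
        PERIOD FOR SYSTEM_TIME ([ValidFrom], [ValidTo]);

ALTER TABLE [dbo].[Sample]
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[Sample_History]));

-- Historical rows can then be queried with the temporal clauses, for example:
SELECT * FROM [dbo].[Sample] FOR SYSTEM_TIME AS OF '2019-01-01T00:00:00';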
I use this code to update one of my tables by calling a function which generates a random ID for each item. I started with around 1000 rows, but now the size is growing and I find that there are duplicate IDs in the table. Is there any way I can modify the code I am using so that it looks for IDs that are already in the table and generates a new code if there is a duplicate? I also noticed
Your code shows you setting the field password, but the results show that UniqueID is the duplicated field. (Maybe it's password renamed?)
Assuming userId is unique: (if not, ADD an actual identity column NOW, "ALTER TABLE dbo.Users ADD ID INT NOT NULL IDENTITY(1, 1)" should do the trick) and assuming password is the field to change, use the following:
DECLARE @FN VARCHAR(20);
DECLARE @LN VARCHAR(20);
DECLARE @PW VARCHAR(20);
DECLARE @ID INT;
SELECT TOP 1
@FN = FirstName,
@LN = LastName,
@ID = userID
FROM dbo.Users
WHERE Password IS NULL;
WHILE @@ROWCOUNT = 1
BEGIN
SET @PW = dbo.GenerateID(@FN, @LN);
WHILE EXISTS (SELECT TOP 1 Password FROM dbo.Users WHERE Password = @PW)
SET @PW = dbo.GenerateID(@FN, @LN);
UPDATE dbo.Users SET Password = @PW WHERE userId = @ID;
SELECT TOP 1
@FN = FirstName,
@LN = LastName,
@ID = userID
FROM dbo.Users
WHERE Password IS NULL;
END
This should look for a blank password. If none is found, the outer loop is skipped. If one is found, we generate passwords until we find one not already in the table. Next we look for another row with a blank password before the end of the outer loop.
Sounds like you're new to this. Don't worry, T-SQL is pretty easy to learn. First things first, I suggest that you create a unique non-clustered index on the UniqueID column--this will prevent duplicate values from being inserted into your table. If someone does try to insert a duplicate value into the table, it will throw an exception. Before you can use this, though, you'll need to remove all the duplicate UniqueID values from your table.
CREATE UNIQUE NONCLUSTERED INDEX [IDX_UniqueID] ON [dbo].[Users]
(
[UniqueID] ASC
) ON [PRIMARY]
You can learn more about non-clustered indexes here: https://learn.microsoft.com/en-us/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described
I also suggest that you consider changing the underlying type of your UniqueID field to a 'uniqueidentifier.' Here's an example of a table schema that uses a 'uniqueidentifier' column type for the UniqueID column:
CREATE TABLE [dbo].[Users](
[personId] [int] IDENTITY(1,1) NOT NULL,
[firstName] [nvarchar](50) NOT NULL,
[lastName] [nvarchar](50) NOT NULL,
[UniqueID] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED
(
[personId] ASC
) ON [PRIMARY]
) ON [PRIMARY]
A 'uniqueidentifier' column type in SQL Server holds a Globally Unique Identifier (aka a GUID or UUID). It's easy to generate a GUID in most languages. To generate a GUID in T-SQL you just need to invoke the NEWID() function.
SELECT NEWID() -- output: D100FC00-B482-4580-A161-199BE264C1D1
You can learn more about GUIDs here: https://en.wikipedia.org/wiki/Universally_unique_identifier
Hope this helps. Best of luck on your project. :)
So I'm new to creating SPs and right now I am trying to create an SP to insert values into my table Report below.
CREATE TABLE Report (
ID UNIQUEIDENTIFIER PRIMARY KEY NOT NULL,
STAFF VARCHAR(1000)NOT NULL,
EMAIL VARCHAR(1000)NOT NULL,
LASTCHANGE DATE NOT NULL
)
CREATE PROCEDURE spInsertOrUpdate(
@ID UNIQUEIDENTIFIER,
@STAFF VARCHAR(1000),
@EMAIL VARCHAR(1000),
@LASTCHANGE DATETIME
) AS
BEGIN
INSERT INTO Report(ID, STAFF, EMAIL, LASTCHANGE)
VALUES(@ID, @STAFF, @EMAIL, @LASTCHANGE)
END
EXEC spInsertOrUpdate NEWID, 'Evlyn Dawson', 'evdawson@gmail.com', GETDATE
Right after executing the SP I get the following error:
Msg 8114, Level 16, State 5, Procedure spInsertOrUpdate, Line 0 Error converting data type nvarchar to uniqueidentifier
Can someone please help me out with this issue?
So if you're just calling your stored procedure with GETDATE and NEWID, why don't you just add them as defaults on your table?
Table
CREATE TABLE [dbo].[Report](
[ID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_Report_ID] DEFAULT (newid()),
[STAFF] [varchar](1000) NOT NULL,
[EMAIL] [varchar](1000) NOT NULL,
[LASTCHANGE] [datetime] NOT NULL CONSTRAINT [DF_Report_LASTCHANGE] DEFAULT (getdate()),
CONSTRAINT [PK__Report__3214EC27D2D8BF72] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Procedure
CREATE PROCEDURE spInsertOrUpdate(
@STAFF VARCHAR(1000),
@EMAIL VARCHAR(1000)
) AS
BEGIN
INSERT INTO Report(STAFF, EMAIL)
VALUES(@STAFF, @EMAIL)
END
Execute statement
EXEC spInsertOrUpdate 'Evlyn Dawson', 'evdawson@gmail.com'
Edit
Please also note that your LASTCHANGE column is of type DATE; if you want a date with a time component, you should use DATETIME.
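If you do want the time of day, a one-line change against the table from the question should be enough (a sketch, assuming no constraint is bound to the column yet):
ALTER TABLE Report ALTER COLUMN LASTCHANGE DATETIME NOT NULL;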
This error message is a bit of a wild goose chase; the problem is that both NEWID() and GETDATE() are functions, so they require parentheses. Unfortunately, you cannot pass a function call as a parameter to a stored procedure, so you would first need to assign the values to variables:
DECLARE @ID UNIQUEIDENTIFIER = NEWID(),
@Date DATE = GETDATE();
EXEC spInsertOrUpdate @ID, 'Evlyn Dawson', 'evdawson@gmail.com', @Date;
As an aside, a UNIQUEIDENTIFIER column is a very poor choice for a clustering key
I would start by calling the functions correctly:
EXEC spInsertOrUpdate NEWID(), 'Evlyn Dawson', 'evdawson@gmail.com', GETDATE();
NEWID() and GETDATE() are functions, so you need parentheses after them.
However, I don't think the lack of parentheses would cause that particular error. You would need to set variables first, and then use them for the exec.
EDIT:
A better approach is to set the ids and dates automatically:
CREATE TABLE Report (
ID UNIQUEIDENTIFIER PRIMARY KEY NOT NULL DEFAULT NEWID(),
STAFF VARCHAR(1000) NOT NULL,
EMAIL VARCHAR(1000) NOT NULL,
LASTCHANGE DATE NOT NULL DEFAULT GETDATE()
);
CREATE PROCEDURE spInsertOrUpdate (
@STAFF VARCHAR(1000),
@EMAIL VARCHAR(1000)
) AS
BEGIN
INSERT INTO Report(STAFF, EMAIL)
VALUES (@STAFF, @EMAIL)
END;
EXEC spInsertOrUpdate 'Evlyn Dawson', 'evdawson@gmail.com';
I would also discourage you from using unique identifiers as primary keys in the table. They are rather inefficient, because they can lead to page fragmentation. Use an identity column instead.
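A minimal sketch of that suggestion, keeping the GUID as a regular column but clustering on an identity key (the ReportID name is mine, not from the question):
CREATE TABLE Report (
    ReportID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,   -- narrow, ever-increasing clustering key
    ID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),      -- still usable as an external identifier
    STAFF VARCHAR(1000) NOT NULL,
    EMAIL VARCHAR(1000) NOT NULL,
    LASTCHANGE DATE NOT NULL DEFAULT GETDATE()
);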
Thanks for all your help. I finally found a proper way of doing this within the SP, and I have a proper understanding of SPs now. This is how I resolved the issue:
CREATE PROCEDURE spInsertOrUpdate(@STAFF VARCHAR(1000),@EMAIL VARCHAR(1000),@CARS VARCHAR(1000))
AS
BEGIN
INSERT INTO Report(ID,STAFF,EMAIL,CARS,LASTCHANGE)
VALUES(NEWID(),@STAFF,@EMAIL,@CARS,GETDATE())
END
EXEC spInsertOrUpdate 'Evlyn Dawson','evdawson@gmail.com','Ferrari'
Note that I also have a CARS column.
I am using a SQL Server function which returns a bigint and using it in a trigger to assign a value to a column of type bigint. However, when I run the trigger, an overflow exception occurs (Arithmetic overflow error converting expression to data type int.), i.e. the expression is treated as an int, not a bigint.
The function is:
ALTER FUNCTION [dbo].[longIntDateTime] ()
RETURNS bigint
AS
BEGIN
-- Declare the return variable here
DECLARE @ResultVar bigint;
DECLARE @now Datetime;
SET @now = GETDATE();
SET @ResultVar = DATEPART(YYYY,@now)*100000000 + DATEPART(MM,@now)*1000000 + DATEPART(DD,@now)*10000 + DATEPART(HH,@now)*100;
-- DATEPART(HH,@now)*100 + DATEPART(MI,@now);
-- Return the result of the function
RETURN (@ResultVar);
END
The trigger is:
ALTER TRIGGER [dbo].[employeesInsert]
ON [dbo].[employees]
AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for trigger here
UPDATE employees
SET changeTimeStamp = dbo.longIntDateTime()
FROM inserted INNER JOIN employees On inserted._id = employees._id
END
and the table definition is:
CREATE TABLE [dbo].[employees](
[_id] [int] IDENTITY(1,1) NOT NULL,
[employee_name] [varchar](50) NOT NULL,
[password] [varchar](50) NOT NULL,
[isActive] [int] NOT NULL,
[isDeleted] [int] NOT NULL,
[changeTimeStamp] [bigint] NOT NULL,
CONSTRAINT [PK_employees] PRIMARY KEY CLUSTERED ([_id] ASC)
)
ALTER TABLE [dbo].[employees]
ADD CONSTRAINT [DF_employees_isActive] DEFAULT ((0)) FOR [isActive]
ALTER TABLE [dbo].[employees]
ADD CONSTRAINT [DF_employees_isDeleted] DEFAULT ((0)) FOR [isDeleted]
GO
If I take two '0's off the first yyyy part of the function, the trigger succeeds; however, as is, it fails.
Clearly the value produced is well within the bigint range.
Any ideas?
Anton
The problem is this line of code:
SET @ResultVar = DATEPART(YYYY,@now)*100000000 + DATEPART(MM,@now)*1000000 + DATEPART(DD,@now)*10000 + DATEPART(HH,@now)*100;
The constants are interpreted as int, so the entire calculation is done that way. You can fix this easily by casting the first constant to bigint:
SET @ResultVar = DATEPART(YYYY,@now)*cast(100000000 as bigint) + DATEPART(MM,@now)*1000000 + DATEPART(DD,@now)*10000 + DATEPART(HH,@now)*100;
It's because DATEPART returns int. Try to cast to bigint before multiplying:
cast(DATEPART(YYYY,@now) as bigint)*100000000
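A quick standalone check (not from either answer) that shows the difference:
SELECT 2147483647 + 1;                  -- fails: arithmetic overflow on int
SELECT CAST(2147483647 AS bigint) + 1;  -- returns 2147483648, computed as bigint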
I have a set of tables that are used to track bills. These tables are loaded from an SSIS process that runs weekly.
I am in the process of creating a second set of tables to track adjustments to the bills that are made via the web. Some of our clients hand key their bills and all of those entries need to be backed up on a more regular schedule (the SSIS fed data can always be imported again so it isn't backed up).
Is there a best practice for this type of behavior? I'm looking at implementing a DDL trigger that will parse the ALTER TABLE call and change the table being called. This is somewhat painful, and I'm curious if there is a better way.
I personally would have the SSIS-fed tables in one database (set to simple recovery mode) and the other tables in a separate database on the same server set to full recovery mode. Then I would set up backups on the second database on a regular schedule. A typical backup schedule would be a full backup once a week, differentials nightly, and transaction log backups every 15-30 minutes, depending on how much data is being input. Be sure to periodically test restoring the backups; learning how to do that when the customer is screaming because the database is down isn't a good thing.
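For reference, the three backup types look roughly like this (the database name and file paths are placeholders):
-- Weekly full backup
BACKUP DATABASE WebAdjustments TO DISK = 'D:\Backups\WebAdjustments_full.bak' WITH INIT;
-- Nightly differential
BACKUP DATABASE WebAdjustments TO DISK = 'D:\Backups\WebAdjustments_diff.bak' WITH DIFFERENTIAL;
-- Transaction log backup every 15-30 minutes (requires the full recovery model)
BACKUP LOG WebAdjustments TO DISK = 'D:\Backups\WebAdjustments_log.trn';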
I ended up using a DDL trigger to make a copy of changes from one table to the other. The only problem is that if a table or column name matches part of another token in the script - ARCH inside VARCHAR, for example - it will cause problems with the modification script.
Thanks, once again, to Brent Ozar for error checking my thoughts before I blogged them.
-- Create pvt and pvtWeb as test tables
CREATE TABLE [dbo].[pvt](
[VendorID] [int] NULL,
[Emp1] [int] NULL,
[Emp2] [int] NULL,
[Emp3] [int] NULL,
[Emp4] [int] NULL,
[Emp5] [int] NULL
) ON [PRIMARY];
GO
CREATE TABLE [dbo].[pvtWeb](
[VendorID] [int] NULL,
[Emp1] [int] NULL,
[Emp2] [int] NULL,
[Emp3] [int] NULL,
[Emp4] [int] NULL,
[Emp5] [int] NULL
) ON [PRIMARY];
GO
IF EXISTS(SELECT * FROM sys.triggers WHERE name = 'ddl_trigger_pvt_alter')
DROP TRIGGER ddl_trigger_pvt_alter ON DATABASE;
GO
-- Create a trigger that will trap ALTER TABLE events
CREATE TRIGGER ddl_trigger_pvt_alter
ON DATABASE
FOR ALTER_TABLE
AS
DECLARE @data XML;
DECLARE @tableName NVARCHAR(255);
DECLARE @newTableName NVARCHAR(255);
DECLARE @sql NVARCHAR(MAX);
SET @sql = '';
-- Store the event in an XML variable
SET @data = EVENTDATA();
-- Get the name of the table that is being modified
SELECT @tableName = @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(255)');
-- Get the actual SQL that was executed
SELECT @sql = @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)');
-- Figure out the name of the new table
SET @newTableName = @tableName + 'Web';
-- Replace the original table name with the new table name
-- str_replace is from Robyn Page and Phil Factor's delightful post on
-- string arrays in SQL. The other posts on string functions are indispensable
-- to handling string input
--
-- http://www.simple-talk.com/sql/t-sql-programming/tsql-string-array-workbench/
-- http://www.simple-talk.com/sql/t-sql-programming/sql-string-user-function-workbench-part-1/
--http://www.simple-talk.com/sql/t-sql-programming/sql-string-user-function-workbench-part-2/
SET @sql = dbo.str_replace(@tableName, @newTableName, @sql);
-- Debug the SQL if needed.
--PRINT @sql;
IF OBJECT_ID(@newTableName, N'U') IS NOT NULL
BEGIN
BEGIN TRY
-- Now that the table name has been changed, execute the new SQL
EXEC sp_executesql @sql;
END TRY
BEGIN CATCH
-- Rollback any existing transactions and report the full nasty
-- error back to the user.
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
DECLARE
@ERROR_SEVERITY INT,
@ERROR_STATE INT,
@ERROR_NUMBER INT,
@ERROR_LINE INT,
@ERROR_MESSAGE NVARCHAR(4000);
SELECT
@ERROR_SEVERITY = ERROR_SEVERITY(),
@ERROR_STATE = ERROR_STATE(),
@ERROR_NUMBER = ERROR_NUMBER(),
@ERROR_LINE = ERROR_LINE(),
@ERROR_MESSAGE = ERROR_MESSAGE();
RAISERROR('Msg %d, Line %d, :%s',
@ERROR_SEVERITY,
@ERROR_STATE,
@ERROR_NUMBER,
@ERROR_LINE,
@ERROR_MESSAGE);
END CATCH
END
GO
ALTER TABLE pvt
ADD test INT NULL;
GO
EXEC sp_help pvt;
GO
ALTER TABLE pvt
DROP COLUMN test;
GO
EXEC sp_help pvt;
GO
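The trigger above depends on dbo.str_replace from the linked articles. If all you need is the straightforward substitution used here, a minimal stand-in (my sketch, assuming the argument order shown above: search value, replacement, subject) can simply wrap the built-in REPLACE; it has the same substring limitation mentioned earlier:
CREATE FUNCTION dbo.str_replace
(
    @search NVARCHAR(MAX),      -- value to find (the original table name)
    @replacement NVARCHAR(MAX), -- value to substitute (the Web table name)
    @subject NVARCHAR(MAX)      -- string to operate on (the captured SQL)
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
    RETURN REPLACE(@subject, @search, @replacement);
END
GO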