IF-THEN-ELSE: Create / Truncate Table - SQL Server

I am attempting to create a backup table without having to re-create it every single time. If the table already exists in the next run then it should simply truncate the table.
But it doesn't seem to be working. It says backup_reportsettings is already in the database. Can anyone assist me with this?
--Only re-create table if table does not exist otherwise truncate the existing table.
IF NOT EXISTS (SELECT * FROM [Misc].sys.tables where name= 'dbo.backup_reportsettings')
CREATE TABLE [MISC].dbo.backup_reportsettings
(
[datestamp] [datetime] NULL,
[reportsettingid] [char](8) NOT NULL,
[description] [char](30) NOT NULL,
[formname] [char](30) NOT NULL,
[usersid] [char](8) NOT NULL,
[settings] [text] NOT NULL,
[notes] [varchar](255) NOT NULL,
[userdefault] [char](1) NOT NULL
)
ELSE
TRUNCATE TABLE [Misc].dbo.backup_reportsettings;
What am I doing wrong? Note: this is done within a transaction.

Object names in sys.tables don't have the schema as part of the name. Remove the table schema when verifying whether the table exists:
IF NOT EXISTS (SELECT * FROM [Misc].sys.tables where name= 'backup_reportsettings')

Despite the use of IF, SQL Server needs to Parse/Compile all the statements in your script, so when it sees a CREATE TABLE statement it will give you a compilation error if the table already exists, even though the IF would prevent that code from being executed when that is the case.
The way to get around this is to put your CREATE TABLE statement in dynamic SQL, which will not be parsed/compiled before execution.
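For illustration, here is a rough sketch of the same script with the CREATE TABLE wrapped in dynamic SQL (and with the existence check corrected as described above); treat it as a starting point rather than a drop-in replacement:
--Re-create the table only if it does not exist; otherwise truncate it.
--The CREATE TABLE is executed as dynamic SQL so it is not compiled up front.
IF NOT EXISTS (SELECT * FROM [Misc].sys.tables WHERE name = 'backup_reportsettings')
    EXEC ('CREATE TABLE [Misc].dbo.backup_reportsettings
    (
        [datestamp] [datetime] NULL,
        [reportsettingid] [char](8) NOT NULL,
        [description] [char](30) NOT NULL,
        [formname] [char](30) NOT NULL,
        [usersid] [char](8) NOT NULL,
        [settings] [text] NOT NULL,
        [notes] [varchar](255) NOT NULL,
        [userdefault] [char](1) NOT NULL
    )')
ELSE
    TRUNCATE TABLE [Misc].dbo.backup_reportsettings;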

Related

SQL Server Prevent Update on Column (datetime2 column value set by database on insert)

I have a table Values with 3 columns:
CREATE TABLE [dbo].[Values]
(
[Id] [uniqueidentifier] NOT NULL,
[Value] [nvarchar](150) NOT NULL,
[CreatedOnUtc] [datetime2](7) NOT NULL
)
I want SQL Server to set the value of CreatedOnUtc to UTC-Now whenever a new entry is created, and not allow an external command to set this value.
Is this possible?
This is sort of two questions. For the first:
CREATE TABLE [dbo].[Values] (
[Id] [uniqueidentifier] NOT NULL,
[Value] [nvarchar](150) NOT NULL,
[CreatedOnUtc] [datetime2](7) NOT NULL DEFAULT SYSUTCDATETIME()
);
The canonical way to prevent changes to the column is to use a trigger that prevents the value from being updated or inserted.
Note that Values is a really bad name for a table because it is a SQL keyword and SQL Server reserved word. Choose identifiers that do not need to be escaped.
There are other ways. For instance, you could turn off DML access to the table. Then create a view without CreatedOnUtc and only allow inserts and updates through the view.
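For the trigger approach, a minimal sketch (the trigger name is made up for this example; it rejects any UPDATE that tries to change CreatedOnUtc, joining on Id):
CREATE TRIGGER [dbo].[trg_Values_ProtectCreatedOnUtc]
ON [dbo].[Values]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Reject the whole statement if CreatedOnUtc was changed for any row
    IF UPDATE(CreatedOnUtc)
       AND EXISTS (SELECT 1
                   FROM inserted i
                   JOIN deleted d ON d.Id = i.Id
                   WHERE i.CreatedOnUtc <> d.CreatedOnUtc)
    BEGIN
        RAISERROR('CreatedOnUtc cannot be modified.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END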

Insert only modified values and column names into a table

I have a SQL Server 2012 database in which I have a ChangeLog table that contains
TableName, ColumnName, FromValue and ToValue columns, which will be used to keep track of modified columns and data.
So if any update occurs through the application, only the modified columns should be inserted into this table with their new and old values.
Can anyone help me with this?
For example:
If a procedure updates all columns of the property table (propertyName, address), and the user changes only propertyName (the update still includes the address column, but with no data change), then only propertyName and its data should be inserted into the ChangeLog table, not the address column and its data, because the address data does not contain any change.
If there is no other auditing requirement at all - you would not be thinking about auditing in any way without this - then OK, go for it. However, this is a very limited use of auditing: user X changed this field at time Y. Generally that is interesting as part of a wider question: what did user X do? How did that customer data end up in the state it is in now?
Questions like that are harder to answer with the data structure you propose, and quite onerous to reconstruct from it. My usual approach would be as follows, starting from a base table like this one (from one of my current projects):
CREATE TABLE [de].[Generation](
[Id] [int] IDENTITY(1,1) NOT NULL,
[LocalTime] [datetime] NOT NULL,
[EntityId] [int] NOT NULL,
[Generation] [decimal](18, 4) NOT NULL,
[UpdatedAt] [datetime] NOT NULL CONSTRAINT [DF_Generation_UpdatedAt] DEFAULT (getdate()),
CONSTRAINT [PK_Generation] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
)
(I've excluded FK definitions as they aren't relevant here.)
First create an Audit table for this table:
CREATE TABLE [de].[GenerationAudit](
[AuditId] int identity(1, 1) not null,
[Id] [int] NOT NULL,
[LocalTimeOld] [datetime] NULL,
[EntityIdOld] [int] NULL,
[GenerationOld] [decimal](18, 4) null,
[UpdatedAtOld] [datetime] null,
[LocalTimeNew] [datetime] null,
[EntityIdNew] [int] null,
[GenerationNew] [decimal](18, 4) null,
[UpdatedAtNew] [datetime] NOT NULL CONSTRAINT [DF_GenerationAudit_UpdatedAt] DEFAULT (getdate()),
[UpdatedBy] varchar(60) not null,
CONSTRAINT [PK_GenerationAudit] PRIMARY KEY CLUSTERED
(
[AuditId] ASC
)
)
This table has an *Old and a *New version of each column that can change. The Id, being an IDENTITY PK, can't change, so there's no need for an old/new pair for it. I've also added an UpdatedBy column. The table also has its own AuditId IDENTITY PK.
Next create three triggers on the base table: one for INSERT, one for UPDATE and one for DELETE. In the INSERT trigger, insert a row into the Audit table with the New columns selected from the inserted table and the Old values as null. In the UPDATE one, the Old values come from the deleted table and the New from the inserted table. In the DELETE trigger, the Old values come from deleted and the New are all null.
The UPDATE trigger would look like this:
CREATE TRIGGER GenerationAuditUpdate
ON de.Generation
AFTER UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
insert into de.GenerationAudit (Id, LocalTimeOld, EntityIdOld, GenerationOld, UpdatedAtOld,
LocalTimeNew, EntityIdNew, GenerationNew, UpdatedAtNew,
UpdatedBy)
select isnull(i.Id, d.Id), d.LocalTime, d.EntityId, d.Generation, d.UpdatedAt,
i.LocalTime, i.EntityId, i.Generation, getdate(),
SYSTEM_USER
from inserted i
full outer join deleted d on d.Id = i.Id;
END
GO
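The INSERT trigger is the same shape with the Old columns simply left null (and the DELETE trigger mirrors it, taking its values from deleted); a sketch, with the trigger name following the same pattern:
CREATE TRIGGER GenerationAuditInsert
ON de.Generation
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    insert into de.GenerationAudit (Id, LocalTimeNew, EntityIdNew, GenerationNew, UpdatedAtNew, UpdatedBy)
    select i.Id, i.LocalTime, i.EntityId, i.Generation, getdate(), SYSTEM_USER
    from inserted i;
END
GO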
You then have a full before/after picture of each change (and it'll be faster than separating out diffs column by column). You can create views over the Audit table to get entries where the Old value is different to the New, and include the base table Id (which you will also need in your structures!), the user who made the change, and the time they made it (UpdatedAtNew).
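A minimal sketch of such a view; the view name and column choice are mine rather than part of the original schema:
CREATE VIEW de.GenerationAuditChanges
AS
SELECT AuditId, Id, GenerationOld, GenerationNew, UpdatedBy, UpdatedAtNew
FROM de.GenerationAudit
WHERE GenerationOld <> GenerationNew
   OR (GenerationOld IS NULL AND GenerationNew IS NOT NULL)
   OR (GenerationOld IS NOT NULL AND GenerationNew IS NULL);
GO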
That's my version of Auditing and it's mine!

Why does adding a nullable default constraint to an existing column take so long?

I have an existing table with approximately 400 million rows. That table includes a set of bit columns named IsModified, IsDeleted, and IsExpired.
CREATE TABLE [dbo].[ActivityAccumulator](
[ActivityAccumulator_SK] [int] IDENTITY(1,1) NOT NULL,
[ActivityAccumulatorPK1] [int] NULL,
[UserPK1] [int] NULL,
[Data] [varchar](510) NULL,
[CoursePK1] [int] NULL,
[TimeStamp] [datetime] NULL,
[SessionID] [int] NULL,
[Status] [varchar](50) NULL,
[EventType] [varchar](40) NULL,
[DWCreated] [datetime] NULL,
[DWModified] [datetime] NULL,
[IsModified] [bit] NULL,
[DWDeleted] [datetime] NULL,
[IsDeleted] [bit] NULL,
[ActivityAccumulatorKey] [bigint] NULL,
[ContentPK1] [bigint] NULL
) ON [PRIMARY]
I would like to add a default constraint to the table that, for all future inserted rows, will default those bit columns to 0. I'm trying to do this via the following command:
ALTER TABLE ActivityAccumulator
ADD CONSTRAINT DF_ActivityAccumulatorIsExpired DEFAULT (0) FOR IsExpired
ALTER TABLE ActivityAccumulator
ADD CONSTRAINT DF_ActivityAccumulatorIsDeleted DEFAULT (0) FOR IsDeleted
ALTER TABLE ActivityAccumulator
ADD CONSTRAINT DF_ActivityAccumulatorIsModified DEFAULT (0) FOR IsModified
I'd eventually like to go back and clean up the existing data to put the zero value in wherever there are NULL values, but I don't really need to do so right now.
Just trying to run the first ADD CONSTRAINT command has been executing for over an hour now. Given that I'm not trying to change any existing values, why is this taking so long?
One possibility may be that you have another process on your server that's locking this table.
Imagine I have two SSMS windows open, and in the first one I execute these commands:
-- Session 1
CREATE TABLE Foo(IsTrue BIT)
INSERT INTO Foo VALUES (1),(1),(0)
BEGIN TRANSACTION
UPDATE Foo SET IsTrue = 1 - IsTrue
If I then leave that SSMS window open so the transaction never closes, trying to execute this simple constraint command in the other SSMS session will hang forever:
-- Session 2
ALTER TABLE Foo ADD CONSTRAINT FooDefault DEFAULT(0) FOR IsTrue
Note that in this example, the size or complexity of the table is irrelevant; I'm forced to wait for the transaction to complete. My alter instruction in session 2 won't complete until I release the lock on Foo either by COMMITing the transaction or closing session 1.
How can you tell if this is your problem? Have a look at the "Processes" list in the SSMS activity monitor. If your ALTER instruction is waiting for something else to complete, there'll be a number in the "Blocked By" column indicating the Session ID of the command that's causing your problem.
That session may in turn be waiting on another and so forth. If you follow these references, you eventually find a process with a 1 in the "Head Blocker" column. From there you can decide whether the appropriate action is to kill the offending process, or just wait it out.
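If you prefer a query over the Activity Monitor UI, a rough equivalent using the blocking information exposed by sys.dm_exec_requests (a sketch; adjust the columns to taste):
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;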
Another approach:
1 - Recreate the object with all the constraints.
2 - Dump the data into it.
3 - Lock the original object.
4 - Switch the object names.
This way is the fastest if you want to optimize, re-index and avoid conflicts like the one mentioned in the other answer.

Triggering a timestamp update

For every INSERT, how do I populate my DateStamp field with the current datetime?
I've created an error output table for my SSIS task:
Here's the table:
CREATE TABLE [dbo].[gbs_CRMErrorOutput](
[ID] [uniqueidentifier] NULL,
[ErrorCode] [nvarchar](50) NULL,
[ErrorColumn] [nvarchar](500) NULL,
[CrmErrorMessage] [nvarchar](max) NULL,
[targetid] [uniqueidentifier] NULL,
[subordinateid] [uniqueidentifier] NULL,
[DateStamp] [datetime] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
Please note that I do not have an auto-increment or any key in the table.
I'm also wondering what would be a best practice for this?
Here is an example of using NOT NULL with a default. In your real table you may want to name your default constraint; if you define the constraint inline like this it will still get a name, but the name will be assigned automatically.
CREATE TABLE #MyTable
(
MyID INT IDENTITY NOT NULL,
SomeValue VARCHAR(10),
DateCreated DATETIME NOT NULL DEFAULT GETDATE()
)
INSERT #MyTable(SomeValue)
VALUES ('Value1')
--This next line just waits for 1 second.
--This will demonstrate multiple inserts at different times so you can see the values change
WAITFOR DELAY '00:00:01'
INSERT #MyTable(SomeValue)
VALUES ('Value2')
SELECT *
FROM #MyTable
DROP TABLE #MyTable
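If you want to control the constraint name yourself (as mentioned above for a real table), the same idea with an explicit name might look like this; the table name here is made up for the example:
CREATE TABLE dbo.CRMErrorOutputExample
(
    SomeValue VARCHAR(10),
    DateStamp DATETIME NOT NULL
        CONSTRAINT DF_CRMErrorOutputExample_DateStamp DEFAULT GETDATE()
)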
Two good options:
1) Create a DEFAULT CONSTRAINT on your table with GETDATE() specified for your column. Within SSIS, do not map any value to that column - leave it as Ignore. Make sure that Keep Nulls is not checked. Note that you might have to fiddle with the settings of your OLE DB Destination - uncheck Identity Insert if there's a problem. I've also seen cases where the column had to allow NULLs - that only affects certain scenarios.
2) Add a Derived Column transformation to your data flow, setting it up to add a new column to the flow. I usually use the System::StartTime variable here, so that all records inserted during a single ETL run will share the same inserted date, but you could just as easily use the SSIS function GETDATE().
Map the new column you just created to your OLE DB Destination.

How to maintain history of multiple tables in a single table without using CDC feature

Is it possible to consolidate the history of all the tables into a single table?
I tried to use the CDC feature provided by SQL Server 2012 Enterprise Edition, but it creates a copy of every table, which increases the number of tables in the database.
Is it also possible to track and insert the table name and column name in which DML has occurred into the history table? Will this cause any issues with performance?
Here is one solution using triggers.
1 - Create a trigger for each table that you want history on.
2 - Copy the modified data (INS, UPD, DEL) from base table to audit table during the action.
3 - Store all the data in XML format so that multiple tables can store data in the same audit table.
I did cover this in one of my blog articles. It is a great solution for auditing small amounts of data. There might be an overhead concern when dealing with thousands of record changes per second.
Please test before deploying to a production environment!
Here is the audit table that keeps track of the table name as well as the type of change.
/*
Create data level auditing - table.
*/
-- Remove table if it exists
IF EXISTS (SELECT * FROM sys.objects WHERE object_id =
OBJECT_ID(N'[ADT].[LOG_DML_CHANGES]') AND type in (N'U'))
DROP TABLE [ADT].[LOG_DML_CHANGES]
GO
CREATE TABLE [ADT].[LOG_DML_CHANGES]
(
[ChangeId] BIGINT IDENTITY(1,1) NOT NULL,
[ChangeDate] [datetime] NOT NULL,
[ChangeType] [varchar](20) NOT NULL,
[ChangeBy] [nvarchar](256) NOT NULL,
[AppName] [nvarchar](128) NOT NULL,
[HostName] [nvarchar](128) NOT NULL,
[SchemaName] [sysname] NOT NULL,
[ObjectName] [sysname] NOT NULL,
[XmlRecSet] [xml] NULL,
CONSTRAINT [pk_Ltc_ChangeId] PRIMARY KEY CLUSTERED ([ChangeId] ASC)
)
GO
Here is the article.
http://craftydba.com/?p=2060
The design pairs a single [LOG_DML_CHANGES] table with multiple [TRG_TRACK_DML_CHGS_XXX] triggers, one per audited table.
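To make the pattern concrete, here is a rough sketch of what one such trigger might look like; the [dbo].[Customer] table is made up for the example, and the full, tested version is in the article linked above:
CREATE TRIGGER [dbo].[TRG_TRACK_DML_CHGS_CUSTOMER]
ON [dbo].[Customer]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Work out which kind of change fired the trigger
    DECLARE @ChangeType varchar(20) =
        CASE
            WHEN EXISTS (SELECT * FROM inserted) AND EXISTS (SELECT * FROM deleted) THEN 'UPDATE'
            WHEN EXISTS (SELECT * FROM inserted) THEN 'INSERT'
            ELSE 'DELETE'
        END;

    -- Capture the affected rows as XML so any table can share the one audit table
    DECLARE @XmlRecSet xml;
    IF EXISTS (SELECT * FROM inserted)
        SET @XmlRecSet = (SELECT * FROM inserted FOR XML RAW('row'), ROOT('inserted'), TYPE);
    ELSE
        SET @XmlRecSet = (SELECT * FROM deleted FOR XML RAW('row'), ROOT('deleted'), TYPE);

    INSERT INTO [ADT].[LOG_DML_CHANGES]
        (ChangeDate, ChangeType, ChangeBy, AppName, HostName, SchemaName, ObjectName, XmlRecSet)
    VALUES
        (GETDATE(), @ChangeType, SUSER_SNAME(), APP_NAME(), HOST_NAME(), 'dbo', 'Customer', @XmlRecSet);
END
GO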
If you want more than a record that user x updated/deleted/inserted row z of table y at time t, then this approach will cause problems.
Choose the tables you want to audit, create Audit tables for them, and update them from triggers on the base tables. It's a lot of work, but it's the best way of doing it.