I have around 500 tables in my database, and each table has at least 100 columns. Five of us work on the same database, so whenever a requirement arises, a new column or table is added. I keep a record of every change I make, but my colleagues don't, so I now have a problem: I don't know which columns others have added to existing tables, or which new tables have been created.
Can anybody tell me whether it is possible to find out if a new column has been added to an existing table and, if so, what the column name is?
Maybe this query will help you:
SELECT
    t.name AS table_name,
    SCHEMA_NAME(t.schema_id) AS schema_name,
    c.name AS column_name,
    t.modify_date, t.create_date
FROM
    sys.tables AS t
INNER JOIN
    sys.columns AS c ON t.object_id = c.object_id
ORDER BY
    t.modify_date DESC
EDIT
To audit this, you have to use a DDL trigger.
Step 1: Create a new audit table
CREATE TABLE DDLAudit
(
    PostTime datetime, DatabaseName varchar(256), Event nvarchar(100),
    ObjectName varchar(256), TSQL nvarchar(2000), Login varchar(256)
)
Step 2: Create the DDL trigger
CREATE TRIGGER AuditChanges
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
DECLARE @ed XML
SET @ed = EVENTDATA()
INSERT INTO DDLAudit (PostTime, DatabaseName, Event, ObjectName, TSQL, Login)
VALUES
(
    GETDATE(),
    @ed.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(256)'),
    @ed.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
    @ed.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)'),
    @ed.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'),
    @ed.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(256)')
)
Now every change will be logged in DDLAudit, and you can filter on the PostTime column to narrow the results to a date range.
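For example, to review everything captured in the last day (a sketch assuming the DDLAudit table created above):

```sql
-- Review all DDL changes captured in the last 24 hours
SELECT PostTime, Event, ObjectName, TSQL, Login
FROM DDLAudit
WHERE PostTime >= DATEADD(DAY, -1, GETDATE())
ORDER BY PostTime DESC
```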
Using the queries below, you can find the tables that were altered recently.
Query to find when a table was last altered:
SELECT * FROM sys.tables
ORDER BY modify_date DESC
Query to find which columns were altered:
SELECT TOP (SELECT COUNT(DISTINCT TransactionID)
            FROM ::fn_trace_gettable(
                LEFT((SELECT path FROM sys.traces WHERE is_default = 1),
                     LEN((SELECT path FROM sys.traces WHERE is_default = 1))
                     - PATINDEX('%\%', REVERSE((SELECT path FROM sys.traces WHERE is_default = 1)))) + '\log.trc',
                DEFAULT)
            WHERE EventClass IN (46, 47, 164) AND EventSubclass = 0
              AND DatabaseID <> 2
              AND ObjectName = 'table1' AND StartTime > '2015-01-10 00:00:00')
       [name], [colorder]
FROM [sys].[syscolumns]
WHERE id = (SELECT object_id FROM sys.tables WHERE name = 'table1')
ORDER BY colorder DESC
Note: this query will not work if a column was dropped, or if multiple columns of the table were altered through the SQL Server UI, but it will keep track of multiple alterations made in the same query.
A dropped column can be identified through colorder: you will find a gap in the order, but you will not be able to see the dropped column's information.
If you provide the table name and the date/time, the query returns the columns that were altered, in order.
If it doesn't return any rows, no change was made to the table.
Related
I have a main table to which records are added in real time. I want to fetch all records that have been added, as well as existing records that have been altered or changed.
How can I achieve this?
You can use two commonly used approaches:
Track changes in another table through a trigger.
It should look something like this:
CREATE TABLE Tracking (
    ID INT,
    -- Your original table columns
    TrackDate DATETIME DEFAULT GETDATE(),
    TrackOperation VARCHAR(100))
GO
CREATE TRIGGER TrackingTrigger ON OriginalTable AFTER UPDATE, INSERT, DELETE
AS
BEGIN
    INSERT INTO Tracking (
        ID,
        TrackOperation
        -- Other columns
    )
    SELECT
        ID = ISNULL(I.ID, D.ID),
        TrackOperation = CASE
            WHEN I.ID IS NOT NULL AND D.ID IS NOT NULL THEN 'Update'
            WHEN I.ID IS NOT NULL THEN 'Insert'
            ELSE 'Delete' END
        -- Other columns
    FROM
        inserted AS I
        FULL JOIN deleted AS D ON I.ID = D.ID -- ID is the primary key
END
GO
Include CreatedDate, ModifiedDate and IsDeleted columns in your table. CreatedDate should default to the current date, ModifiedDate should be updated each time the row is updated, and IsDeleted should be flagged on deletion (instead of actually deleting the row). This option requires a lot more handling than the previous one, and you won't be able to track consecutive updates.
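A minimal sketch of this second approach (table and column names are illustrative, not from the original post):

```sql
-- Illustrative table carrying its own tracking columns
CREATE TABLE Customer
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Name VARCHAR(100),
    CreatedDate DATETIME NOT NULL DEFAULT GETDATE(),
    ModifiedDate DATETIME NULL,
    IsDeleted BIT NOT NULL DEFAULT 0
)
GO

-- Every update must remember to stamp ModifiedDate...
UPDATE Customer SET Name = 'New name', ModifiedDate = GETDATE() WHERE ID = 1

-- ...and deletes become soft deletes
UPDATE Customer SET IsDeleted = 1, ModifiedDate = GETDATE() WHERE ID = 1
```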
You have to look your table up in sys.objects first and grab its object id before querying the usage-stats DMV.
declare @objectid int
select @objectid = object_id from sys.objects where name = 'YOURTABLENAME'

select top 1 * from sys.dm_db_index_usage_stats
where object_id = @objectid and last_user_update is not null
order by last_user_update desc
If you have an identity column in your table, you can find the last inserted row through a SQL query. For that, we have several options:
@@IDENTITY
SCOPE_IDENTITY()
IDENT_CURRENT('table_name')
All three functions return last-generated identity values. However, they differ in the scope and session over which "last" is defined.
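A sketch of the difference (TableA and TableB are hypothetical; assume an INSERT trigger on TableA writes a row into TableB, and both tables have identity columns):

```sql
INSERT INTO TableA (SomeColumn) VALUES ('x')

SELECT SCOPE_IDENTITY()         -- identity generated in the current scope: TableA's new value
SELECT @@IDENTITY               -- last identity in the current session, any scope:
                                -- TableB's value, because the trigger fired last
SELECT IDENT_CURRENT('TableA')  -- last identity for TableA, regardless of session or scope
```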
Hello, good day everyone.
I have a table in SQL Server that looks like this:
NAME AGE WORK BIRTH
TEST 21 NONE 12/12/2000
On this table, I have created a trigger that fires on update and saves the data to an audit log table. The audit log table holds the values being updated and the columns that were updated. I have a query snippet here that gets the updated columns; I also got this query here on Stack Overflow.
DECLARE @idTable INT

SELECT @idTable = T.id
FROM sysobjects P JOIN sysobjects T ON P.parent_obj = T.id
WHERE P.id = @@PROCID

-- Get COLUMNS_UPDATED if update
DECLARE @Columns_Updated VARCHAR(50)

SELECT @Columns_Updated = ISNULL(@Columns_Updated + ', ', '') + name
FROM syscolumns
WHERE id = @idTable
AND CONVERT(VARBINARY, REVERSE(COLUMNS_UPDATED())) & POWER(CONVERT(BIGINT, 2), colorder - 1) > 0
Now my audit log table looks like this:
OLD NEW COLUMNS_UPDATED
tEST,21,NONE TEST2,20,TEACHER AGE,NAME,WORK
My problem is how to sort the updated columns so they follow the order of the table design. My preferred output looks like this:
OLD NEW COLUMNS_UPDATED
tEST,21,NONE TEST2,20,TEACHER NAME,AGE,WORK
I hope someone can help me with this.
Thanks.
Try:
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'myTable'
ORDER BY ORDINAL_POSITION
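Applied to the trigger snippet from the question, you could build the list from INFORMATION_SCHEMA.COLUMNS so it comes out in table order (a sketch; table and variable names follow the question):

```sql
DECLARE @Columns_Updated VARCHAR(500)

SELECT @Columns_Updated = ISNULL(@Columns_Updated + ', ', '') + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'myTable'
  -- filter to the columns flagged by COLUMNS_UPDATED(), as in the question's trigger
ORDER BY ORDINAL_POSITION
```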
I am working on a script that needs to be run on many different SQL Server instances. Some of them share the same structure; in other words, they are identical, but the filegroups and database names differ, because there is one per client.
Anyway, when running the script, if I choose the wrong database, it should not execute; I am trying to maintain a clean DB. Here is my example, which only works for dropping a view if it exists, but does not work for creating a new one. I also wonder how it would look for creating a stored procedure.
IF EXISTS (SELECT *
           FROM dbo.sysobjects
           WHERE id = object_id(N'[dbo].[ContentModDate]')
           AND OBJECTPROPERTY(id, N'IsView') = 1)
   AND CHARINDEX('Content', DB_NAME()) > 0
    DROP VIEW [dbo].[ContentModDate]
GO

IF (CHARINDEX('Content', DB_NAME()) > 0)
BEGIN
    CREATE VIEW [dbo].[Rx_ContentModDate] AS
    SELECT 'Table1' AS TableName, MAX(ModDate) AS ModDate
    FROM Table1 WHERE ModDate IS NOT NULL
    UNION
    SELECT 'Table2', MAX(ModDate) AS ModDate
    FROM Table2 WHERE ModDate IS NOT NULL
END
GO
Exactly the same for a stored proc.
I'd also do it like this, because the code above won't work: CREATE xxxx must usually be the first statement in a batch. And your code will also match databases called "ContentFoo".
IF OBJECT_ID('dbo.myView') IS NOT NULL AND DB_NAME() = 'Content'
DROP VIEW [dbo].[ContentModDate]
GO
IF DB_NAME() = 'Content'
EXEC ('
CREATE VIEW [dbo].[Rx_ContentModDate] AS
SELECT ''Table1'' AS TableName, MAX(ModDate) AS ModDate
FROM Table1 WHERE ModDate IS NOT NULL
UNION
SELECT ''Table2'', MAX(ModDate) AS ModDate
FROM Table2 WHERE ModDate IS NOT NULL
')
Note: is the view name meant to be different?
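For completeness, the stored-procedure version would follow the same EXEC pattern (a sketch; the procedure name and body are illustrative):

```sql
IF OBJECT_ID('dbo.ContentProc', 'P') IS NOT NULL AND DB_NAME() = 'Content'
    DROP PROCEDURE [dbo].[ContentProc]
GO
IF DB_NAME() = 'Content'
    EXEC ('
        CREATE PROCEDURE [dbo].[ContentProc]
        AS
        BEGIN
            SELECT ''Table1'' AS TableName, MAX(ModDate) AS ModDate
            FROM Table1 WHERE ModDate IS NOT NULL
        END
    ')
```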
I have a SQL script that populates a temp column and then drops the column at the end of the script. The first time it runs, it works fine because the column exists, then it gets dropped. The script breaks the 2nd time because the column no longer exists, even though the IF statement ensures that it won't run again. How do I get around SQL checking for this field?
IF EXISTS (SELECT name FROM syscolumns
WHERE name = 'COLUMN_THAT_NO_LONGER_EXISTS')
BEGIN
INSERT INTO TABLE1
(
COLUMN_THAT_NO_LONGER_EXISTS,
COLUMN_B,
COLUMN_C
)
SELECT 1,2,3 FROM TABLE2
ALTER TABLE TABLE1 DROP COLUMN COLUMN_THAT_NO_LONGER_EXISTS
END
I had a similar problem once and got round it by building all the queries as strings and executing them using the Exec() call. That way the queries (selects, inserts, or whatever) don't get parsed until they are executed.
It wasn't pretty or elegant, though.
e.g.
exec('INSERT INTO TABLE1(COLUMN_THAT_NO_LONGER_EXISTS,COLUMN_B,COLUMN_C) SELECT 1,2,3 FROM TABLE2')
Are you checking that the column isn't on another table? If not, you probably need to check the table too; see the IF statement below.
If you are already doing that, is it running in a single transaction and not picking up that the dropped column has gone?
IF Not EXISTS (SELECT name FROM sys.columns
WHERE name = 'COLUMN_THAT_NO_LONGER_EXISTS' and Object_Name(object_id) = 'Table1')
I created a quick test script for this; can you confirm it matches what you are trying to do? In SQL Server 2008 at least, this isn't returning an error. If I create the table and run through with the ALTER TABLE to add TestColC, it works; if I then run the IF / INSERT, that works even after dropping the column.
create table tblTests
(
    TestID int identity(1,1),
    TestColA int null,
    TestColB int null
)
go -- Ran this on its own

-- Insert some initial data
insert into tblTests (TestColA, TestColB)
select 1, 2
go 10

-- Alter the table to add the new column
alter table tblTests
add TestColC int
go

-- Run this with the column present, and then again after it has been removed
IF EXISTS (SELECT name FROM sys.columns a
           WHERE name = 'TestColC' AND
           OBJECT_NAME(object_id) = 'tblTests')
Begin
    insert into tblTests (TestColA, TestColB, TestColC)
    select 1, 2, 3

    alter table tblTests
    drop column TestColC
End
Is there a special way to declare a DateCreated column in an MS SQL Server table so that it will automatically be filled with the appropriate timestamp on creation?
Or do I have to provide the datetime manually when I do the insert?
Default values suffer from two major drawbacks:
if the insert statement specifies a value for the column, the default isn't used;
the column can be updated at any time.
These mean that you can't be certain that the values haven't been modified outside of your control.
If you want true data integrity (so that you're sure the date in the row is the creation date), you need to use triggers.
An insert trigger to set the column to the current date and an update trigger to prevent changes to that column (or, more precisely, set it to its current value) are the way to implement a DateCreated column.
An insert and update trigger to set the column to the current date is the way to implement a DateModified column.
(edit from user Gabriel - here's my attempt to implement this as described. I'm not 100% sure it's correct, but I'm hoping the OP reviews it):
CREATE TRIGGER [dbo].[tr_Affiliate_IU]
ON [dbo].[Affiliate]
AFTER INSERT, UPDATE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Get the current date.
    DECLARE @getDate DATETIME = GETDATE()

    -- Set the initial values of date_created and date_modified
    -- (inserted rows only: no matching DELETED row).
    UPDATE
        dbo.Affiliate
    SET
        date_created = @getDate
    FROM
        dbo.Affiliate A
        INNER JOIN INSERTED I ON A.id = I.id
        LEFT OUTER JOIN DELETED D ON I.id = D.id
    WHERE
        D.id IS NULL

    -- Ensure the value of date_created never changes.
    -- Update the value of date_modified to the current date.
    UPDATE
        dbo.Affiliate
    SET
        date_created = D.date_created
        ,date_modified = @getDate
    FROM
        dbo.Affiliate A
        INNER JOIN INSERTED I ON A.id = I.id
        INNER JOIN DELETED D ON I.id = D.id
END
You can set the default value of the column to "getdate()"
We have a DEFAULT on CreatedDate and don't enforce it with triggers.
There are times when we want to set the date explicitly - e.g. if we import data from some other source.
There is a risk that an application bug could mess with CreateDate, or a disgruntled DBA for that matter (we don't have non-DBAs connecting directly to our DBs).
I suppose you might set column-level permissions on CreateDate.
A half-way house might be to have an INSERT trigger create a row in a 1:1 table, so that the column lives outside the main table. The second table could have SELECT permissions where the main table has UPDATE permissions, and thus not need an UPDATE trigger to prevent changes to CreateDate - which would remove some "weight" when updating rows normally.
I suppose you could have an UPDATE/DELETE trigger on the second table to prevent changes (it would never be executed in normal circumstances, so it's "lightweight").
It's a bit of a pain to have the extra table, though ... you could have one table for all CreateDates - TableName, PK, CreateDate. Most database architects will hate that, though ...
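A rough sketch of that single shared table and an insert trigger feeding it (all names are illustrative):

```sql
CREATE TABLE CreateDates
(
    TableName SYSNAME,
    PK INT,
    CreateDate DATETIME NOT NULL DEFAULT GETDATE(),
    PRIMARY KEY (TableName, PK)
)
GO

CREATE TRIGGER tr_MainTable_I ON MainTable AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Record the creation date outside the main table,
    -- so no UPDATE trigger is needed on MainTable itself
    INSERT INTO CreateDates (TableName, PK)
    SELECT 'MainTable', ID FROM inserted
END
```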
Certainly is.
Here is an example in action for you.
CREATE TABLE #TableName
(
    ID INT IDENTITY(1,1) PRIMARY KEY,
    CreatedDate DATETIME NOT NULL DEFAULT GETDATE(),
    SomeData VARCHAR(100)
)

INSERT INTO #TableName (SomeData)
SELECT 'Some data one' UNION ALL SELECT 'some data two'

SELECT * FROM #TableName

DROP TABLE #TableName
Setting the default value isn't enough; you should add a trigger to prevent updates:
CREATE TRIGGER UpdateRecord ON my_table
AFTER UPDATE AS
UPDATE t
SET CreatedDate = d.CreatedDate
FROM my_table t
INNER JOIN deleted d ON t.id = d.id