I found the following code somewhere on SO and slightly modified the variable names.
-- If data type differs, drop constraints and change data type.
WHILE 0=0
BEGIN
SET @ConstraintName = (SELECT TOP 1 constraint_name FROM information_schema.constraint_column_usage WHERE table_name = @TableName and column_name = @FieldName)
IF @ConstraintName IS NULL BREAK;
EXEC('ALTER TABLE ' + @TableName + ' DROP CONSTRAINT "' + @ConstraintName + '"');
END
EXEC('ALTER TABLE ' + @TableName + ' ALTER COLUMN ' + @FieldName + ' ' + @FieldType + ' NOT NULL');
Although it drops all the constraints found in the information schema, the subsequent execution still throws the error:
The object 'DF_settings_monitoring' is dependent on column 'monitoring'.
ALTER TABLE ALTER COLUMN monitoring failed because one or more objects access this column.
in some cases. Only some, not all.
I have found that the information_schema.constraint_column_usage view does not contain all default constraints, but I am unsure why.
Is there a more reliable source of information about existing constraints on a certain column?
As answered here, information_schema doesn't include default constraints.
So if you replace your select with this:
select top 1 name from sys.default_constraints
...
You should be good to go.
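To spell that out, a sketch of the full lookup (not tested against your schema) that could replace the SET inside the loop; it joins sys.columns to match the column by name:
SET @ConstraintName = (
    SELECT TOP 1 dc.name
    FROM sys.default_constraints dc
    JOIN sys.columns c ON c.object_id = dc.parent_object_id AND c.column_id = dc.parent_column_id
    WHERE dc.parent_object_id = OBJECT_ID(@TableName) AND c.name = @FieldName
)
Keeping the SET ... = (SELECT TOP 1 ...) shape means the variable still becomes NULL when nothing is left, so the existing BREAK keeps working.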
My problem: I have a big database and I want to remove a specific constraint between two tables, and I want to do it with a migration script written in Visual Studio. The database runs locally on my PC for development purposes, but it also runs on a staging server.
I could use the name of the constraint locally and it would work, but on the staging server the name is different. That's why I want to do it without knowing its name.
I've been reading a lot of posts here on Stack Overflow about the same issue, but none of them works for me.
I've built a small database to try different code on before I try it on the big DB.
It looks like this:
I've tried this:
SELECT *
FROM sys.foreign_keys
WHERE referenced_object_id = object_id('Genres')
SELECT
'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) +
'.[' + OBJECT_NAME(parent_object_id) +
'] DROP FOREIGN_KEY_CONSTRAINT ' + name
FROM sys.foreign_keys
WHERE referenced_object_id = object_id('Genres')
And this
DECLARE @SQL NVARCHAR(MAX) = N'';
SELECT @SQL += N'
ALTER TABLE ' + OBJECT_NAME(PARENT_OBJECT_ID) + ' DROP CONSTRAINT ' + OBJECT_NAME(OBJECT_ID) + ';'
FROM SYS.OBJECTS
WHERE TYPE_DESC LIKE '%CONSTRAINT' AND OBJECT_NAME(PARENT_OBJECT_ID) = 'Albums';
EXECUTE @SQL
In both cases it finds the constraint and tries to drop it, but it is still there.
It even prints out the name of the constraint I want to remove.
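For comparison, a sketch that both builds and actually executes the drop statement, using the valid DROP CONSTRAINT syntax (DROP FOREIGN_KEY_CONSTRAINT is not valid T-SQL), assuming the Genres table from the example above:
DECLARE @SQL NVARCHAR(MAX) = N'';
SELECT @SQL += N'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' + QUOTENAME(OBJECT_NAME(parent_object_id))
    + N' DROP CONSTRAINT ' + QUOTENAME(name) + N';'
FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('Genres');
EXEC sp_executesql @SQL;  -- the generated statement has to be executed, not just selected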
I want to DROP all the tables whose names start with a particular prefix. I wrote a script for this, but when I try to drop the tables I get a constraint error.
So I want an approach that drops each table in a loop without/ignoring relationships/foreign keys. Please suggest an optimized script to perform this operation. Please share your thoughts.
DECLARE @SqlStatement VARCHAR(MAX)
SET @SqlStatement = ''
PRINT 'Deleting Tables and Columns from the Agency Table schema'
SELECT @SqlStatement =
COALESCE(@SqlStatement, '') + 'DROP TABLE ['+@agencyName+'].' + QUOTENAME(TABLE_NAME) + ';' + CHAR(13)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = @agencyName;
PRINT @SqlStatement
--exec(@SqlStatement) -- This line fails with the foreign key constraint error. How do I achieve the functionality?
--DROP SCHEMA Agency3
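One way to get past the constraint error is to drop the schema's foreign keys before dropping its tables. A sketch (assuming @agencyName holds the schema name, as above):
DECLARE @DropFks VARCHAR(MAX) = '';
SELECT @DropFks = @DropFks + 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' + QUOTENAME(OBJECT_NAME(parent_object_id))
    + ' DROP CONSTRAINT ' + QUOTENAME(name) + ';' + CHAR(13)
FROM sys.foreign_keys
WHERE OBJECT_SCHEMA_NAME(parent_object_id) = @agencyName
   OR OBJECT_SCHEMA_NAME(referenced_object_id) = @agencyName;
EXEC (@DropFks);  -- run this before the DROP TABLE batch above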
Using SQL Server 2008, I've created a database where every table has a datetime column called "CreatedDt". What I'd like to do is create a trigger for each table so that when a value is inserted, the CreatedDt column is populated with the current date and time.
If you'll pardon my pseudocode, what I'm after is the T-SQL equivalent of:
foreach (Table in MyDatabase)
{
create trigger CreatedDtTrigger
{
on insert createddt = datetime.now;
}
}
If anyone would care to help out, I'd greatly appreciate it. Thanks!
As @EricZ says, the best thing to do is bind a default to the column. Here's how you'd add one to every table using a cursor and dynamic SQL:
declare @table sysname, @cmd nvarchar(max)
declare c cursor for
select name from sys.tables where is_ms_shipped = 0 order by name
open c; fetch next from c into @table
while @@fetch_status = 0
begin
set @cmd = 'ALTER TABLE ' + @table + ' ADD CONSTRAINT DF_' + @table + '_CreatedDt DEFAULT GETDATE() FOR CreatedDt'
exec sp_executesql @cmd
fetch next from c into @table
end
close c; deallocate c
No need for cursors. Just copy the result of the query below and execute it.
select distinct 'ALTER TABLE '+ t.name +
' ADD CONSTRAINT DF_'+t.name+'_crdt DEFAULT getdate() FOR '+ c.name
from sys.tables t
inner join sys.columns c on t.object_id=c.object_id
where c.name like '%your column name%'
Here's another method:
DECLARE @SQL nvarchar(max);
SELECT @SQL = Coalesce(@SQL + '
', '')
+ 'ALTER TABLE ' + QuoteName(T.TABLE_SCHEMA) + '.' + QuoteName(T.TABLE_NAME)
+ ' ADD CONSTRAINT ' + QuoteName('DF_'
+ CASE WHEN T.TABLE_SCHEMA <> 'dbo' THEN T.Table_Schema + '_' ELSE '' END
+ C.COLUMN_NAME) + ' DEFAULT (GetDate()) FOR ' + QuoteName(C.COLUMN_NAME)
+ ';'
FROM
INFORMATION_SCHEMA.TABLES T
INNER JOIN INFORMATION_SCHEMA.COLUMNS C
ON T.TABLE_SCHEMA = C.TABLE_SCHEMA
AND T.TABLE_NAME = C.TABLE_NAME
WHERE
C.COLUMN_NAME = 'CreatedDt'
;
EXEC (@SQL);
This yields, and runs, a series of statements similar to the following:
ALTER TABLE [schema].[TableName] -- (line break added)
ADD CONSTRAINT [DF_schema_TableName] DEFAULT (GetDate()) FOR [ColumnName];
Some notes:
This uses the INFORMATION_SCHEMA views. It is best practice to use these where possible instead of the system tables because they are guaranteed to not change between versions of SQL Server (and moreover are supported on many DBMSes, so all things being equal it's best to use standards-compliant/portable code).
In a database with a case-sensitive default collation, one MUST use upper case for the INFORMATION_SCHEMA view names and column names.
When creating script it's important to pay attention to schema names and proper escaping (using QuoteName). Not doing so will break in someone's system some day.
I think it is best practice to put the DEFAULT expression inside parentheses. No error is raised without them in this case, but with them, nothing will break if GetDate() is ever given parameters or changed to a more complex expression.
If you decide that column defaults are not going to work for you, then the triggers you imagined are still possible. But it will take some serious work: you have to manage whether each trigger already exists and alter or create it appropriately, JOIN to the inserted pseudo-table inside the trigger, and do so on the full list of primary key columns for the table (if there is no primary key, you're out of luck). It is quite possible, but difficult; you could end up with deeply nested dynamic SQL. I have such an automated object-creating script that contains 13 quote marks in a row...
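For completeness, a rough sketch of what one such trigger could look like for a single, hypothetical table dbo.MyTable with an integer primary key Id (the per-table generation and existence checks are left out):
CREATE TRIGGER trg_MyTable_CreatedDt
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- stamp the newly inserted rows by joining back on the primary key
    UPDATE t
    SET CreatedDt = GETDATE()
    FROM dbo.MyTable AS t
    JOIN inserted AS i ON i.Id = t.Id;
END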
I am trying to change data type from text to varchar in all of my tables at once.
This query
select *
from information_schema.columns
where data_type = 'text'
shows me all of my text columns, but how do I use this result to alter each column's type to varchar?
I would change all of your text, ntext and image datatypes to the newer varchar(max), nvarchar(max) and varbinary(max) datatypes:
select 'alter table ' + quotename(c.TABLE_SCHEMA) + '.' + quotename(c.TABLE_NAME)
+ ' alter column ' + quotename(c.COLUMN_NAME) + ' '
+ case c.DATA_TYPE when 'image' then 'varbinary(max)' when 'ntext' then 'nvarchar(max)' when 'text' then 'varchar(max)' end + ' '
+ case c.IS_NULLABLE when 'NO' then 'not' else '' end + ' null;' as SqlCommand
, *
from INFORMATION_SCHEMA.COLUMNS c
join INFORMATION_SCHEMA.TABLES t on c.TABLE_CATALOG = t.TABLE_CATALOG
and c.TABLE_SCHEMA = t.TABLE_SCHEMA
and c.TABLE_NAME = t.TABLE_NAME
where c.DATA_TYPE in ('image', 'ntext', 'text')
and t.TABLE_TYPE = 'BASE TABLE'
Simply copy and run the SqlCommand column as a batch.
This will handle nullability and filter out views.
I don't believe you can put any type of unique constraint on a text/ntext/image column, so you probably don't have to check for PK/FK/UK. You would, however, have to account for other types of constraints and defaults. If you have a lot of those, it might be easier to make these changes in SSMS diagram mode.
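For instance, a quick way to see which default constraints are bound to the affected columns (so they can be dropped first) might be something along these lines:
SELECT dc.name AS constraint_name,
       OBJECT_SCHEMA_NAME(dc.parent_object_id) AS schema_name,
       OBJECT_NAME(dc.parent_object_id) AS table_name,
       c.name AS column_name
FROM sys.default_constraints dc
JOIN sys.columns c ON c.object_id = dc.parent_object_id AND c.column_id = dc.parent_column_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN ('image', 'ntext', 'text');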
Copy these commands from query result and execute them:
select
cmd = 'alter table [' + c.table_schema + '].[' + c.table_name + '] alter column [' + c.column_name + '] varchar(<yoursize>)'
,*
from information_schema.columns c
where c.data_type='text'
But:
you need to join table information to select only base tables (the query above would also return views)
it won't let you alter some columns, e.g. those with PK/FK constraints
this select may also list tables that you don't want to / cannot modify, e.g. sysdiagrams
It's generally not a good idea to do it like this unless you limit it to a few tables whose structure you know exactly.
At first it may seem as simple as this:
Use a cursor loop over the select from information_schema.
Compose dynamic SQL for each column of interest that will change its datatype using an ALTER TABLE statement.
Execute the dynamic SQL.
However, you first need to drop any indexes on these columns, foreign key constraints, and so on.
Then, when you are done, you can recreate the constraints and indexes.
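A minimal sketch of that loop (leaving out the index/constraint handling just mentioned, and assuming everything becomes varchar(max)):
DECLARE @schema sysname, @table sysname, @column sysname, @sql nvarchar(max);
DECLARE col_cursor CURSOR FOR
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE DATA_TYPE = 'text';
OPEN col_cursor;
FETCH NEXT FROM col_cursor INTO @schema, @table, @column;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- compose and execute the dynamic ALTER TABLE for this column
    SET @sql = N'ALTER TABLE ' + QUOTENAME(@schema) + '.' + QUOTENAME(@table)
             + N' ALTER COLUMN ' + QUOTENAME(@column) + N' varchar(max)';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM col_cursor INTO @schema, @table, @column;
END
CLOSE col_cursor;
DEALLOCATE col_cursor;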
See this SO question:
Single SQL Query to update datatypes of all columns in a table at one shot
I want to find a SQL command or something that can do this: I have a table named tblFoo and I want to rename it to tblFooBar. However, I also want the primary key name to change; for example, currently it is:
CONSTRAINT [PK_tblFoo] PRIMARY KEY CLUSTERED
And I want the rename to change it to:
CONSTRAINT [PK_tblFooBar] PRIMARY KEY CLUSTERED
Then, recursively go through and cascade this change to all tables that have a foreign key relationship, e.g. from this:
CHECK ADD CONSTRAINT [FK_tblContent_tblFoo] FOREIGN KEY([fooID])
To this:
CHECK ADD CONSTRAINT [FK_tblContent_tblFooBar] FOREIGN KEY([fooID])
Naturally, I am trying not to go through and do this all manually because a) it is an error-prone process, and b) it doesn't scale.
This is just off the top of my head and isn't complete (you'd need to add similar code for indexes). You would also need to add code to avoid renaming objects belonging to a table with the same base name but additional characters; for example, this code would also list tblFoo2 and all of its associated objects. Hopefully it's a start for you, though.
DECLARE
@old_name VARCHAR(100),
@new_name VARCHAR(100)
SET @old_name = 'tblFoo'
SET @new_name = 'tblFooBar'
SELECT
'EXEC sp_rename ''' + name + ''', ''' + REPLACE(name, @old_name, @new_name) + ''''
FROM dbo.sysobjects
WHERE name LIKE '%' + @old_name + '%'
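Run against the example in the question, the generated output would be along these lines (the query only prints the statements; you still have to copy and execute them):
EXEC sp_rename 'tblFoo', 'tblFooBar'
EXEC sp_rename 'PK_tblFoo', 'PK_tblFooBar'
EXEC sp_rename 'FK_tblContent_tblFoo', 'FK_tblContent_tblFooBar'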
Good answer by Tom. I've just extended his query here to include indexes:
declare
@old nvarchar(100),
@new nvarchar(100)
set @old = 'OldName'
set @new = 'NewName'
select 'EXEC sp_rename ''' + name + ''', ''' +
REPLACE(name, @old, @new) + ''''
from sys.objects
where name like '%' + @old + '%'
union -- index renames
select 'EXEC sp_rename ''' + (sys.objects.name + '.' + sys.indexes.name) + ''', ''' +
REPLACE(sys.indexes.name, @old, @new) + ''', ''INDEX'''
from sys.objects
left join sys.indexes on sys.objects.object_id = sys.indexes.object_id
where sys.indexes.name like '%' + @old + '%'
A great tool that takes the pain out of renaming tables is Red Gate SQL Refactor
It will automatically find your dependencies and work all of that out for you too.
Big fan :-)
SQL Server won't do this directly as far as I am aware. You would have to manually build the script to do the change. This can be achieved by generating the SQL for the table definition (SSMS will do this) and doing a search and replace on the names.