SQL Server -- updating the `sys.*` tables and not just reading them

In an attempt to run the query
UPDATE sys.columns
SET user_type_id = 106
WHERE object_id in (select object_id from sys.objects where type = 'U') and user_type_id = 108
I'm getting the error:
Msg 259, Level 16, State 1, Line 1
Ad hoc updates to system catalogs are not allowed.
Is there a way to get around this? In this case, I'm looking to change the types of all decimal fields of all the tables in the database.
I can do this "externally" -- without directly tampering with the sys.* tables (though I haven't yet pinned down how) -- but I'm looking to know whether I can update the sys.* tables at all, and if so, which ones, and when/how?
EDIT:
Would I be able to get any "deeper" than ALTER TABLE... if I had full privileges for DB access?
I'm not sure what kind of privileges I have now, but I'll look into it.

These tables are informational only. To make this clear: the sys.* and INFORMATION_SCHEMA.* views exist to present schema information from the database engine in a useful format. They do not represent the actual schema of the database*, so modifying them is impossible. The only way to change your schema is to use DDL (Data Definition Language) statements, such as ALTER TABLE.
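For a single column, that DDL fix looks like this (the table, column, and target type here are placeholders, not taken from the question):
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn DECIMAL(12, 2);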
In your case, you can use a cursor to iterate through all columns with the wrong type, generate SQL statements to correct that, and execute them dynamically. Here's a skeleton of how that would look:
DECLARE column_cursor CURSOR FOR
    SELECT schemas.name AS schema_name,
           objects.name AS table_name,
           columns.name AS column_name
    FROM sys.columns
    JOIN sys.objects
        ON objects.object_id = columns.object_id
    JOIN sys.schemas
        ON schemas.schema_id = objects.schema_id
    WHERE objects.type = 'U'
        AND columns.user_type_id = 108

DECLARE @schema_name VARCHAR(255)
DECLARE @table_name VARCHAR(255)
DECLARE @column_name VARCHAR(255)

OPEN column_cursor
FETCH NEXT FROM column_cursor INTO @schema_name, @table_name, @column_name
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @sql VARCHAR(MAX)
    -- TODO: modify to change to the actual type, scale and precision you want;
    -- you may also need to adjust for NOT NULL constraints, default constraints
    -- and foreign keys (all exercises for the reader)
    SET @sql = 'ALTER TABLE ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' ALTER COLUMN ' + QUOTENAME(@column_name) + ' DECIMAL(12, 2)'
    EXEC(@sql)
    FETCH NEXT FROM column_cursor INTO @schema_name, @table_name, @column_name
END
CLOSE column_cursor
DEALLOCATE column_cursor
Because of the potential increase in complexity for dealing with constraints and keys, I'd recommend either updating the columns manually, building the ALTER TABLE statements manually, dumping your schema to script, updating that and recreating the tables and objects, or looking for a 3rd party tool that does this kind of thing (I don't know of any).
*For the sys.* views, at least, it's possible that they closely represent the underlying data structures, though I think there's still some abstraction. INFORMATION_SCHEMA is ANSI-defined, so it is unlikely to match the internal structures of any database system out there.

Related

Drop all objects in SQL Server database that belong to different schemas?

Is there a way to drop all objects in a db, with the objects belonging to two different schemas?
I had previously been working with one schema, so I queried all objects using:
Select * From sysobjects Where type=...
then dropped everything using
Drop Table ...
Now that I have introduced another schema, every time I try to drop an object it says either that I don't have permission or that the object does not exist. BUT, if I prefix the object with the schema ([schema].[object]) it works. I don't know how to automate this, because I don't know which objects exist or which of the two schemas each object belongs to. Does anyone know how to drop all objects inside a db, regardless of which schema they belong to?
(The user is the owner of both schemas, and the objects in the DB were created by -- and are being removed by -- that same user. It works if I use the prefix, i.e. Drop Table Schema1.blah.)
Use sys.objects in combination with OBJECT_SCHEMA_NAME to build your DROP TABLE statements, review, then copy/paste to execute:
SELECT 'DROP TABLE ' +
QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
QUOTENAME(name) + ';'
FROM sys.objects
WHERE type_desc = 'USER_TABLE';
Or use sys.tables to avoid the need for the type_desc filter:
SELECT 'DROP TABLE ' +
QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
QUOTENAME(name) + ';'
FROM sys.tables;
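If you'd rather not copy/paste, the same idea can be automated (a sketch, assuming SQL Server 2008 or later; note that foreign key dependencies can make some drops fail unless you order or repeat them) by concatenating the generated statements and executing the batch:
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'DROP TABLE '
    + QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + N'.'
    + QUOTENAME(name) + N';' + CHAR(13) + CHAR(10)
FROM sys.tables;
PRINT @sql;                  -- review the generated script first
--EXEC sp_executesql @sql;   -- then uncomment and run once you're happy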
Neither of the other answers seems to have addressed the all objects part of the question.
I'm amazed you have to roll your own with this - I expected there to be a drop schema blah cascade. Surely every single person who sets up a dev server will have to do this and having to do some meta-programming before being able to do normal programming is seriously horrible. Anyway... rant over!
I started looking at articles about doing this by clearing out a schema: there's an old article about it, but the tables it mentions are now marked as deprecated. I've also looked at the documentation for the new tables to help understand what is going on here.
There's another answer and a great dynamic sql resource it links to.
After looking at all this stuff for a while it just all seemed a bit too messy.
I think the better option is to go for
ALTER DATABASE blah SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE blah
CREATE DATABASE blah
instead. The extra incantation at the top is basically to force drop the database as mentioned here
It feels a bit wrong but the amount of complexity involved in writing the drop script is a good reason to avoid it I think.
If there seem to be problems with dropping the database I might revisit some of the links and post another answer
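One wrinkle worth noting with this approach: you can't drop the database you're currently connected to, so the sequence generally needs to run from master, something like (blah is a placeholder name):
USE master;
GO
ALTER DATABASE blah SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DROP DATABASE blah;
GO
CREATE DATABASE blah;
GO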
Try this with SQL 2012 or above;
this script may help to delete all objects in a selected schema.
Note: the script below targets the dbo schema for all objects, but you can change the schema in the very first line (@MySchemaName).
DECLARE @MySchemaName VARCHAR(50)='dbo', @sql VARCHAR(MAX)='';
DECLARE @SchemaName VARCHAR(255), @ObjectName VARCHAR(255), @ObjectType VARCHAR(255), @ObjectDesc VARCHAR(255), @Category INT;

DECLARE cur CURSOR FOR
SELECT (s.name)SchemaName, (o.name)ObjectName, (o.type)ObjectType,(o.type_desc)ObjectDesc,(so.category)Category
FROM sys.objects o
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN sysobjects so ON so.name=o.name
WHERE s.name = @MySchemaName
AND so.category=0
AND o.type IN ('P','PC','U','V','FN','IF','TF','FS','FT','PK','TT')

OPEN cur
FETCH NEXT FROM cur INTO @SchemaName,@ObjectName,@ObjectType,@ObjectDesc,@Category
SET @sql='';
WHILE @@FETCH_STATUS = 0 BEGIN
    IF @ObjectType IN('FN', 'IF', 'TF', 'FS', 'FT') SET @sql=@sql+'Drop Function '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    IF @ObjectType IN('V') SET @sql=@sql+'Drop View '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    IF @ObjectType IN('P') SET @sql=@sql+'Drop Procedure '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    IF @ObjectType IN('U') SET @sql=@sql+'Drop Table '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    --PRINT @ObjectName + ' | ' + @ObjectType
    FETCH NEXT FROM cur INTO @SchemaName,@ObjectName,@ObjectType,@ObjectDesc,@Category
END
CLOSE cur;
DEALLOCATE cur;

SET @sql=@sql+CASE WHEN LEN(@sql)>0 THEN 'Drop Schema '+@MySchemaName+CHAR(13) ELSE '' END
PRINT @sql
EXECUTE (@sql)
I do not know which version of SQL Server you are using, but assuming it is 2008 or later, the following command may be very useful (note that it drops ALL TABLES in one simple line):
sp_MSforeachtable "USE DATABASE_NAME DROP TABLE ?"
This will execute DROP TABLE ... for every table in database DATABASE_NAME. It is very simple and works perfectly. The same command can be used to execute other SQL instructions, for example:
sp_MSforeachtable "USE DATABASE_NAME SELECT * FROM ?"
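Since sp_MSforeachtable is undocumented, a dry run first is a reasonable precaution; the ? placeholder expands to the quoted table name, so you can list exactly what would be dropped before running the destructive version:
sp_MSforeachtable "USE DATABASE_NAME PRINT '?'"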

Sql Server change fill factor value for all indexes by tsql

I have to export my DB into a bacpac file to import it into Azure.
When I try to export, I get an error because some indexes have a FILLFACTOR value.
I've found how to set a FILLFACTOR value for all indexes, but I can't specify 0; the value has to be between 1 and 100. If I change the value in Management Studio, I can set it to 0.
The problem is that I have lots of indexes to change, and I would like to change the FILLFACTOR value for all of them through T-SQL.
Any ideas?
Thanks.
something simpler for all tables in a single database:
select 'ALTER INDEX ALL ON '
+ quotename(s.name) + '.' + quotename(o.name) + ' REBUILD WITH (FILLFACTOR = 99)'
from sys.objects o
inner join sys.schemas s on o.schema_id = s.schema_id
where type='u' and is_ms_shipped=0
generates statements you can then copy & execute.
This isn't a straight T-SQL way of doing it, though it does generate a pure T-SQL solution that you can apply to your DB.
Your results may vary depending on your DB... For example, poor referential integrity might make this a bit trickier...
Also, this comes with a DO AT YOUR OWN RISK disclaimer :-)
Get the DB you want to migrate into an SSDT project
http://msdn.microsoft.com/en-us/library/azure/jj156163.aspx
http://blogs.msdn.com/b/ssdt/archive/2012/04/19/migrating-a-database-to-sql-azure-using-ssdt.aspx
This is a nice way to migrate any schema to Azure regardless... It's way better than just creating a bacpac file... fixing... exporting... fixing... etc... So I would recommend doing this anytime you want to migrate a DB to Azure.
For the FILLFACTOR fixes I just used a find and replace to remove all the FILLFACTORS from the generated schema files... Luckily the DB I was using had them all set to 90 so it was fairly easy to do a solution wide find and replace (CTRL-SHIFT-F)... If yours vary then you can probably use the RegEx find features of Visual Studio to find all the fillfactors and just remove them from the indexes.
I'm not that great at RegEx but I think this works
WITH \((.)*FILLFACTOR(.)*\)
At this point you'll have to fix any additional exceptions around Azure compliance... The links provided describe how to go about doing this.
Now you're at the point where you have an SSDT project that's AZURE SQL compliant.
Here comes the DO AT YOUR OWN RISK part:
I used these scripts to remove all FK, PK, and Unique Constraints from the DB.
while(exists(select 1 from INFORMATION_SCHEMA.TABLE_CONSTRAINTS where CONSTRAINT_TYPE IN ('FOREIGN KEY', 'PRIMARY KEY', 'UNIQUE')))
begin
    declare @sql nvarchar(2000)
    SELECT TOP 1 @sql=('ALTER TABLE ' + TABLE_SCHEMA + '.[' + TABLE_NAME
        + '] DROP CONSTRAINT [' + CONSTRAINT_NAME + ']')
    FROM information_schema.table_constraints
    WHERE CONSTRAINT_TYPE IN ('FOREIGN KEY', 'PRIMARY KEY', 'UNIQUE')
    exec (@sql)
end

declare @qry nvarchar(max);
select @qry =
    (SELECT 'DROP INDEX [' + ix.name + '] ON [' + OBJECT_NAME(ID) + ']; '
     FROM sysindexes ix
     WHERE ix.Name IS NOT null and ix.OrigFillFactor <> 0
     for xml path(''));
exec sp_executesql @qry
I do this because AFAIK the only way to completely remove the fill factor option is to drop and re-create the index. This comes with a cascading set of issues :-/ PK's with fill factors need the FK's dropped etc.... There's probably a smarter way to do this so you don't remove ALL FK's and PK's and you look at the dependency trees...
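Before dropping anything, it can help to scope the work; a quick query against sys.indexes (a sketch; a fill_factor of 0 means the server default) lists the indexes that carry an explicit fill factor:
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       i.fill_factor
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE i.fill_factor <> 0      -- 0 = server default, i.e. nothing to fix
  AND o.is_ms_shipped = 0;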
Now go back to your Azure Compliant SSDT project and do a SCHEMA COMPARISON of that project against your DB... This will create a script that recreates all your FK's, PK's, and Unique Constraints (without the Fill Factor).... At this point you can just click "update" or you can click the button just to the right of update which will generate the script you can use... So now armed with
the script above to remove FKs, Pks, and Unique.
The script created by SSDT
Ample testing and review of said scripts to ensure nothing was missed
You should be able to update your current DB to an Azure compliant SCHEMA
Additional Thoughts:
In my case the fill factors on the Production DB weren't really doing anything useful. They were just created as a default thing to do. In your case the fill factors might be important so don't just remove them all on your non Azure Production box without knowing the consequences.
There's additional things to consider when doing this to a production system... For example this might cause some mirroring delays and it might cause your log files to grow in a way you aren't anticipating. Which both only really matter if you're applying directly to production...
It'd be nice if setting them all to FILL FACTOR 100 worked :-/
There's 3rd party tools out there (so I've heard) that you can use to migrate to Azure...
Another option is to use
https://sqlazuremw.codeplex.com/
Use that to create a SCHEMA that's Azure compliant and then it uses BCP to copy all the data.
BUT if you want to make your current SCHEMA Azure compliant so you can create a bacpac file to upload into Azure this worked for me the one time I've had to do it.
EDIT:
Azure V12 supports fill factors
SQL Azure apparently does not support FILLFACTOR:
"SQL Azure Database does not support specifying FILLFACTOR with the
CREATE INDEX statement. If we create indexes in a SQL Azure database,
we will find that the index fillfactor values are all 0."
You would have to remove all FILLFACTOR statements from the CREATE INDEX scripts. Likewise, SORT_IN_TEMPDB and DATA_COMPRESSION and several other options are also not supported.
A full list of supported keywords in SQL Azure can be found here.
Update: SQL Azure V12 (introduced in 2015) does support FILLFACTOR. See here.
I found a very useful script here that does the job of assigning a new value to all indexes and rebuilding them. As long as you are not afraid of using dynamic T-SQL, you might find it useful for your task and environment; just set the values appropriately.
(I didn't find the license information on the original page so I copy the script here)
DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT

SET @fillfactor = 90

DECLARE DatabaseCursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','msdb','tempdb','model','distribution')
ORDER BY 1

OPEN DatabaseCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' +
    table_name + '']'' as tableName FROM [' + @Database + '].INFORMATION_SCHEMA.TABLES
    WHERE table_type = ''BASE TABLE'''

    -- create table cursor
    EXEC (@cmd)
    OPEN TableCursor

    FETCH NEXT FROM TableCursor INTO @Table
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF (@@MICROSOFTVERSION / POWER(2, 24) >= 9)
        BEGIN
            -- SQL 2005 or higher command
            SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
            EXEC (@cmd)
        END
        ELSE
        BEGIN
            -- SQL 2000 command
            DBCC DBREINDEX(@Table,' ',@fillfactor)
        END

        FETCH NEXT FROM TableCursor INTO @Table
    END

    CLOSE TableCursor
    DEALLOCATE TableCursor

    FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor
It seems you want to use the server default fill factor (0), which omits the FILLFACTOR clause from the creation scripts. There is no way to do this by just rebuilding the index; you must drop and re-create it (see here). There doesn't seem to be a clean way of doing this, though it's kind of a moot point now.
ALTER INDEX yourindex ON table.column
REBUILD WITH (FILLFACTOR = 0);
does the job. 0 is equal to 100 (see http://msdn.microsoft.com/en-us/library/ms177459.aspx), meaning no gaps are left in the index.
You have to run this for every index. The rebuilding can take considerable time, though.

how to batch edit triggers?

I have many triggers for which I'd like to build a list of table using a wildcard, then update the existing triggers on them by adding some column names to the trigger. The column names will be the same in each trigger, but I'm not clear how build the list of tables or how to loop through the list in a single alter trigger statement. I assume I'll have to use a cursor....
There is no magic wand to say "add this code to all the triggers" (or any other object type, for that matter).
For many object types, for batch editing you can quickly generate a script for multiple objects using Object Explorer Details and sorting and/or filtering within that view. For example, if you highlight "Stored Procedures" in Object Explorer, they're all listed in Object Explorer Details, and you can select multiple objects, right-click, and Script Stored Procedure as > CREATE To >
Since triggers are nested under tables, there isn't a handy way to do this (nor are triggers an entity type you can select when you right-click a database and choose Tasks > Generate Scripts). But you can pull the scripts from the metadata quite easily (you'll want Results to Text in Management Studio when running this):
SET NOCOUNT ON;
SELECT OBJECT_DEFINITION([object_id])
+ CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10)
FROM sys.triggers
WHERE type = 'TR';
You can take the output, copy and paste it into the top pane; then, once you have added your new code to each trigger, you'll have a little more work to do, e.g. search/replace 'CREATE TRIGGER' with 'ALTER TRIGGER'. You could do that as part of the query too, but it relies on the creator(s) having consistent coding conventions. Since some triggers might look like this...
create trigger
... you may have to massage some by hand.
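If you do want to fold the CREATE-to-ALTER swap into the query anyway, here is a sketch with the same caveat: STUFF rewrites the first 'CREATE TRIGGER' that CHARINDEX finds, and rows where the phrase isn't matched (odd casing under a case-sensitive collation, or line breaks between the two words) come back NULL and still need hand-editing:
SET NOCOUNT ON;
SELECT STUFF(OBJECT_DEFINITION([object_id]),
             CHARINDEX('CREATE TRIGGER', OBJECT_DEFINITION([object_id])),
             LEN('CREATE TRIGGER'), 'ALTER TRIGGER')
     + CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10)
FROM sys.triggers
WHERE type = 'TR';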
You can also filter the query above if you are only interested in a certain set of tables. For example, to only alter triggers associated with tables that start with Sales you could say:
AND OBJECT_NAME(parent_id) LIKE N'Sales%';
Or only for tables in the Person schema:
AND OBJECT_SCHEMA_NAME(parent_id) = N'Person';
Anyway once you have made all necessary adjustments to the script, you can just run it. A lot easier than expanding every single table and generating a script for those triggers.
In addition to Aaron's suggestion, which worked great on a bunch of complex triggers with an inconsistent object naming convention, I then attempted to cook something up so I'd remember what I did in 3 months. Enjoy. Create or alter the SP, then execute it with no params.
CREATE PROCEDURE SP_ALTER_CONTOUR_TRIGS
--sp to bulk edit many triggers at once
--NO ERROR HANDLING!
AS
DECLARE
    @sql VARCHAR(MAX),
    @tableName VARCHAR(128),
    @triggerName VARCHAR(128),
    @tableSchema VARCHAR(128)

DECLARE triggerCursor CURSOR
FOR
    SELECT
        so_tr.name AS TriggerName,
        so_tbl.name AS TableName,
        t.TABLE_SCHEMA AS TableSchema
    FROM
        sysobjects so_tr
        INNER JOIN sysobjects so_tbl ON so_tr.parent_obj = so_tbl.id
        INNER JOIN INFORMATION_SCHEMA.TABLES t
            ON t.TABLE_NAME = so_tbl.name
    WHERE
        --here's where you want to build filters to make sure you're
        --targeting the trigs you want edited
        --BE CAREFUL!
        --test the select statement first against sysobjects
        --to see that it returns what you expect
        so_tr.type = 'TR'
        and so_tbl.name like '%contours'
        and so_tr.name like '%location_id'
    ORDER BY
        so_tbl.name ASC,
        so_tr.name ASC

OPEN triggerCursor
FETCH NEXT FROM triggerCursor
INTO @triggerName, @tableName, @tableSchema

WHILE ( @@FETCH_STATUS = 0 )
BEGIN
    --insert alter statement below
    --watch out for carriage returns and open and close quotes!
    --seems to act finicky if you don't use a schema-bound naming convention
    SET @sql = '
    ALTER TRIGGER ['+ @tableSchema +'].['
        + @triggerName + '] ON ['+ @tableSchema +'].['
        + @tableName + ']
    INSTEAD OF INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT ['+ @tableSchema +'].['+ @tableName + ']
            (OBJECTID, Contour, Type, Shape, RuleID, Override)
        SELECT
            a.OBJECTID, a.Contour, a.Type, a.Shape, a.RuleID, a.Override
        FROM
            (SELECT
                OBJECTID, Contour, Type, Shape, RuleID, Override
             FROM inserted)
            AS a
    END
    '
    PRINT 'Executing Statement - '+ @sql
    EXECUTE ( @sql )

    FETCH NEXT FROM triggerCursor
    INTO @triggerName, @tableName, @tableSchema
END
CLOSE triggerCursor
DEALLOCATE triggerCursor

Way to react on deleting any row from table in sql server 2005

In our database we have many tables (of course). One table in our database, say files, holds the list of all the files related to each of the other tables, like people, contacts, etc. One column (parent record) in the files table stores the parent record key (not a foreign key at the database level, because the same column cannot have a relation to multiple tables) pointing to either people or contacts. Now I want to create something at the database level, in one place, that notices we have deleted a row from people and deletes all the related rows from files.
One way is to create a trigger on each table, but we have hundreds of tables.
We cannot use cascade delete, because the relation is not a foreign key.
We cannot change the structure of the tables, as we have existing data.
Thanks
The approach suggested by 'pst' is essentially a garbage-collection approach, and might be favored over the use of triggers. Without knowing the overall characteristics of the system: if responsiveness is more important, this approach adds no time to the deletion of a given record. Put another way, the risk of having unreferenced resources is less than the risk of slowing down the delete step.
This sample code will loop through tables with identities and delete any orphans of that table in the centralized 'resource' table.
Assumptions:
Each of the parent tables has an identity column
The resource table ('files') somehow identifies the table in which its parent can be found (this sample has the column 'TableName' hard-coded as an example)
(Posted on Microsoft Script Center)
http://gallery.technet.microsoft.com/scriptcenter/Garbage-Collection-of-a-99594c13
DECLARE @ResourceTableName sysname
SET @ResourceTableName = 'Resource'

--cursor variables
DECLARE @sql nvarchar(max)
DECLARE @primarySchema sysname
DECLARE @primaryTableName sysname
DECLARE @identityColumnName sysname

DECLARE curTableName CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
    SELECT table_schema, table_name, column_name
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE COLUMNPROPERTY(object_id(TABLE_NAME), COLUMN_NAME, 'IsIdentity') = 1
        AND table_name <> @ResourceTableName

-- loop through tables
OPEN curTableName
FETCH NEXT FROM curTableName
INTO @primarySchema, @primaryTableName, @identityColumnName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = '
        DELETE --SELECT *
        FROM ' + @ResourceTableName + '
        WHERE TableName = ''' + @primaryTableName + '''
        AND ParentID NOT IN (
            SELECT ' + @identityColumnName + ' FROM ' + @primarySchema + '.' + @primaryTableName + '
        )'
    --PRINT @sql
    EXEC sp_ExecuteSQL @sql

    FETCH NEXT FROM curTableName
    INTO @primarySchema, @primaryTableName, @identityColumnName
END

-- close the cursor
CLOSE curTableName
DEALLOCATE curTableName

How to forward demo data dates using a stored procedure?

I am looking for a clean way to forward some demo data using a stored procedure. The data that I want to forward are date types. Due to the nature of my app, some of the data in my app will only appear when certain dates in the data are in the future. I hope this makes sense. : S
Since my database is ever expanding, I was thinking of writing a stored procedure which essentially forwards all dates, in all tables in my database, that belong to a demo user account. I will also keep track of the date the demo data was last forwarded. The stored proc will be run when a demo user logs in, and only when the difference between the last forwarding date and the current date exceeds a certain threshold (e.g. 30 days). This way I do not have to keep altering the script as much.
Now to the technical part:
I am using this to retrieve all the tables in the db:
Select
table_name
from
Information_Schema.Tables
Where
TABLE_TYPE like 'BASE TABLE'
and table_name not like 'Report_%'
and table_name not in ('Accounts', 'Manifest', 'System', 'Users')
What I need is a way to iterate through the table names and find the column names and types. Then I wish to update all columns in each table that are of type datetime. I have read that looping in SQL is not ideal, but I would like to minimise the number of database calls rather than handle this in server-side code.
Am I going down the wrong path to solve this issue?
Thanks in advance.
I agree with the comment that it might not be a good idea to do this automatically and in a hidden manner, but if you want to you can use this.
(Note this assumes SQL Server)
select T.Name, C.Name
from sys.tables T
join sys.columns C
    on T.object_id = C.object_id
    and C.system_type_id = 61 -- I would do a little research to make sure 61 is all you need to return here
This will get you a list of all datetime columns, along with the table it is in by name.
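As the comment in the query hints, you can avoid the magic number altogether by joining sys.types and filtering on type names instead (a sketch; which date/time type names to include is an assumption you should adjust):
select T.Name, C.Name, TY.Name as TypeName
from sys.tables T
join sys.columns C
    on T.object_id = C.object_id
join sys.types TY
    on C.user_type_id = TY.user_type_id
where TY.name in ('datetime', 'smalldatetime', 'date', 'datetime2', 'datetimeoffset');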
Then the way I would accomplish it is to have a cursor which builds the update strings on the fly, and exec them kinda like:
DECLARE @UpdateString varchar(500)
DECLARE @DaysToAdd int
DECLARE @TableName VARCHAR(100)
DECLARE @ColumnName VARCHAR(100)

set @DaysToAdd = 10

DECLARE db_cursor CURSOR FOR
    select T.Name, C.Name
    from sys.tables T
    join sys.columns C
        on T.object_id = C.object_id
        and C.system_type_id = 61

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @TableName, @ColumnName
WHILE @@FETCH_STATUS = 0
BEGIN
    set @UpdateString = 'Update ' + @TableName + ' set ' + @ColumnName + ' = dateadd(dd, ' + cast(@DaysToAdd as varchar) + ', ' + @ColumnName + ') where ...'
    exec(@UpdateString)
    FETCH NEXT FROM db_cursor INTO @TableName, @ColumnName
END
CLOSE db_cursor
DEALLOCATE db_cursor
There are many things I don't like about this: the cursor, the fact it's behind the scenes, and the exec call. I'm also unsure how you will "update only the test data", since it will be very hard to write the where clause for a generic table in your database. But I think that will get you started.
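For what it's worth, if every demo-data table shared a common column identifying the account (a big assumption; the AccountId column and its value below are hypothetical), the where clause could be generated the same way as the rest of the statement:
DECLARE @DemoAccountId int
SET @DemoAccountId = 42   -- hypothetical demo account key
set @UpdateString = 'Update ' + @TableName + ' set ' + @ColumnName
    + ' = dateadd(dd, ' + cast(@DaysToAdd as varchar) + ', ' + @ColumnName + ')'
    + ' where AccountId = ' + cast(@DemoAccountId as varchar)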
As an aside, maybe you should think about having a test data population script which you can run to insert new data that satisfies your date requirements.