What would be the simplest way to change every nvarchar column in a database to a varchar?
I personally would prefer nvarchar, but the data arch has specified that varchar must be used.
Here, to get you started:
Select 'Alter Table [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] Alter Column [' + COLUMN_NAME + '] VarChar(' + CAST(CHARACTER_MAXIMUM_LENGTH As VARCHAR) + ')'
From INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE = 'NVARCHAR'
This will generate all the needed alter statements for you (cut, paste, run).
Note that this does not take any constraints into account.
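For example, one row of the generated output might look like this (the table and column names here are hypothetical):
Alter Table [dbo].[Customers] Alter Column [CustomerName] VarChar(100)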
In order to handle MAX and exclude the niggly sysdiagrams:
SELECT
'
ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']
ALTER COLUMN [' + COLUMN_NAME + ']
VARCHAR(' +
(CASE WHEN CHARACTER_MAXIMUM_LENGTH = -1
THEN 'MAX'
ELSE CAST(CHARACTER_MAXIMUM_LENGTH AS VARCHAR)
END)
+ ')
'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE = 'NVARCHAR' AND TABLE_NAME <> 'SYSDIAGRAMS'
Ask the data arch to do it?
or
Generate a script of all objects in your system, alter the nvarchars, then create a new database and import the data into it from the old one.
or
Write alter scripts to update the existing database.
(This may be the best approach if it's a production database, or a client database.)
Related
I have 2 different sql servers (2 different databases).
The 2 servers have the same tables.
Now I want to transfer from Server 1's Person table to Server 2's Person table only the records with ID between 1,000 and 50,000.
How could I do it the easiest way?
Tried with Generate Scripts, but there isn't an option to select just those IDs; the script transfers all the records.
Tried by using a SELECT statement on Server 1 and exporting the data as CSV, then importing the CSV file on Server 2, but apparently there are some problems because of the datetimeoffset fields...
I had the same problem where I had data in 2 domains that could not see each other over the network; I had to get some data, not all of it, and move it to the "other" server.
I wrote a script that took all data from a filegroup and created a dump of that data, as well as the script to load the data.
A little later they also started to dump data out to archive, for data that needed to be retained, since the "csv" version can always be restored, regardless of the database used "7 years" from now...
Anyway, it's just a big "print" statement that uses BCP to move massive amounts of data between servers. You can tweak it to do what you like; just alter the query a bit. The top of the file contains the "control" variables.
/*******************************************************************
this script will generate the bcp out commands for all data from the
user's currently connected database. This script will only work if
both databases have the same DDL version, meaning the same tables,
same columns, and same data definitions.
*******************************************************************/
SET NOCOUNT ON
GO
DECLARE @Path nvarchar(2000) = 'f:\export\' -- storage location for bcp dump (needs to have lots of space!)
, @Batchsize nvarchar(40) = '1000000' -- COMMIT EVERY n RECORDS
, @Xmlformat bit = 0 -- 1 for an xml format file, 0 for non-xml
, @SourceServerinstance nvarchar(200) = 'localhost' -- SQL Server \ Instance name
, @Security nvarchar(800) = ' -T ' -- options are -T (trusted), -Uloginid -Ploginpassword
, @GenerateDump bit = 0 -- 0 to generate the dump (bcp out) commands, 1 to generate the load script
, @FileGroup sysname = 'Data'; -- table filegroup that we are interested in
--> set output to text and execute the query, then copy the generated commands, validate and execute them
--------------------------------Do not edit below this line-----------------------------------------------------------------
DECLARE @filter TABLE (TABLE_NAME sysname)
INSERT INTO @filter (TABLE_NAME)
SELECT o.name
FROM sys.indexes as i
JOIN sys.objects as o on o.object_id = i.object_id
WHERE i.data_space_id = FILEGROUP_ID(@FileGroup)
AND i.type_desc = 'CLUSTERED'
and o.name not like 'sys%'
order by 1
if(@GenerateDump=0)
begin
--BCP-OUT TABLES
SELECT 'bcp "' + QUOTENAME( TABLE_CATALOG ) + '.' + QUOTENAME( TABLE_SCHEMA )
+ '.' + QUOTENAME( TABLE_NAME ) + '" out "' + @Path + TABLE_NAME + '.dat" -q -b"'
+ @Batchsize + '" -e"' + @Path + 'Error_' + TABLE_NAME + '.err" -n -CRAW -o"' + @Path
+ TABLE_NAME + '.out" -S"' + @SourceServerinstance + '" ' + @Security
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
if(@Xmlformat=0)
begin
print 'REM CREATE NON-XML FORMAT FILE '
SELECT 'bcp "' + QUOTENAME( TABLE_CATALOG ) + '.' + QUOTENAME( TABLE_SCHEMA ) + '.' +
QUOTENAME( TABLE_NAME ) + '" format nul -n -CRAW -f "' + @Path
+ TABLE_NAME + '.fmt" -S"' + @SourceServerinstance + '" ' + @Security
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
end
else
begin
PRINT 'REM XML FORMAT FILE'
SELECT 'bcp "' + QUOTENAME( TABLE_CATALOG ) + '.' + QUOTENAME( TABLE_SCHEMA )
+ '.' + QUOTENAME( TABLE_NAME ) + '" format nul -x -n -CRAW -f "'
+ @Path + TABLE_NAME + '.xml" -S"' + @SourceServerinstance + '" ' + @Security
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
end
end
else
begin
print '--Make sure you backup your database first'
--GENERATE CONSTRAINT NOCHECK
PRINT '--NO CHECK CONSTRAINTS'
SELECT 'ALTER TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME( TABLE_NAME ) + ' NOCHECK CONSTRAINT ' + QUOTENAME( CONSTRAINT_NAME )
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE IN ('FOREIGN KEY', 'CHECK') -- PK/UNIQUE constraints cannot be disabled
AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
PRINT '--DISABLE TRIGGERS'
SELECT 'ALTER TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME( TABLE_NAME ) + ' DISABLE TRIGGER ALL'
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
--TRUNCATE TABLE
SELECT 'TRUNCATE TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME( TABLE_NAME ) + '
GO '
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
--BULK INSERT
SELECT DISTINCT 'BULK INSERT ' + QUOTENAME(TABLE_CATALOG) + '.'
+ QUOTENAME( TABLE_SCHEMA ) + '.' + QUOTENAME( TABLE_NAME ) + '
FROM ''' + @Path + TABLE_NAME + '.dat''
WITH (FORMATFILE = ''' + @Path + TABLE_NAME + '.fmt'',
BATCHSIZE = ' + @Batchsize + ',
ERRORFILE = ''' + @Path + 'BI_' + TABLE_NAME + '.err'',
TABLOCK);
GO '
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
--GENERATE CHECK CONSTRAINT TO VERIFY DATA AFTER LOAD
PRINT '--CHECK CONSTRAINT'
SELECT 'ALTER TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME( TABLE_NAME ) + ' WITH CHECK CHECK CONSTRAINT ' + QUOTENAME( CONSTRAINT_NAME )
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE IN ('FOREIGN KEY', 'CHECK')
AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
SELECT 'ALTER TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME( TABLE_NAME ) + ' ENABLE TRIGGER ALL'
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_NAME IN (SELECT TABLE_NAME FROM @filter)
end
In the end, the easiest way was to create a linked server between the 2 and execute my queries taking data from both servers, excluding the IDs already present on the second server.
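For reference, a minimal sketch of that linked-server approach, run from Server 2 (the linked server name [SERVER1], database name SourceDb, and the Person column list are placeholder assumptions):
INSERT INTO dbo.Person (ID, FirstName, LastName)
SELECT p.ID, p.FirstName, p.LastName
FROM [SERVER1].SourceDb.dbo.Person AS p
WHERE p.ID BETWEEN 1000 AND 50000
AND NOT EXISTS (SELECT 1 FROM dbo.Person AS t WHERE t.ID = p.ID) -- skip IDs already on Server 2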
Thanks to everyone for the responses.
I need to understand the following SQL query and would like to ask if anybody could explain it to me in a bit more detail (like the XML PATH part), as well as update it with a replace element.
So I want to find all values containing BlaBlaBlaBla and replace them with HaHaHaHa instead. At the moment the query only finds all values containing BlaBlaBlaBla.
DECLARE @searchstring NVARCHAR(255)
SET @searchstring = '%BlaBlaBlaBla%'
DECLARE @sql NVARCHAR(max)
SELECT @sql = STUFF((
SELECT
' UNION ALL SELECT ''' + TABLE_SCHEMA + '.' + TABLE_NAME + ''' AS tbl, ''' + COLUMN_NAME + ''' AS col, [' + COLUMN_NAME + '] AS val' +
' FROM [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] WHERE [' + COLUMN_NAME + '] LIKE ''' + @searchstring + ''''
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE in ('nvarchar', 'varchar', 'char', 'ntext') FOR XML PATH('')) ,1, 11, '')
Exec (@sql)
I believe that the XML PATH is a trick to get the strings to all concatenate together.
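You can see the trick in isolation with a throwaway query (this one just concatenates database names):
SELECT ', ' + name FROM sys.databases FOR XML PATH('')
-- returns a single string such as: , master, tempdb, model, msdb
Because the selected value has no column name and the PATH is empty, the rows collapse into one string instead of XML elements.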
You could change it to REPLACE with something like this:
DECLARE @search_string NVARCHAR(255) = 'BlaBlaBlaBla' -- literal to find (no wildcards)
DECLARE @replace_string NVARCHAR(255) = 'HaHaHaHa' -- replacement value
SELECT @sql = (
SELECT
' UPDATE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) + '
SET ' + QUOTENAME(COLUMN_NAME) + ' = REPLACE(' + QUOTENAME(COLUMN_NAME) + ', ''' + @search_string + ''', ''' + @replace_string + ''')
WHERE ' + QUOTENAME(COLUMN_NAME) + ' LIKE ''' + @searchstring + ''''
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
DATA_TYPE in ('nvarchar', 'varchar', 'char') -- ntext is excluded: REPLACE cannot operate on it
FOR XML PATH(''))
EXEC(@sql)
Some caveats:
I haven't tested this. When you're generating code like this it's very easy to make minor errors with all of the start and end quotes, etc. I would print out the SQL and check it, repeating as necessary until you get the output SQL correct.
Also, this is generally not a good idea. If your database is large and/or has a large number of tables then performance is going to be miserable. You should usually do the analysis of where you think this sort of data is going to appear and write code that will correct it as necessary. The fact that data elements are buried in strings throughout your data is concerning.
Finally, be aware that this might easily update additional data that you didn't intend to update. If you try to update "123" with "456" and there's a string out there that is "My ID is 1234" it's going to become "My ID is 4564".
BTW, the QUOTENAME function is a way of enclosing your table and column names in [ and ]; it also escapes any closing bracket inside the name, so identifiers with unusual characters still work.
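For example (the identifier here is invented):
SELECT QUOTENAME('Order]Details') -- returns [Order]]Details]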
I have 15 SQL Server tables, each with about 50 columns.
Some of these columns have rows that contain quotes, commas and tabs.
I have a function that removes all of these from the row given the column name, but I don't know which columns have the issue.
I'd like a SQL Server 2005 Query that can return column names that have the bad data given a table name.
There's no way you can do this without some sort of dynamic SQL. Instead of jumping through those hoops, what I usually do is write a script that generates another script.
First, the outline of a script:
declare @cols table (name varchar(500))
...
select * from @cols
Then use a query like this to generate a series of statements that will check each column for bad values:
SELECT 'IF EXISTS (SELECT * FROM [' + TABLE_NAME + ']' +
' WHERE [' + COLUMN_NAME + '] like ''%,%''' +
' or [' + COLUMN_NAME + '] like ''%''''%''' +
' or [' + COLUMN_NAME + '] like ''%'' + char(9) + ''%'')' +
' insert into @cols values (''' + COLUMN_NAME + ''')'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Tablename' and DATA_TYPE in ('char','varchar')
It's just a matter of running this query, then pasting the results into your script and running that. You could also easily modify this to work on multiple tables at once.
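As an illustration, one generated statement would look roughly like this (the Notes column is hypothetical):
IF EXISTS (SELECT * FROM [Tablename] WHERE [Notes] like '%,%' or [Notes] like '%''%' or [Notes] like '%' + char(9) + '%') insert into @cols values ('Notes')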
I need to switch from using a #temp table to a @table variable so that I can use it in a function.
My query uses insert into #temp (from multiple tables) like so:
SELECT
a.col1,
a.col2,
b.col1...
INTO #temp
FROM ...
Is there an easy way to find out the data types of the columns in the #temp table so that I can create the @table variable with the same columns and data types as #temp?
You need to make sure sp_help runs in the same database where the table is located (tempdb). You can do this by prefixing the call directly:
EXEC tempdb.dbo.sp_help #objname = N'#temp';
Or by prefixing a join against tempdb.sys.columns:
SELECT [column] = c.name,
[type] = t.name, c.max_length, c.precision, c.scale, c.is_nullable
FROM tempdb.sys.columns AS c
INNER JOIN tempdb.sys.types AS t
ON c.system_type_id = t.system_type_id
AND t.system_type_id = t.user_type_id
WHERE [object_id] = OBJECT_ID(N'tempdb.dbo.#temp');
This doesn't handle nice things for you, like adjusting max_length for varchar differently from nvarchar, but it's a good start.
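For instance, a sketch of that adjustment (assuming only nchar/nvarchar need it):
SELECT [column] = c.name,
       [type] = t.name,
       -- nchar/nvarchar report max_length in bytes, so halve it to get characters
       [length] = CASE WHEN t.name IN (N'nchar', N'nvarchar') AND c.max_length > 0
                       THEN c.max_length / 2 ELSE c.max_length END
FROM tempdb.sys.columns AS c
INNER JOIN tempdb.sys.types AS t
ON c.system_type_id = t.system_type_id
AND t.system_type_id = t.user_type_id
WHERE [object_id] = OBJECT_ID(N'tempdb.dbo.#temp');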
In SQL Server 2012 or better, you can use a new DMF to describe a resultset, which takes that issue away (and also assembles max_length/precision/scale for you). But it doesn't support #temp tables, so just inject the query without the INTO:
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(N'SELECT
a.col1,
a.col2,
b.col1...
--INTO #temp
FROM ...;',NULL,1);
The accepted answer does not give the data type. Joining tempdb.sys.columns with sys.types gives the data type, as mentioned in the comment on that answer. But joining on system_type_id yields one extra row with the datatype "sysname". Joining on "user_type_id" instead gives the exact solution, as given below.
SELECT cols.NAME
,ty.NAME
FROM tempdb.sys.columns cols
JOIN sys.types ty ON cols.user_type_id = ty.user_type_id
WHERE object_id = OBJECT_ID('tempdb..#temp')
You need to qualify the sp_help procedure to run from the tempdb database to get details about a hash (temp) table, because that's where the hash table is actually stored. If you attempt to run sp_help from a different database, you'll get an error that the table doesn't exist in that database.
If your query is executing outside of tempdb, as I assume it is, you can run the following:
exec tempdb..sp_help #temp
One benefit of this procedure is that it includes a text description of the column datatypes for you. This makes it very easy to copy and paste into another query, e.g. if you're trying to use the definition of a temp table to create a table variable.
You could find the same information in the syscolumns table, but it will give you numeric identifiers for the types which you'll have to map yourself. Using sp_help will save you a step.
The other answers will give you the information that you need, but still require you to type it all out when you define the table variable.
The following TSQL will allow you to quickly generate the table variable's definition for any given table.
This can save you a lot of time instead of manually typing table definitions like:
table(Field1Name nvarchar(4), Field2Name nvarchar(20), Field3Name int
, Field4Name numeric(28,12))
TSQL:
select top 10 *
into #temp
from db.dbo.myTable
declare @tableName nvarchar(max)
set @tableName = '#temp'
use tempdb
declare @tmp table(val nvarchar(max))
insert into @tmp
select case data_type
when 'binary' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(CHARACTER_MAXIMUM_LENGTH AS nvarchar(max)) + ')'
when 'char' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(CHARACTER_MAXIMUM_LENGTH AS nvarchar(max)) + ')'
when 'datetime2' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + CAST(DATETIME_PRECISION as nvarchar(max)) + ')'
when 'datetimeoffset' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + CAST(DATETIME_PRECISION as nvarchar(max)) + ')'
when 'decimal' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(NUMERIC_PRECISION as nvarchar(max)) + ',' + cast(NUMERIC_SCALE as nvarchar(max)) + ')'
when 'nchar' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(CHARACTER_MAXIMUM_LENGTH AS nvarchar(max)) + ')'
when 'numeric' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(NUMERIC_PRECISION as nvarchar(max)) + ',' + cast(NUMERIC_SCALE as nvarchar(max)) + ')'
when 'nvarchar' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(CHARACTER_MAXIMUM_LENGTH AS nvarchar(max)) + ')'
when 'time' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + CAST(DATETIME_PRECISION as nvarchar(max)) + ')'
when 'varbinary' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(CHARACTER_MAXIMUM_LENGTH AS nvarchar(max)) + ')'
when 'varchar' then COLUMN_NAME + ' ' + DATA_TYPE + '(' + cast(CHARACTER_MAXIMUM_LENGTH AS nvarchar(max)) + ')'
-- Most standard data types follow the pattern in the else branch.
-- Types that need a length/precision suffix are handled above: binary, char, datetime2, datetimeoffset, decimal, nchar, numeric, nvarchar, time, varbinary, and varchar
else COLUMN_NAME + ' ' + DATA_TYPE
end + case when IS_NULLABLE <> 'YES' then ' NOT NULL' else '' end 'dataType'
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME like @tableName + '%'
declare @result nvarchar(max)
set @result = ''
select @result = @result + [val] + N','
from @tmp
where val is not null
set @result = substring(@result, 1, (LEN(@result)-1))
-- The following will replace '-1' with 'max' in order to properly handle nvarchar(max) columns
set @result = REPLACE(@result, '-1', 'max')
select @result
Output:
Field1Name nvarchar(4), Field2Name nvarchar(20), Field3Name int
, Field4Name numeric(28,12)
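You can then paste that output straight into a declaration, e.g. (using the sample output above):
declare @myTable table(Field1Name nvarchar(4), Field2Name nvarchar(20), Field3Name int
, Field4Name numeric(28,12))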
To get column names with their data types, use this:
EXEC tempdb.dbo.sp_help N'#temp';
or
To get only the column names, use this:
SELECT *
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..#temp');
Finding the data types of a SQL temporary table
METHOD 1 – Using SP_HELP
EXEC TempDB..SP_HELP #TempTable;
Note:
In the table structure, the Table Name shows something like '#TempTable__________________________________________________________________________________________________________0000000004CB'. The full internal name of every temp table is padded out to 128 characters: to keep the same temp table name distinct across multiple sessions, SQL Server automatically adds underscores in between and an alphanumeric suffix at the end.
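You can see the padded physical name for yourself (a small sketch, assuming #TempTable exists in your current session):
SELECT name, LEN(name) AS name_length
FROM TempDB.SYS.TABLES
WHERE name LIKE '#TempTable%'; -- name_length comes back as 128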
METHOD 2 – Using SP_COLUMNS
EXEC TempDB..SP_COLUMNS '#TempTable';
METHOD 3 – Using System Tables like INFORMATION_SCHEMA.COLUMNS, SYS.COLUMNS, SYS.TABLES
SELECT * FROM TempDB.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME IN (
SELECT NAME FROM TempDB.SYS.TABLES WHERE OBJECT_ID=OBJECT_ID('TempDB.dbo.#TempTable')
);
GO
SELECT * FROM TempDB.SYS.COLUMNS WHERE OBJECT_ID=OBJECT_ID('TempDB.dbo.#TempTable');
GO
SELECT * FROM TempDB.SYS.TABLES WHERE OBJECT_ID=OBJECT_ID('TempDB.dbo.#TempTable');
GO
I'd go the lazy route and use
use tempdb
GO
EXECUTE sp_help #temp
What you are trying to do is to get information about the system types of the columns you are querying.
For SQL Server 2012 and later you can use the sys.dm_exec_describe_first_result_set function. It returns very detailed information about the columns, and the system_type_name column holds the complete system type definition (ready to use in your table definition):
For example:
SELECT *
FROM [sys].[dm_exec_describe_first_result_set] (N'SELECT object_id, name, type_desc FROM sys.indexes', null, 0);
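Since system_type_name is ready to paste, you can even assemble a column definition list directly from it (same example query as above):
SELECT name + ' ' + system_type_name
FROM [sys].[dm_exec_describe_first_result_set] (N'SELECT object_id, name, type_desc FROM sys.indexes', null, 0);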
Yes, the data types of the temp table will be the data types of the columns you are selecting and inserting into it. So just look at the select statement and determine each data type based on the column you select.
I am trying to change the column datatype from text to ntext but am getting the error
Msg 4927, Level 16, State 1, Line 1
Cannot alter column 'ColumnName' to be data type ntext.
The query that I am using is as follows:
alter table tablename alter column columnname ntext null
Conversion not allowed. Add a new column as ntext, copy the converted data into it, then delete the old column. This might consume a lot of disk space if it's a large table!
You should use NVARCHAR(MAX) instead of NTEXT which will not be supported in the future.
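For instance (a sketch only; if your SQL Server version refuses the direct ALTER from text, fall back to the copy-column approach below):
ALTER TABLE tablename ALTER COLUMN columnname NVARCHAR(MAX) NULL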
To get around Msg 4927, I expect you'll need to copy the data out - i.e. add a scratch column and fill it; drop the old column; add the new column; copy the data back; then remove the scratch column:
ALTER TABLE TableName ADD tmp text NULL
GO
UPDATE TableName SET tmp = ColumnName
GO
ALTER TABLE TableName DROP COLUMN ColumnName
GO
ALTER TABLE TableName ADD ColumnName ntext NULL
GO
UPDATE TableName SET ColumnName = tmp
GO
ALTER TABLE TableName DROP COLUMN tmp
To apply this database-wide, you can script it out from INFORMATION_SCHEMA (note you should filter out any system tables, etc.):
SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME+ '] ADD [__tmp] text NULL
GO
UPDATE [' + TABLE_SCHEMA + '].[' + TABLE_NAME+ '] SET [__tmp] = [' + COLUMN_NAME + ']
GO
ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME+ '] DROP COLUMN [' + COLUMN_NAME + ']
GO
ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME+ '] ADD [' + COLUMN_NAME + '] ntext ' +
CASE IS_NULLABLE WHEN 'YES' THEN 'NULL' ELSE 'NOT NULL' END + '
GO
UPDATE [' + TABLE_SCHEMA + '].[' + TABLE_NAME+ '] SET [' + COLUMN_NAME + '] = [__tmp]
GO
ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME+ '] DROP COLUMN [__tmp]'
FROM INFORMATION_SCHEMA.COLUMNS WHERE DATA_TYPE = 'text'
In MySQL, the query is:
ALTER TABLE [tableName] CHANGE [oldColumnName] [newColumnName] [newColumnType];