I need to create a query that is executed on all databases of my SQL Server instance. An additional constraint is that the query should only be executed on databases that contain a special table with a special column. The background is that in some databases the special table does not have the special column.
Based on this solution, what I have until now is a query that executes only on databases that contain a certain table.
SELECT *
FROM sys.databases
WHERE DATABASEPROPERTY(name, 'IsSingleUser') = 0
AND HAS_DBACCESS(name) = 1
AND state_desc = 'ONLINE'
AND CASE WHEN state_desc = 'ONLINE'
THEN OBJECT_ID(QUOTENAME(name) + '.[dbo].[CERTAIN_TABLE]', 'U')
END IS NOT NULL
However, what is still missing is a constraint that the query should only select databases where the table CERTAIN_TABLE has a specific column. How can this be achieved?
When I want to loop through all databases, I do a loop like the following. It's easy to follow:
DECLARE @dbs TABLE ( dbName NVARCHAR(100) )
DECLARE @results TABLE ( resultName NVARCHAR(100) )
INSERT INTO @dbs
SELECT name FROM sys.databases
DECLARE @current NVARCHAR(100)
WHILE (SELECT COUNT(*) FROM @dbs) > 0
BEGIN
SET @current = (SELECT TOP 1 dbName FROM @dbs)
INSERT INTO @results
EXEC
(
'IF EXISTS(SELECT 1 FROM "' + @current + '".INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = ''Target_Table_Name'' AND COLUMN_NAME = ''Target_Column_Name'')
BEGIN
--table and column exists, execute query here
SELECT ''' + @current + '''
END'
)
DELETE FROM @dbs
WHERE dbName = @current
END
SELECT * FROM @results
You are going to need either some looping or dynamic sql for this. I really dislike loops so here is how you could do this with dynamic sql.
declare @TableName sysname = 'CERTAIN_TABLE'
, @ColumnName sysname = 'CERTAIN_COLUMN'
declare @SQL nvarchar(max) = ''
select @SQL = @SQL + 'select DatabaseName = ''' + db.name + ''' from ' + QUOTENAME(db.name) + '.sys.tables t join ' + QUOTENAME(db.name) + '.sys.columns c on c.object_id = t.object_id where t.name = ''' + @TableName + ''' and c.name = ''' + @ColumnName + '''' + char(10) + 'UNION ALL '
from sys.databases db
where db.state_desc = 'ONLINE'
order by db.name
select @SQL = substring(@SQL, 0, len(@SQL) - 9)
select @SQL
--uncomment the line below when you are comfortable the query generated is correct
--exec sp_executesql @SQL
I am trying to create a script to calculate the fill rates for each column in a table Data_table and insert it into a second table Metadata_table.
The Data_table has 30 columns in it, and some columns have 100% data in them, and some have less than 100% (due to nulls).
My code to calculate the fill rate looks like this,
select
cast(sum(case
when employee_id is null
then 0
else 1
end) / cast(count(1) as float ) * 100 as decimal(8,3)) as employee_id_fill,
.....--/*so on for 30 columns..*/
from
[Data_table]
The Metadata_table should look like this:
Table_name | Column_name | Fill_rate
[Data_table]| Colomn_a | 100%
[Data_table]| Colomn_b | 89%
[Data_table]| Colomn_c | 100%
and so on...
I think unpivot can work here, but I am unable to get the column names into the [Metadata_table] automatically.
I tried using this for automating the column names-
COL_NAME(OBJECT_ID('DBO.[DATA_TABLE]'),'COLOMN_A')
but this has not worked so far.
Any help is appreciated
You can use sys.columns for grabbing the column names. You can join it to sys.tables by the object_id if you ever need to associate the two.
For example:
SELECT c.NAME
FROM SYS.TABLES t
INNER JOIN SYS.COLUMNS c ON t.OBJECT_ID = c.OBJECT_ID
WHERE t.OBJECT_ID = OBJECT_ID('DBO.[Data_Table]');
You can generate SQL from here in the format you wanted by creating an expression to query your table and then unpivot it.
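For example, here is a rough sketch of that unpivot idea, assuming the Data_table/Metadata_table names and a few of the example columns from the question (COUNT(column) only counts non-NULL values, which is what gives the fill rate):
INSERT INTO [Metadata_table] (Table_name, Column_name, Fill_rate)
SELECT 'Data_table', u.Column_name, u.Fill_rate
FROM (
    SELECT
        CAST(100.0 * COUNT(Colomn_a) / COUNT(*) AS DECIMAL(8,3)) AS Colomn_a,
        CAST(100.0 * COUNT(Colomn_b) / COUNT(*) AS DECIMAL(8,3)) AS Colomn_b,
        CAST(100.0 * COUNT(Colomn_c) / COUNT(*) AS DECIMAL(8,3)) AS Colomn_c
    FROM [Data_table]
) p
UNPIVOT (Fill_rate FOR Column_name IN (Colomn_a, Colomn_b, Colomn_c)) AS u;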
Another approach could be a while loop to do inserts into your metadata table. If you're working with a very large table this option will be more expensive so keep it in mind. I used an example table dbo.Attendance_Records and this script will print out the example SQL, not execute it. You would want to change it to call sp_executesql on that text.
DECLARE @Table NVARCHAR(128) = 'DBO.[Attendance_Records]'
,@MetaTable NVARCHAR(128) = 'DBO.[Metadata_Table]'
,@ColumnName NVARCHAR(128)
,@Iterator INT = 1
,@SQL NVARCHAR(MAX)
SELECT c.NAME
,c.COLUMN_ID
,ROW_NUMBER() OVER (ORDER BY COLUMN_ID) AS RN
INTO #Cols
FROM SYS.COLUMNS c
WHERE c.OBJECT_ID = OBJECT_ID(@Table);
WHILE @Iterator <= (SELECT ISNULL(MAX(RN),0) FROM #Cols)
BEGIN
SET @ColumnName = (SELECT NAME FROM #Cols WHERE RN = @Iterator)
SET @SQL = 'INSERT INTO ' + @MetaTable + ' (Table_Name, Column_Name, Fill_Rate) '
+ 'SELECT ''' + REPLACE(@Table,'DBO.','') + ''', ''' + @ColumnName + ''', 100 * CONVERT(DECIMAL(8,3), SUM(CASE WHEN [' + @ColumnName + '] IS NULL THEN 0 ELSE 1 END)) / COUNT(1) AS [' + @ColumnName + '_fill]' + ' FROM ' + @Table
PRINT @SQL
SET @Iterator += 1
END
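If you want the statements to run instead of just printing, the PRINT line can be swapped for something like this (same @SQL variable as above):
EXEC sp_executesql @SQL;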
Since you need to have the column names, you would need to do something along these lines.
select ColumnName = 'Colomn_a'
, FillRate = count(distinct Colomn_a) * 1.0 / count(*) --multiply by 1.0 before dividing to avoid integer math
from YourTable
UNION ALL
select 'Colomn_b'
, count(distinct Colomn_b) * 1.0 / count(*)
from YourTable
Just an alternative to Mike R's method:
CREATE OR ALTER PROCEDURE [dbo].[GetFillRate_new] -- EXEC [GetFillRate] 'TestEmp'
(
@TableName NVARCHAR(128),
@Include_BlankAsNotFilled BIT = 1 -- 0-OFF(Default); 1-ON(Blank As Not Filled Data)
)
AS
BEGIN
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
IF NOT EXISTS(SELECT 1 FROM SYS.OBJECTS WHERE [TYPE]='U' AND [NAME]=@TableName )
BEGIN
SELECT Result = 1 , Reason ='Table not exists in this Database' ;
RETURN 1;
END;
declare @sql varchar(max)
set @sql=''
select
@sql=@sql+'select
'''+c.column_name+''' as [Column Name],
cast((100*(sum(
case when ' +
case
when @include_blankasnotfilled = 0
then '[' + c.column_name + '] is not null'
when c.collation_name is null
then '[' + c.column_name + '] is not null'
else 'isnull([' + c.column_name + '],'''')<>'''' ' end +
' then 1 else 0 end)*1.0 / count(*)))
as decimal(5,2)) as [Fill Rate (%)]
from '+c.table_name+'
union all '
from
information_schema.columns as c
inner join information_schema.tables as t
on c.table_name=t.table_name
where
t.table_type='base table' and
t.table_name =@tablename
set @sql=left(@sql,len(@sql)-10)
--print @sql
exec(@sql)
end
You can find more details in this blog post: https://exploresql.com/2019/12/14/how-to-find-fill-rate-in-a-table/
Challenges:
Schema changes like the following make calculating the fill rate more difficult than it first appears:
Table name changes
Column name changes
Data type changes
Removing Existing columns
Adding New Columns
Due to the above challenges, we cannot simply go for a static solution to find the fill rate of a table. Instead, we need a dynamic approach to avoid future rework.
Prerequisite:
In the sample below, we are going to use a stored procedure named 'Get_FillRate' for the demo. If you already have an object with the same name in your database, please make sure to change the stored procedure name below.
Sample Table Creation with Data Loading Script
--dropping temp table if exists
IF OBJECT_ID('TempDb..#TestEmp') IS NOT NULL
DROP TABLE #TestEmp;
CREATE TABLE #TestEmp
(
[TestEmp_Key] INT IDENTITY(1,1) NOT NULL,
[EmpName] VARCHAR(100) NOT NULL,
[Age] INT NULL,
[Address] VARCHAR(100) NULL,
[PhoneNo] VARCHAR(11) NULL,
[Inserted_dte] DATETIME NOT NULL,
[Updated_dte] DATETIME NULL,
CONSTRAINT [PK_TestEmp] PRIMARY KEY CLUSTERED
(
TestEmp_Key ASC
)
);
GO
INSERT INTO #TestEmp
(EmpName,Age,[Address],PhoneNo,Inserted_dte)
VALUES
('Arul',24,'xxxyyy','1234567890',GETDATE()),
('Gokul',22,'zzzyyy',NULL,GETDATE()),
('Krishna',24,'aaa','',GETDATE()),
('Adarsh',25,'bbb','1234567890',GETDATE()),
('Mani',21,'',NULL,GETDATE()),
('Alveena',20,'ddd',NULL,GETDATE()),
('Janani',30,'eee','',GETDATE()),
('Vino',26,NULL,'1234567890',GETDATE()),
('Madhi',25,'ggg',NULL,GETDATE()),
('Ronen',25,'ooo',NULL,GETDATE()),
('Visakh',25,'www',NULL,GETDATE()),
('Jayendran',NULL,NULL,NULL,GETDATE());
GO
SELECT [TestEmp_Key],[EmpName],[Age],[Address],[PhoneNo],[Inserted_dte],[Updated_dte] FROM #TestEmp;
GO
Temp Table - #TestEmp
SQL Procedure For Finding Fill Rate in a Table - Dynamic Approach
Input Parameters
Both of the Input Parameters are mandatory.
@p_TableName - The data type used for this input parameter is NVARCHAR(128) and nullability is NOT NULL.
@p_Include_BlankAsNotFilled - The data type used for this input parameter is BIT, nullability is NOT NULL, and either 0 or 1 must be given. 0 is the default and means OFF; 1 is ON (blank entries will be considered as not filled data).
Output Columns
There are two output columns; both are non-nullable.
[Column Name] - The data type used for this output column is sysname and nullability is NOT NULL. All the column names of the given table are returned as row values.
[Fill Rate (%)] - The data type used for this output column is DECIMAL(5,2) and nullability is NOT NULL. Values from 0.00 to 100.00 are returned against the respective column names.
Info regarding the stored procedure
Created the stored procedure named 'Get_FillRate'.
NOCOUNT is set to ON to suppress the "rows affected" messages.
TRY/CATCH blocks are added for error handling.
The TRANSACTION ISOLATION LEVEL is set to READ UNCOMMITTED so uncommitted modifications can be read.
Local copies of the parameters are used to avoid parameter sniffing.
Some handling is done on the table name input parameter to support formats the user might type, such as '.table_name','..table_name','...table_name','table_name','[table_name]','dbo.table_name','dbo.[table_name]','[dbo].[table_name]', etc.
Validation is included at the start: when the user passes something other than a valid table name, the stored procedure returns 'Table not exists in this Database' as the error message.
The system tables SYS.OBJECTS and SYS.COLUMNS and the system view INFORMATION_SCHEMA.COLUMNS are used inside the stored procedure.
ORDINAL_POSITION from INFORMATION_SCHEMA.COLUMNS is used to return the result set in the same column order as the table structure.
COLLATION_NAME from INFORMATION_SCHEMA.COLUMNS is used to support the option of treating blanks as not filled (only character columns can contain blanks).
COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS is used to show the final result set with the respective fill rates.
A dynamic query is used to support the dynamic approach, which avoids the challenges of static solutions when the schema changes.
Both Method 1 (dynamic query with a WHILE loop) and Method 2 (dynamic query with UNION ALL) produce the same result set and functionality, but metrics such as CPU time, elapsed time, and logical reads are better in Method 2.
Method 1 - With the use of WHILE Loop
CREATE OR ALTER PROCEDURE [dbo].[Get_FillRate]
(
@p_TableName NVARCHAR(128),
@p_Include_BlankAsNotFilled BIT = 0 -- 0-OFF(Default); 1-ON(Blank As Not Filled Data)
)
AS
BEGIN
BEGIN TRY
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
--Parameter Sniffing
DECLARE @TableName NVARCHAR(128),
@Include_BlankAsNotFilled BIT,
@ColumnName NVARCHAR(128),
@R_NO INT,
@DataType_Field BIT,
@i INT, --Iteration
@RESULT NVARCHAR(MAX);
SELECT @TableName = @p_TableName,
@Include_BlankAsNotFilled = @p_Include_BlankAsNotFilled,
@i = 1;
--To Support some of the table formats that user typing.
SELECT @TableName =REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@TableName,'[',''),']',''),'dbo.',''),'...',''),'..',''),'.','');
--validation
IF NOT EXISTS(SELECT 1 FROM SYS.OBJECTS WHERE [TYPE]='U' AND [NAME]=@TableName )
BEGIN
SELECT Result = 1 , Reason ='Table not exists in this Database' ;
RETURN 1;
END;
--dropping temp table if exists - for debugging purpose
IF OBJECT_ID('TempDb..#Temp') IS NOT NULL
DROP TABLE #Temp;
IF OBJECT_ID('TempDb..#Columns') IS NOT NULL
DROP TABLE #Columns;
--temp table creations
CREATE TABLE #Temp
(
[R_NO] INT NOT NULL,
[ColumnName] NVARCHAR(128) NOT NULL,
[FillRate] DECIMAL(5,2) NOT NULL
PRIMARY KEY CLUSTERED (ColumnName)
);
CREATE TABLE #Columns
(
[R_NO] INT NOT NULL,
[Name] [sysname] NOT NULL,
[DataType_Field] BIT NOT NULL
PRIMARY KEY CLUSTERED ([Name])
);
INSERT INTO #Columns ([R_NO],[Name],[DataType_Field])
SELECT
COLUMN_ID,
[Name],
IIF(collation_name IS NULL,0,1)
FROM SYS.COLUMNS WHERE OBJECT_ID = OBJECT_ID(@TableName);
WHILE @i <= ( SELECT MAX(R_NO) FROM #Columns) --Checking of Iteration till total number of columns
BEGIN
SELECT @DataType_Field=DataType_Field,@ColumnName=[Name],@R_NO=[R_NO] FROM #Columns WHERE R_NO = @i;
SET @RESULT =
'INSERT INTO #Temp ([R_NO],[ColumnName], [FillRate]) ' +
'SELECT ' + QUOTENAME(@R_NO,CHAR(39)) + ',
''' + @ColumnName + ''',
CAST((100*(SUM(
CASE WHEN ' +
CASE
WHEN @Include_BlankAsNotFilled = 0
THEN '[' + @ColumnName + '] IS NOT NULL'
WHEN @DataType_Field = 0
THEN '[' + @ColumnName + '] IS NOT NULL'
ELSE 'ISNULL([' + @ColumnName + '],'''')<>'''' ' END +
' THEN 1 ELSE 0 END)*1.0 / COUNT(*)))
AS DECIMAL(5,2))
FROM ' + @TableName;
--PRINT(@RESULT); --for debug purpose
EXEC(@RESULT);
SET @i += 1; -- Incrementing Iteration Count
END;
--Final Result Set
SELECT
ColumnName AS [Column Name],
FillRate AS [Fill Rate (%)]
FROM #TEMP
ORDER BY [R_NO];
RETURN 0;
END TRY
BEGIN CATCH --error handling even it is fetching stored procedure
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
RETURN 1;
END CATCH;
END;
Execute this stored procedure (Method 1) by passing the table name as shown below.
Execute like below if we need to consider only NULL values as not filled:
EXEC [Get_FillRate] @p_TableName='#TestEmp',@p_Include_BlankAsNotFilled=0;
Execute like below if we need to consider both NULL values and empty/blank values as not filled:
EXEC [Get_FillRate] @p_TableName='#TestEmp',@p_Include_BlankAsNotFilled=1;
Method 1 -Output
Method 2 - With the use of UNION ALL
CREATE OR ALTER PROCEDURE [dbo].[Get_FillRate]
(
@p_TableName NVARCHAR(128),
@p_Include_BlankAsNotFilled BIT = 0 -- 0-OFF(Default); 1-ON(Blank As Not Filled Data)
)
AS
BEGIN
BEGIN TRY
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
--Parameter Sniffing
DECLARE @TableName NVARCHAR(128),
@Include_BlankAsNotFilled BIT,
@RESULT NVARCHAR(MAX);
SELECT @TableName = @p_TableName,
@Include_BlankAsNotFilled = @p_Include_BlankAsNotFilled,
@RESULT = '';
--To Support some of the table formats that user typing.
SELECT @TableName =REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@TableName,'[',''),']',''),'dbo.',''),'...',''),'..',''),'.','');
--validation
IF NOT EXISTS(SELECT 1 FROM SYS.OBJECTS WHERE [TYPE]='U' AND [NAME]=@TableName )
BEGIN
SELECT Result = 1 , Reason ='Table not exists in this Database' ;
RETURN 1;
END;
--dropping temp table if exists - for debugging purpose
IF OBJECT_ID('TempDb..#Columns') IS NOT NULL
DROP TABLE #Columns;
--temp table creations
CREATE TABLE #Columns
(
[ORDINAL_POSITION] INT NOT NULL,
[COLUMN_NAME] [sysname] NOT NULL,
[DataType_Field] BIT NOT NULL,
[TABLE_NAME] [sysname] NOT NULL
PRIMARY KEY CLUSTERED ([ORDINAL_POSITION],[COLUMN_NAME])
);
INSERT INTO #Columns ([ORDINAL_POSITION],[COLUMN_NAME],[DataType_Field],[TABLE_NAME])
SELECT
[ORDINAL_POSITION],
[COLUMN_NAME],
CASE WHEN COLLATION_NAME IS NOT NULL THEN 1 ELSE 0 END,
[TABLE_NAME]
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME =@tablename; --Using System_View
--Final Result Set
SELECT @RESULT = @RESULT+ N'SELECT '''+C.COLUMN_NAME+''' AS [Column Name],
CAST((100*(SUM(
CASE WHEN ' +
CASE
WHEN @include_blankasnotfilled = 0
THEN '[' + C.COLUMN_NAME + '] IS NOT NULL'
WHEN C.[DataType_Field]=0
THEN '[' + C.COLUMN_NAME + '] IS NOT NULL'
ELSE 'ISNULL([' + C.COLUMN_NAME + '],'''')<>'''' ' END +
' THEN 1 ELSE 0 END)*1.0 / COUNT(*)))
AS DECIMAL(5,2)) AS [Fill Rate (%)]
FROM '+C.TABLE_NAME+' UNION ALL '
FROM #Columns C;
SET @RESULT=LEFT(@RESULT,LEN(@RESULT)-10); --To Omit 'Last UNION ALL '.
--PRINT(@RESULT); --for debug purpose
EXEC(@RESULT);
RETURN 0;
END TRY
BEGIN CATCH --error handling even it is fetching stored procedure
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
RETURN 1;
END CATCH;
END;
Execute this stored procedure (Method 2) by passing the table name as shown below.
Execute like below if we need to consider only NULL values as not filled:
EXEC [Get_FillRate] @p_TableName='#TestEmp',@p_Include_BlankAsNotFilled=0;
Execute like below if we need to consider both NULL values and empty/blank values as not filled:
EXEC [Get_FillRate] @p_TableName='#TestEmp',@p_Include_BlankAsNotFilled=1;
Method 2 -Output
Metrics difference between Method 1 and Method 2
The four metrics below were taken into consideration to compare Method 1 with Method 2:
No. of Query Sets in Exec Query Plan
Total CPU Time (in ms)
Total Elapsed Time (in ms)
Total Logical Reads
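If you want to reproduce those metrics yourself, a simple sketch (using the demo temp table from above) is to wrap the calls in STATISTICS output and read the Messages tab:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC [Get_FillRate] @p_TableName='#TestEmp',@p_Include_BlankAsNotFilled=0;
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;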
In conclusion, we have seen how to find the fill rate of a table using T-SQL queries that can run on both Azure and on-premises SQL databases. This helps us take business decisions effectively and immediately.
I wrote a stored procedure that gets:
"SearchKeys" - keys to search separated by ',' "key1,key2"
"ToSearch" - Tables to search in separated by ',' with colums after ':' separated by '.' "table1:column1.column2,table2:column1.column2"
At the end, the procedure returns a table with the name of the table and the row id where the key was found.
Here is the code:
--Search keys in tables
CREATE PROCEDURE [dbo].[Search_All]
(
@SearchKeys nvarchar(50), --Keys to search separated by ','
@ToSearch varchar(200) --Tables to search in separated by ',' with columns after ':' separated by '.'
)
AS
BEGIN
--create table with found values
CREATE TABLE #Results (TargetId int, DBName varchar(20))
--Split SearchKeys to Keys
WHILE LEN(@SearchKeys) > 0
BEGIN
DECLARE @Key NVARCHAR(25)
IF CHARINDEX(',',@SearchKeys) > 0
SET @Key = SUBSTRING(@SearchKeys,0,CHARINDEX(',',@SearchKeys))
ELSE
BEGIN
SET @Key = @SearchKeys
SET @SearchKeys = ''
END
--Split ToSearch to Tables
WHILE LEN(@ToSearch) > 0
BEGIN
DECLARE @TableAndColums VARCHAR(200)
IF CHARINDEX(',',@ToSearch) > 0
SET @TableAndColums = SUBSTRING(@ToSearch,0,CHARINDEX(',',@ToSearch))
ELSE
BEGIN
SET @TableAndColums = @ToSearch
SET @ToSearch = ''
END
SET @ToSearch = REPLACE(@ToSearch,@TableAndColums + ',' , '')
--Split #TableAndColums to Table and Colums
--Select Table
DECLARE @Table VARCHAR(25)
SET @Table = SUBSTRING(@TableAndColums,0,CHARINDEX(':',@TableAndColums))
SET @TableAndColums = REPLACE(@TableAndColums,@Table + ':' , '')
--Split to Colums
WHILE LEN(@TableAndColums) > 0
BEGIN
DECLARE @Column VARCHAR(25)
IF CHARINDEX('.',@TableAndColums) > 0
SET @Column = SUBSTRING(@TableAndColums,0,CHARINDEX('.',@TableAndColums))
ELSE
BEGIN
SET @Column = @TableAndColums
SET @TableAndColums = ''
END
BEGIN
--insert result in to #Results table
INSERT INTO #Results
EXEC
(
'SELECT ' + @Table + '.Id AS ''TargetId'', '''+@Table+''' AS ''DBName''
FROM ' + @Table +
' WHERE ' + @Column + ' LIKE N''%' + @Key + '%'''
)
END
SET @TableAndColums = REPLACE(@TableAndColums,@Column + '.' , '')
END
END
SET @SearchKeys = REPLACE(@SearchKeys,@Key + ',' , '')
END
--return found values
SELECT DISTINCT TargetId , DBname FROM #Results
END
For some reason it searches only for the first key, ignoring all the remaining keys. I cannot figure out why this is happening. Please help!
The very first thing I'll warn you about here is your procedure is wide open to injection attack. Injection attack is in and of itself a broad topic. If you're interested, I suggest reading this article. If you do absolutely need this type of interface (i.e. you can't use static typed SQL or something like Entity Framework to take care of the queries for you), you must must MUST make sure that any strings being executed at run time (e.g. @column, @table, @key) are parametrized or bracketed. This procedure, as written, will also fail when an inputted table does not contain an ID column or when an inputted column doesn't exist.
http://www.sommarskog.se/dynamic_sql.html
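To give a rough idea (just a sketch reusing the @Table, @Column, @Key names and the #Results table from the procedure above, not a drop-in rewrite), the dynamic statement can be bracketed and parameterized like this:
DECLARE @safeSql NVARCHAR(MAX) =
      N'SELECT ' + QUOTENAME(@Table) + N'.Id AS TargetId, '
    + N'''' + REPLACE(@Table, N'''', N'''''') + N''' AS DBName '
    + N'FROM ' + QUOTENAME(@Table)
    + N' WHERE ' + QUOTENAME(@Column) + N' LIKE N''%'' + @k + N''%'''
INSERT INTO #Results
EXEC sp_executesql @safeSql, N'@k NVARCHAR(25)', @k = @Key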
In terms of how you're doing string splitting, I'd look at the article below. While there's no way to eliminate the need to loop over each table, by putting all your search strings into a table using a string splitting function like the ones mentioned in this article, you can search all search conditions on a single table at once. Something like this:
select *
from #SearchConditions a
inner join dbo.TargetTable b
on b.Name like '%' + a.SearchKey + '%'
http://www.sqlservercentral.com/articles/Tally+Table/72993/
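On SQL Server 2016 and later, the built-in STRING_SPLIT function could populate such a search-conditions table directly, for example (a sketch assuming the @SearchKeys parameter from the question):
SELECT LTRIM(RTRIM(value)) AS SearchKey
INTO #SearchConditions
FROM STRING_SPLIT(@SearchKeys, ',')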
SQL Server Version - 2008 R2
I am working on evaluating a DMS solution, with an objective of taking over maintenance. The original solution has one central database, that has data pertaining to the manufacturer. It also has one database for each dealer, which means there are a lot of cross database dependencies.
The problems:
No DB documentation
No code comments
Lots of heaps
No standard object naming conventions
The central DB has 460+ tables and 900+ SProcs, in addition to other objects
Each dealer DB has 370+ tables and 2350+ SProcs, in addition to other objects
As a first step, I am recommending a complete clean-up of the DB, for which it is critical to understand object dependencies, including cross database dependencies. I tried using Red Gate's solution, but the output is way too voluminous. All I want is a list of objects in the databases that do not have any dependencies - they neither depend on other objects, nor are there any objects that depend on them.
Here is the script I have used to get a list of dependencies:
SELECT
DB_NAME() referencing_database_name,
OBJECT_NAME (referencing_id) referencing_entity_name,
ISNULL(referenced_schema_name,'dbo') referenced_schema_name,
referenced_entity_name,
ao.type_desc referenced_entity_type,
ISNULL(referenced_database_name,DB_NAME()) referenced_database_name
FROM sys.sql_expression_dependencies sed
JOIN sys.all_objects ao
ON sed.referenced_entity_name = ao.name
I will be creating a table - Dependencies - into which I will be inserting this result set from each DB. As a next step, I will also be creating another table - AllObjects- which will contain a list of all objects in the Databases. Here is the script to do this:
SELECT
DB_NAME() DBName,
name,
type_desc
FROM sys.all_objects
WHERE type_desc IN
(
'VIEW',
'SQL_TABLE_VALUED_FUNCTION',
'SQL_STORED_PROCEDURE',
'SQL_INLINE_TABLE_VALUED_FUNCTION',
'USER_TABLE',
'SQL_SCALAR_FUNCTION'
)
Now, a list of names from this table that do not appear in the referenced_entity_name column of the Dependencies table should give the list of objects that I am looking for.
SELECT
AO.DBName,
AO.name,
AO.type_desc
FROM AllObjects AO
LEFT OUTER JOIN Dependencies D ON
D.referenced_database_name = AO.DBName AND
D.referenced_entity_name = AO.name AND
D.referenced_entity_type = AO.type_desc
WHERE
D.referenced_database_name IS NULL AND
D.referenced_entity_name IS NULL AND
D.referenced_entity_type IS NULL
Now the questions:
Some object dependencies seem to be missing in the output. What am I missing?
How do I validate that my findings are correct? I mean, is there a different way to do this, so I can compare the results and double check?
Thanks in advance,
Raj
You can compare your results to the ones that the following script finds.
Here is the full article
CREATE PROCEDURE [dbo].[get_crossdatabase_dependencies] AS
SET NOCOUNT ON;
CREATE TABLE #databases(
database_id int,
database_name sysname
);
INSERT INTO #databases(database_id, database_name)
SELECT database_id, [name]
FROM sys.databases
WHERE 1 = 1
AND [state] <> 6 /* ignore offline DBs */
AND database_id > 4; /* ignore system DBs */
DECLARE
@database_id int,
@database_name sysname,
@sql varchar(max);
CREATE TABLE #dependencies(
referencing_database varchar(max),
referencing_schema varchar(max),
referencing_object_name varchar(max),
referenced_server varchar(max),
referenced_database varchar(max),
referenced_schema varchar(max),
referenced_object_name varchar(max)
);
WHILE (SELECT COUNT(*) FROM #databases) > 0 BEGIN
SELECT TOP 1 @database_id = database_id,
@database_name = database_name
FROM #databases;
SET @sql = 'INSERT INTO #dependencies select
DB_NAME(' + convert(varchar,@database_id) + '),
OBJECT_SCHEMA_NAME(referencing_id,'
+ convert(varchar,@database_id) +'),
OBJECT_NAME(referencing_id,' + convert(varchar,@database_id) + '),
referenced_server_name,
ISNULL(referenced_database_name, db_name('
+ convert(varchar,@database_id) + ')),
referenced_schema_name,
referenced_entity_name
FROM ' + quotename(@database_name) + '.sys.sql_expression_dependencies';
EXEC(@sql);
DELETE FROM #databases WHERE database_id = @database_id;
END;
SET NOCOUNT OFF;
SELECT * FROM #dependencies;
Oh, MS made a good effort at detecting cross-database dependencies with sys.sql_expression_dependencies, but I've seen it miss things before. In your case, I'd find an example of a missing dependency, and start backtracking: have you dropped it from your query some how? If so, fix your query. Does sys.sql_expression_dependencies omit a certain class of dependencies? Under what conditions? Is dynamic SQL to blame? etc.
You should also run sp_refreshsqlmodule for each object in sys.sql_modules, and then rerun your code. It forces SQL Server to refresh the dependency info (to the best of its ability).
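A rough sketch of doing that per database (inspect the generated calls before running them; schema-bound modules will refuse to refresh and can simply be skipped):
DECLARE @refresh NVARCHAR(MAX) = N''
SELECT @refresh = @refresh
    + N'EXEC sys.sp_refreshsqlmodule N'''
    + REPLACE(QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + N'.' + QUOTENAME(OBJECT_NAME(object_id)), N'''', N'''''')
    + N''';' + CHAR(10)
FROM sys.sql_modules
PRINT @refresh              -- review first
--EXEC sp_executesql @refresh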
Now, for validation, set up a trace, and listen for event 114, "Audit Schema Object Access Event", plus the starting and completed events for stored procedure and/or RPC calls. Include columns DatabaseName, ParentName, ObjectName, ServerName, SPID and RequestID (for MARS-enabled connections). Maybe some others too. "Audit Schema Object Access Event" happens anytime an object is accessed, so exercise the app while this trace is running, then collate the data using SPID + RequestId and compare it to your results using sys.sql_expression_dependencies. If anything is in the trace data that doesn't appear in your dependencies data, then you've missed something.
If you have to deal with linked servers, I adapted @MilicaMedic's answer to work for cross-server dependencies. I also output column names where available in a dependency.
You can use it like this:
create table #dependencies (
referencing_server nvarchar(128),
referencing_database nvarchar(128),
referencing_schema nvarchar(128),
referencing_object_name nvarchar(128),
referencing_column nvarchar(128),
referenced_server nvarchar(128),
referenced_database nvarchar(128),
referenced_schema nvarchar(128),
referenced_object_name nvarchar(128),
referenced_column nvarchar(128)
);
insert #dependencies
exec crossServerDependencies
'ThisServerName, LinkedServerName, LinkedServerName2, etc'
From there you join it to your AllObjects table as you described in your answer.
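For example, the anti-join might look something like this (a sketch assuming the AllObjects table you described and the #dependencies table populated above):
SELECT ao.DBName, ao.name, ao.type_desc
FROM AllObjects ao
LEFT JOIN #dependencies d
       ON d.referenced_database = ao.DBName
      AND d.referenced_object_name = ao.name
WHERE d.referenced_object_name IS NULL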
My code requires two external functions: "splitString", and "AddBracketsWhenNecessary". You can simplify the former and completely eliminate the latter, as you desire. But I use them for other things so they make it into my implementation. The code for both is at the bottom.
Here is the main procedure:
create procedure crossServerDependencies
@server_names_csv nvarchar(500) = null -- csv list of server names you want to pull dependencies for
as
-- Create output table
if object_id('tempdb..#dependencies') is not null
drop table #dependencies;
create table #dependencies (
referencing_server nvarchar(128),
referencing_database nvarchar(128),
referencing_schema nvarchar(128),
referencing_object_name nvarchar(128),
referencing_column nvarchar(128),
referenced_server nvarchar(128),
referenced_database nvarchar(128),
referenced_schema nvarchar(128),
referenced_object_name nvarchar(128),
referenced_column nvarchar(128)
);
-- Split server csv into table
set @server_names_csv = isnull(@server_names_csv, @@servername);
declare @server_names table (
server_row int,
server_name nvarchar(128),
actuallyExists bit
);
insert @server_names
select server_row = id,
server_name,
actuallyExists = case when sv.name is not null then 1 else 0 end
from dbo.splitString(@server_names_csv, ',') sp
cross apply (select server_name = dbo.AddBracketsWhenNecessary(val)) ap
left join sys.servers sv on sp.val = dbo.AddBracketsWhenNecessary(sv.name);
-- Loop servers
declare
@server_row int = 0,
@server_name nvarchar(50),
@server_exists bit = 0,
@server_is_local bit = 0,
@server_had_some_inserts bit = 0;
while @server_row <= (select max(server_row) from @server_names)
begin
-- Server loop initializations
set @server_row += 1;
set @server_had_some_inserts = 0;
select @server_name = server_name,
@server_exists = actuallyExists
from @server_names
where server_row = @server_row;
set @server_is_local =
case when @server_name = dbo.AddBracketsWhenNecessary(@@servername) then 1 else 0 end;
-- Handle non-existent server (and prevent sql injection)
if @server_exists = 0
begin
print
'"' + #server_name + '" does not exist. ' +
'Please check your spelling and/or access to view the linked server ' +
'(running under ' + user_name() + ').';
continue;
end
-- Get database list
if object_id('tempdb..#databases') is not null
drop table #databases;
create table #databases (
rownum int identity(1,1),
database_id int,
database_name nvarchar(128)
);
declare @sql nvarchar(max) = '
select database_id, [name]
from master.sys.databases
where state <> 6 -- ignore offline dbs
and database_id > 4 -- ignore system dbs
and has_dbaccess([name]) = 1
and [name] not in (''ReportServer'', ''ReportServerTempDB'')
';
if @server_is_local = 0
begin
set @sql = replace(@sql, '''', '''''');
set @sql = 'select * from openquery( @server_name, ''' + @sql + ''')';
end
set @sql = 'insert #databases (database_id, database_name)' + @sql;
set @sql = replace(@sql, '@server_name', @server_name);
exec (@sql);
delete #databases
where database_name = 'ReportServer';
-- Loop databases
declare @rowNum int = 0;
while @rowNum <= (select max(rownum) from #databases)
begin
-- Database loop initializations
set @rowNum += 1;
declare
@database_id nvarchar(max),
@database_name nvarchar(max);
select @database_id = database_id,
@database_name = dbo.AddBracketsWhenNecessary(database_name)
from #databases
where rownum = @rowNum;
-- Get object dependency info
set @sql = '
with
getTableColumnIds as (
select table_id = o.object_id,
table_name = o.name,
column_id = c.column_id,
column_name = c.name
from @database_name.sys.objects o
join @database_name.sys.all_columns c on o.object_id = c.object_id
)
@insertStatement
select ''@server_name'',
db_name(@database_id),
object_schema_name(referencing_id, @database_id),
object_name(referencing_id, @database_id),
referencing_column = ringTCs.column_name,
isnull(referenced_server_name, ''@server_name''),
isnull(referenced_database_name, db_name(@database_id)),
isnull(referenced_schema_name, ''dbo''),
referenced_entity_name,
referenced_column = redTCs.column_name
from @database_name.sys.sql_expression_dependencies d
left join getTableColumnIds ringTCs
on d.referencing_id = ringTCs.table_id
and d.referencing_minor_id = ringTCs.column_id
left join getTableColumnIds redTCs
on d.referenced_id = redTCs.table_id
and d.referenced_minor_id = redTCs.column_id
';
set @sql = replace(@sql, '@database_id', @database_id);
set @sql = replace(@sql, '@database_name', @database_name);
if @server_is_local = 0
begin
set @sql = replace(@sql, '''', '''''');
set @sql = replace(@sql, '@insertStatement', '');
set @sql = 'select * from openquery(@server_name, ''' + @sql + ''')';
end
set @sql = replace(@sql, '@insertStatement', 'insert #dependencies ');
set @sql = replace(@sql, '@server_name', @server_name);
exec (@sql);
-- Database loop terminations
if @@rowcount > 0
set @server_had_some_inserts = 1;
end -- database loop
-- server loop terminations
if @server_had_some_inserts = 0
begin
declare @remote_user_name nvarchar(255);
select @remote_user_name = remote_name
from sys.linked_logins li
join sys.servers s on li.server_id = s.server_id
where remote_name is not null
and s.name = 'sisag'
print (
'No dependencies found for ' + @server_name + '. ' +
'If this is unexpected, you may need to run "grant view any definition to ' +
'[' + isnull(@remote_user_name, '?') + ']" ' +
'on the remote server.'
);
end
end -- server loop
-- Terminate
select * from #dependencies
The code for AddBracketsWhenNecessary:
create function AddBracketsWhenNecessary (
@objectName nvarchar(250)
)
returns nvarchar(250) as
begin
if left(@objectName, 1) = '[' and right(@objectName, 1) = ']'
return @objectName;
declare @hasInvalidCharacter bit;
select @hasInvalidCharacter = max(isInvalid)
from dbo.splitString(@objectName, null) chars
cross apply (select
isLetter = patindex('[a-z,_]', val),
isNumber = PATINDEX('[0-9]', val)
) getCharType
cross apply (select
isInvalid =
case
when isLetter = 1 then 0
when isNumber = 1 and not chars.id = 1 then 0
else 1
end
) getValidity
return
case when @hasInvalidCharacter = 1 then '[' else '' end
+ @objectName
+ case when @hasInvalidCharacter = 1 then ']' else '' end;
end
And finally, my splitter function (but see Arnold Fribble here if you want a simpler version, or use the built-in function if you have SQL Server 2016 or above):
create function splitString (
@stringToSplit nvarchar(max),
@delimiter nvarchar(50)
)
returns table as
return
with
split_by_delimiter as (
select id = 1,
start = 1,
stop = convert(int,
charindex(@delimiter, @stringToSplit)
)
union all
select id = id + 1,
start = newStart,
stop = convert(int,
charindex(@delimiter, @stringToSplit, newStart)
)
from split_by_delimiter
cross apply (select newStart = stop + len(@delimiter)) ap
where Stop > 0
),
split_into_characters as (
select id = 1,
chr = left(@stringToSplit,1)
union all
select id = id + 1,
chr = substring(@stringToSplit, ID + 1, 1)
from split_into_characters
where id < len(@stringToSplit)
)
select id,
val =
ltrim(rtrim(substring(
@stringToSplit,
start,
case
when stop > 0 then stop - start
else len(@stringtosplit)
end
)))
from split_by_delimiter
where len(@delimiter) > 0
union all
select id,
val = chr
from split_into_characters
where @delimiter = ''
or @delimiter is null
I had to make some small changes from the real code I use, so if there are any reference errors, please let me know in the comments and I'll edit.
A Query I often use to find the tables used from other databases is the following:
SELECT OBJECT_NAME (referencing_id) AS referencing_object, referenced_database_name,
referenced_schema_name, referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_database_name IS NOT NULL
AND is_ambiguous = 0;
This gives you all tables used in the stored procedures / views that originate from this database, including tables in other databases.
source
I upgraded one of the answers, adding the referencing/referenced ids and the referencing/referenced types (table, view, etc.).
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[get_crossdatabase_dependencies] AS
SET NOCOUNT ON;
CREATE TABLE #databases(
database_id int,
database_name sysname
);
INSERT INTO #databases(database_id, database_name)
SELECT database_id, [name]
FROM sys.databases
WHERE 1 = 1
AND [state] <> 6 /* ignore offline DBs */
AND database_id NOT IN (4, 5, 6, 7, 11, 13, 14); /* ignore system DBs */
DECLARE
@database_id int,
@database_name sysname,
@sql varchar(max);
CREATE TABLE #dependencies(
referencing_id int,
referencing_database varchar(max),
referencing_schema varchar(max),
referencing_object_name varchar(max),
referencing_type varchar(max),
referenced_id int,
referenced_server varchar(max),
referenced_database varchar(max),
referenced_schema varchar(max),
referenced_object_name varchar(max),
referenced_type varchar(max),
);
WHILE (SELECT COUNT(*) FROM #databases) > 0 BEGIN
SELECT TOP 1 @database_id = database_id,
@database_name = database_name
FROM #databases;
SET @sql = 'INSERT INTO #dependencies select
referencing_id,
DB_NAME(' + convert(varchar,@database_id) + '),
OBJECT_SCHEMA_NAME(referencing_id,'
+ convert(varchar,@database_id) +'),
OBJECT_NAME(referencing_id,' + convert(varchar,@database_id) + '),
CASE
WHEN OBJECTPROPERTY(referencing_id, ''IsTable'') = 1 THEN ''Table''
WHEN OBJECTPROPERTY(referencing_id, ''IsView'') = 1 THEN ''View''
WHEN OBJECTPROPERTY(referencing_id, ''IsProcedure'') = 1 THEN ''Procedure''
WHEN OBJECTPROPERTY(referencing_id, ''IsTableFunction'') = 1 THEN ''Table-valued Function''
ELSE ''Unknown''
END AS referencing_type,
referenced_id,
referenced_server_name,
ISNULL(referenced_database_name, db_name('
+ convert(varchar,@database_id) + ')),
referenced_schema_name,
referenced_entity_name,
CASE
WHEN OBJECTPROPERTY(referenced_id, ''IsTable'') = 1 THEN ''Table''
WHEN OBJECTPROPERTY(referenced_id, ''IsView'') = 1 THEN ''View''
WHEN OBJECTPROPERTY(referenced_id, ''IsProcedure'') = 1 THEN ''Procedure''
WHEN OBJECTPROPERTY(referenced_id, ''IsTableFunction'') = 1 THEN ''Table-valued Function''
ELSE ''Unknown''
END AS referenced_type
FROM ' + quotename(@database_name) + '.sys.sql_expression_dependencies
ORDER BY
referencing_type';
EXEC(@sql);
DELETE FROM #databases WHERE database_id = @database_id;
END;
SET NOCOUNT OFF;
SELECT * FROM #dependencies;
I have SQL Server 2008, SQL Server Management Studio.
I need to select data from a table in one database and insert into another table in another database.
How can I convert the returned results from my select into INSERT INTO ...?
Clarification from comments: While I believe this could be solved by an INSERT INTO SELECT or SELECT INTO, I do need to generate INSERT INTO ....
Here is another method, which may be easier than installing plugins or external tools in some situations:
Do a select [whatever you need] INTO temp.table_name from [... etc ...].
Right-click on the database in the Object Explorer => Tasks => Generate Scripts
Select temp.table_name in the "Choose Objects" screen, click Next.
In the "Specify how scripts should be saved" screen:
Click Advanced, find the "Types of data to Script" property, select "Data only", close the advanced properties.
Select "Save to new query window" (unless you have thousands of records).
Click Next, wait for the job to complete, observe the resulting INSERT statements appear in a new query window.
Use Find & Replace to change all [temp.table_name] to [your_table_name].
drop table [temp.table_name].
In SSMS:
Right click on the database > Tasks > Generate Scripts
Next
Select "Select specific database objects" and check the table you want scripted, Next
Click Advanced > in the list of options, scroll down to the bottom and look for the "Types of data to script" and change it to "Data Only" > OK
Select "Save to new query window" > Next > Next > Finish
All 180 rows now written as 180 insert statements!
Native method:
For example, if you have a table
Users(Id, name)
You can do this:
select 'insert into Users(Id, name) values(' + cast(Id as varchar(10)) + ', ''' + name + ''')' from Users
1- Explanation of the script
A) The syntax for inserting data into a table is as below:
Insert into table(col1,col2,col3,col4,col5)
-- the column list above is built in the variable @CSV_COLUMN
values(col1 data in quotes, col2 data in quotes, ..., col5 data in quotes)
-- the quoted column data above is built in the variable @QUOTED_DATA
C) To get that data from an existing table, we have to write the select query in such a way that its output is already in the form of the script above.
D) Then, finally, I have concatenated the above variables to create the final script that generates the insert scripts on execution:
E)
@TEXT='SELECT ''INSERT INTO '+@TABLE_NAME+'('+@CSV_COLUMN+')VALUES('''+'+'+SUBSTRING(@QUOTED_DATA,1,LEN(@QUOTED_DATA)-5)+'+'+''')'''+' Insert_Scripts FROM '+@TABLE_NAME + @FILTER_CONDITION
F) And finally the above query is executed with EXECUTE(@TEXT).
G) The QUOTENAME() function is used to wrap the column data inside quotes.
H) ISNULL is used because if any row has NULL data for any column, the concatenation fails and returns NULL; ISNULL avoids that (see the small example after this list).
I) And I created the stored procedure sp_generate_insertscripts for this, with two parameters:
1- Just put the table name for which you want the insert script
2- A filter condition if you want specific results
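For instance, this is how QUOTENAME and ISNULL behave here (illustrative values only):
SELECT QUOTENAME('O''Brien', '''')            -- returns 'O''Brien', i.e. the value wrapped and escaped as a string literal
SELECT ISNULL(QUOTENAME(NULL, ''''), 'NULL')  -- returns the text NULL, so the generated script shows NULL instead of breaking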
----------Final Procedure To generate Script------
CREATE PROCEDURE sp_generate_insertscripts
(
@TABLE_NAME VARCHAR(MAX),
@FILTER_CONDITION VARCHAR(MAX)=''
)
AS
BEGIN
SET NOCOUNT ON
DECLARE @CSV_COLUMN VARCHAR(MAX),
@QUOTED_DATA VARCHAR(MAX),
@TEXT VARCHAR(MAX)
SELECT @CSV_COLUMN=STUFF
(
(
SELECT ',['+ NAME +']' FROM sys.all_columns
WHERE OBJECT_ID=OBJECT_ID(@TABLE_NAME) AND
is_identity!=1 FOR XML PATH('')
),1,1,''
)
SELECT @QUOTED_DATA=STUFF
(
(
SELECT ' ISNULL(QUOTENAME('+NAME+','+QUOTENAME('''','''''')+'),'+'''NULL'''+')+'','''+'+' FROM sys.all_columns
WHERE OBJECT_ID=OBJECT_ID(@TABLE_NAME) AND
is_identity!=1 FOR XML PATH('')
),1,1,''
)
SELECT @TEXT='SELECT ''INSERT INTO '+@TABLE_NAME+'('+@CSV_COLUMN+')VALUES('''+'+'+SUBSTRING(@QUOTED_DATA,1,LEN(@QUOTED_DATA)-5)+'+'+''')'''+' Insert_Scripts FROM '+@TABLE_NAME + @FILTER_CONDITION
--SELECT @CSV_COLUMN AS CSV_COLUMN,@QUOTED_DATA AS QUOTED_DATA,@TEXT TEXT
EXECUTE (@TEXT)
SET NOCOUNT OFF
END
SSMS Toolpack (which is FREE as in beer) has a variety of great features - including generating INSERT statements from tables.
Update: for SQL Server Management Studio 2012 (and newer), SSMS Toolpack is no longer free, but requires a modest licensing fee.
It's possible to do via Visual Studio SQL Server Object Explorer.
You can click "View Data" from context menu for necessary table, filter results and save result as script.
Using visual studio, do the following
Create a project of type SQL Server-->SQL Server Database Project
open the sql server explorer CTL-\ , CTL-S
add a SQL Server by right clicking on the SQL SERVER icon. Select ADD NEW SERVER
navigate down to the table you are interested in
right click--> VIEW DATA
Click the top left cell to highlight everything (ctl-A doesn't seem to work)
Right Click --> Script
This is fabulous. I have tried everything listed above over the years. I know there is a tool out there that will do this and much more, can't think of the name of it. But it is very expensive.
Good luck. I just figured this out. Have not tested it extensively w/ text fields etc, but it looks like it gets you a long ways down the road.
Greg
Create a separate table using a SELECT ... INTO statement
For example
Select * into Test_123 from [dbo].[Employee] where Name like '%Test%'
Go to the Database
Right Click the Database
Click on Generate Script
Select your table
Select the advanced options and set the attribute "Types of data to script" to "Data Only"
Select the option to open the script in a new query window
SQL Server will generate the script for you
This is a more versatile solution (that can do a little more than the question asks), and can be used in a query window without having to create a new stored proc - useful in production databases for instance where you don't have write access.
To use the code, please modify according to the in line comments which explain its usage. You can then just run this query in a query window and it will print the INSERT statements you require.
SET NOCOUNT ON
-- Set the ID you wish to filter on here
DECLARE @id AS INT = 123
DECLARE @tables TABLE (Name NVARCHAR(128), IdField NVARCHAR(128), IdInsert BIT, Excluded NVARCHAR(128))
-- Add any tables you wish to generate INSERT statements for here. The fields are as thus:
-- Name: Your table name
-- IdField: The field on which to filter the dataset
-- IdInsert: If the primary key field is to be included in the INSERT statement
-- Excluded: Any fields you do not wish to include in the INSERT statement
INSERT INTO @tables (Name, IdField, IdInsert, Excluded) VALUES ('MyTable1', 'Id', 0, 'Created,Modified')
INSERT INTO @tables (Name, IdField, IdInsert, Excluded) VALUES ('MyTable2', 'Id', 1, 'Created,Modified')
DECLARE @numberTypes TABLE (sysId TINYINT)
-- This will ensure INT and BIT types are not surrounded with quotes in the
-- resultant INSERT statement, but you may need to add more (from sys.types)
INSERT @numberTypes(SysId) VALUES(56),(104)
DECLARE @rows INT = (SELECT COUNT(*) FROM @tables)
DECLARE @cnt INT = 1
DECLARE @results TABLE (Sql NVARCHAR(4000))
WHILE @cnt <= @rows
BEGIN
DECLARE @tablename AS NVARCHAR(128)
DECLARE @idField AS NVARCHAR(128)
DECLARE @idInsert AS BIT
DECLARE @excluded AS NVARCHAR(128)
SELECT
@tablename = Name,
@idField = IdField,
@idInsert = IdInsert,
@excluded = Excluded
FROM (SELECT *, ROW_NUMBER() OVER(ORDER BY (SELECT 1)) AS RowId FROM @tables) t WHERE t.RowId = @cnt
DECLARE @excludedFields TABLE (FieldName NVARCHAR(128))
DECLARE @xml AS XML = CAST(('<X>' + REPLACE(@excluded, ',', '</X><X>') + '</X>') AS XML)
INSERT INTO @excludedFields SELECT N.value('.', 'NVARCHAR(128)') FROM @xml.nodes('X') AS T(N)
DECLARE @setIdentity NVARCHAR(128) = 'SET IDENTITY_INSERT ' + @tablename
DECLARE @execsql AS NVARCHAR(4000) = 'SELECT ''' + CASE WHEN @idInsert = 1 THEN @setIdentity + ' ON' + CHAR(13) ELSE '' END + 'INSERT INTO ' + @tablename + ' ('
SELECT @execsql = @execsql +
STUFF
(
(
SELECT CASE WHEN NOT EXISTS(SELECT * FROM @excludedFields WHERE FieldName = name) THEN ', ' + name ELSE '' END
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.' + @tablename)
FOR XML PATH('')
), 1, 2, ''
) +
')' + CHAR(13) + 'VALUES (' +
STUFF
(
(
SELECT
CASE WHEN NOT EXISTS(SELECT * FROM @excludedFields WHERE FieldName = name) THEN
''', '' + ISNULL(' +
CASE WHEN EXISTS(SELECT * FROM @numberTypes WHERE SysId = system_type_id) THEN '' ELSE ''''''''' + ' END +
'CAST(' + name + ' AS VARCHAR)' +
CASE WHEN EXISTS(SELECT * FROM @numberTypes WHERE SysId = system_type_id) THEN '' ELSE ' + ''''''''' END +
', ''NULL'') + '
ELSE ''
END
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.' + @tablename)
FOR XML PATH('')
), 1, 3, ''
) +
''')' + CASE WHEN @idInsert = 1 THEN CHAR(13) + @setIdentity + ' OFF' ELSE '' END +
''' FROM ' + @tablename + ' WHERE ' + @idField + ' = ' + CAST(@id AS VARCHAR)
INSERT @results EXEC (@execsql)
DELETE @excludedFields
SET @cnt = @cnt + 1
END
DECLARE cur CURSOR FOR SELECT Sql FROM @results
OPEN cur
DECLARE @sql NVARCHAR(4000)
FETCH NEXT FROM cur INTO @sql
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @sql
FETCH NEXT FROM cur INTO @sql
END
CLOSE cur
DEALLOCATE cur
You can choose the 'Results to File' option in SSMS, export your select result to a file, make your changes in the result file, and finally use BCP (bulk copy) or a bulk insert to load it into table 1 in database 2 (see the sketch below).
I think for the bulk insert you have to convert the .rpt file to a .csv file.
Hope it will help.
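For example, once the result is saved as a CSV, a T-SQL BULK INSERT along these lines could load it into the second database (paths and names here are placeholders; the bcp command-line utility can do the same from a shell):
BULK INSERT Database2.dbo.Table1
FROM 'C:\exports\result.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 1);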
I had a similar problem, but I needed to be able to create an INSERT statement from a query (with filters etc.)
So I created the following procedure:
CREATE PROCEDURE dbo.ConvertQueryToInsert (@input NVARCHAR(max), @target NVARCHAR(max)) AS BEGIN
DECLARE @fields NVARCHAR(max);
DECLARE @select NVARCHAR(max);
-- Get the defintion from sys.columns and assemble a string with the fields/transformations for the dynamic query
SELECT
@fields = COALESCE(@fields + ', ', '') + '[' + name +']',
@select = COALESCE(@select + ', ', '') + ''''''' + ISNULL(CAST([' + name + '] AS NVARCHAR(max)), ''NULL'')+'''''''
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..'+@input);
-- Run the a dynamic query with the fields from #select into a new temp table
CREATE TABLE #ConvertQueryToInsertTemp (strings nvarchar(max))
DECLARE @stmt NVARCHAR(max) = 'INSERT INTO #ConvertQueryToInsertTemp SELECT '''+ @select + ''' AS [strings] FROM '+@input
exec sp_executesql @stmt
-- Output the final insert statement
SELECT 'INSERT INTO ' + @target + ' (' + @fields + ') VALUES (' + REPLACE(strings, '''NULL''', 'NULL') +')' FROM #ConvertQueryToInsertTemp
-- Clean up temp tables
DROP TABLE #ConvertQueryToInsertTemp
SET @stmt = 'DROP TABLE ' + @input
exec sp_executesql @stmt
END
You can then use it by writing the output of your query into a temp table and running the procedure:
-- Example table
CREATE TABLE Dummy (Id INT, Comment NVARCHAR(50), TimeStamp DATETIME)
INSERT INTO Dummy VALUES (1 , 'Foo', GetDate()), (2, 'Bar', GetDate()), (3, 'Foo Bar', GetDate())
-- Run query and procedure
SELECT * INTO #TempTableForConvert FROM Dummy WHERE Id < 3
EXEC dbo.ConvertQueryToInsert '#TempTableForConvert', 'dbo.Dummy'
Note:
This procedure only casts the values to a string which can cause the data to look a bit different. With DATETIME for example the seconds will be lost.
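If that matters, a small hedged tweak is to convert date/time columns explicitly with a style that keeps the precision, for example:
SELECT CONVERT(NVARCHAR(30), GETDATE(), 121)  -- yyyy-mm-dd hh:mi:ss.mmm, keeps seconds and milliseconds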
I created the following procedure:
if object_id('tool.create_insert', 'P') is null
begin
exec('create procedure tool.create_insert as');
end;
go
alter procedure tool.create_insert(@schema varchar(200) = 'dbo',
@table varchar(200),
@where varchar(max) = null,
@top int = null,
@insert varchar(max) output)
as
begin
declare @insert_fields varchar(max),
@select varchar(max),
@error varchar(500),
@query varchar(max);
declare @values table(description varchar(max));
set nocount on;
-- Get columns
select @insert_fields = isnull(@insert_fields + ', ', '') + c.name,
@select = case type_name(c.system_type_id)
when 'varchar' then isnull(@select + ' + '', '' + ', '') + ' isnull('''''''' + cast(' + c.name + ' as varchar) + '''''''', ''null'')'
when 'datetime' then isnull(@select + ' + '', '' + ', '') + ' isnull('''''''' + convert(varchar, ' + c.name + ', 121) + '''''''', ''null'')'
else isnull(@select + ' + '', '' + ', '') + 'isnull(cast(' + c.name + ' as varchar), ''null'')'
end
from sys.columns c with(nolock)
inner join sys.tables t with(nolock) on t.object_id = c.object_id
inner join sys.schemas s with(nolock) on s.schema_id = t.schema_id
where s.name = @schema
and t.name = @table;
-- If there's no columns...
if @insert_fields is null or @select is null
begin
set @error = 'There''s no ' + @schema + '.' + @table + ' inside the target database.';
raiserror(@error, 16, 1);
return;
end;
set @insert_fields = 'insert into ' + @schema + '.' + @table + '(' + @insert_fields + ')';
if isnull(@where, '') <> '' and charindex('where', ltrim(rtrim(@where))) < 1
begin
set @where = 'where ' + @where;
end
else
begin
set @where = '';
end;
set @query = 'select ' + isnull('top(' + cast(@top as varchar) + ')', '') + @select + ' from ' + @schema + '.' + @table + ' with (nolock) ' + @where;
insert into @values(description)
exec(@query);
set @insert = isnull(@insert + char(10), '') + '--' + upper(@schema + '.' + @table);
select @insert = @insert + char(10) + @insert_fields + char(10) + 'values(' + v.description + ');' + char(10) + 'go' + char(10)
from @values v
where isnull(v.description, '') <> '';
end;
go
Then you can use it that way:
declare @insert varchar(max),
@part varchar(max),
@start int,
@end int;
set @start = 1;
exec tool.create_insert @schema = 'dbo',
@table = 'customer',
@where = 'id = 1',
@insert = @insert output;
-- Print one line to avoid the maximum 8000 characters problem
while len(@insert) > 0
begin
set @end = charindex(char(10), @insert);
if @end = 0
begin
set @end = len(@insert) + 1;
end;
print substring(@insert, @start, @end - 1);
set @insert = substring(@insert, @end + 1, len(@insert) - @end + 1);
end;
The output would be something like that:
--DBO.CUSTOMER
insert into dbo.customer(id, name, type)
values(1, 'CUSTOMER NAME', 'F');
go
If you just want to get a range of rows, use the @top parameter as below:
declare @insert varchar(max),
@part varchar(max),
@start int,
@end int;
set @start = 1;
exec tool.create_insert @schema = 'dbo',
@table = 'customer',
@top = 100,
@insert = @insert output;
-- Print one line to avoid the maximum 8000 characters problem
while len(@insert) > 0
begin
set @end = charindex(char(10), @insert);
if @end = 0
begin
set @end = len(@insert) + 1;
end;
print substring(@insert, @start, @end - 1);
set @insert = substring(@insert, @end + 1, len(@insert) - @end + 1);
end;
You can use SQL Server Integration Services packages, which are specifically designed for import and export operations.
Visual Studio has a project type for developing these packages if you do a full install of SQL Server.
Integration Services in Business Intelligence Development Studio
I think it's also possible with ad hoc queries.
You can export the result to an Excel file and then import that file into your DataTable object, or use it as it is, and then import the Excel file into the second database.
Have a look at this link, it can help you a lot:
http://vscontrols.blogspot.com/2010/09/import-and-export-excel-to-sql-server.html
If you are using Oracle (or can connect the tool to SQL Server) then Oracle SQL Developer does this for you. Choose 'unload' for a table and follow the options through (untick DDL if you don't want all the table create stuff).
I found this SSMS Boost add-on, which is free and does exactly this among other things. You can right-click on the results and select Script data as.
You can use this Q2C.SSMSPlugin, which is free and open source. You can right click and select "Execute Query To Command... -> Query To Insert...". Enjoy)
You can use an INSERT INTO SELECT statement, to insert the results of a select query into a table. http://www.w3schools.com/sql/sql_insert_into_select.asp
Example:
INSERT INTO Customers (CustomerName, Country)
SELECT SupplierName, Country
FROM Suppliers
WHERE Country='Germany'