I'm not sure if this is possible but here goes.
I have financial data stored in CSV format. The data unfortunately lacks any decimal points in the dollar fields, so $100.00 is stored as '00000010000'. It's also stored as a string. In my current setup I upload the CSV file into a staging table with all columns set to varchar(x).
I know that if I try to insert this value into an integer column it will automatically be converted to 10000 of type integer, but that means I lose my decimal place.
Is there any way I can create a table such that inserting an integer stored as a string (or as an integer) automatically converts it to a decimal with 2 places behind the decimal point?
EX: '000010000' -> 100.00
I know I can cast the column to a decimal and divide the existing value by 100, but this table has 100+ columns, 60+ of which need to be recast. This is also only table 1 of 6. I want to avoid writing commands to individually change each relevant column. Not all columns containing a number need the decimal treatment.
Why not just a basic query? You have to do the math in two steps because '00000010000' is too large to fit into a basic numeric. By multiplying by 1 it is implicitly converted to an int, and then it is simple to divide by 100. Notice it needs to be 100.0 so the result is implicitly converted to a numeric and not an int. Here are a couple of example values.
select convert(numeric(9, 2), ('00000010000' * 1) / 100.0)
select convert(numeric(9, 2), ('00000010123' * 1) / 100.0)
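As a quick sanity check of the two-step arithmetic, the same idea outside SQL (this is just Python verifying the numbers, not part of the T-SQL answer):

```python
# The string is first coerced to an integer (what the "* 1" does implicitly
# in the T-SQL above), then divided by a non-integer 100.0 so the result
# keeps its two decimal places instead of using integer division.
raw = '00000010000'
cents = int(raw) * 1        # 10000
dollars = cents / 100.0     # 100.0
print(dollars)              # 100.0

print(int('00000010123') / 100.0)  # 101.23
```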
Here's a little helper stored procedure to convert an imported table to the required format.
You specify the imported table name, the converted table name (which will be created), and a comma-separated list of columns that should not be converted.
MS SQL Server 2014 Schema Setup:
create table source (
id int not null identity(1,1),
c1 varchar(100) not null,
c2 varchar(100) not null,
c3 varchar(100) not null,
c4 varchar(100) not null,
c5 varchar(100) not null,
c6 varchar(100) not null
);
insert source (c1, c2, c3, c4, c5, c6) values
('a', '000001000', '000001000', '000001000', '000001000', 'b'),
('c', '000002001', '000002002', '000002003', '200020002', 'd'),
('e', '000003002', '000003002', '000003003', '300030003', 'f'),
('g', '000004003', '000004002', '000004003', '400040004', 'h'),
('i', '000005004', '000005002', '000005003', '500050005', 'j')
;
create procedure convert_table
@source varchar(max),
@dest varchar(max),
@exclude_cols varchar(max)
as
begin
declare
@sql varchar(max) = 'select ',
@col_name varchar(max)
if @exclude_cols not like ',%' set @exclude_cols = ',' + @exclude_cols
if @exclude_cols not like '%,' set @exclude_cols = @exclude_cols + ','
declare c cursor for
select column_name
from information_schema.columns
where table_name = @source
open c
fetch next from c into @col_name
while @@fetch_status = 0
begin
if @exclude_cols like '%,' + @col_name + ',%'
set @sql = @sql + @col_name + ','
else
set @sql = @sql + 'convert(numeric(11, 2), ' + @col_name + ') / 100 as ' + @col_name + ','
fetch next from c into @col_name
end
close c
deallocate c
set @sql = substring(@sql, 1, len(@sql) - 1)
set @sql = @sql + ' into ' + @dest + ' from ' + @source
--print(@sql)
exec(@sql)
end
;
exec convert_table @source = 'source', @dest = 'dest', @exclude_cols = 'id,c1,c6'
Query 1:
select * from dest
Results:
| id | c1 | c2 | c3 | c4 | c5 | c6 |
|----|----|-------|-------|-------|------------|----|
| 1 | a | 10 | 10 | 10 | 10 | b |
| 2 | c | 20.01 | 20.02 | 20.03 | 2000200.02 | d |
| 3 | e | 30.02 | 30.02 | 30.03 | 3000300.03 | f |
| 4 | g | 40.03 | 40.02 | 40.03 | 4000400.04 | h |
| 5 | i | 50.04 | 50.02 | 50.03 | 5000500.05 | j |
So I didn't find the overly simple answer I was hoping for, but I did find another way to accomplish my goal. If I set all columns that should have a decimal place to a decimal type, I can then use the system tables and T-SQL to modify all columns of type decimal to equal themselves divided by 100.0.
DECLARE @tableName varchar(10)
SET @tableName = 'test21'
DECLARE @sql VARCHAR(MAX)
SET @sql = ''
SELECT @sql = @sql + 'UPDATE ' + @tableName + ' SET ' + c.name + ' = ' + c.name + '/100.0 ;'
FROM sys.columns c
INNER JOIN sys.tables t ON c.object_id = t.object_id
INNER JOIN sys.types y ON c.system_type_id = y.system_type_id
WHERE t.name = @tableName AND y.name IN ('decimal')
exec(@sql)
I have to dynamically construct the command within SQL based on information in the system tables and then execute it.
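The metadata-driven pattern (read the catalog, generate one UPDATE per matching column, execute) can be sketched against SQLite in Python; this is purely illustrative, with PRAGMA table_info standing in for sys.columns/sys.types, and the table name is made up:

```python
import sqlite3

# Build a small stand-in for the staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test21 (id INTEGER, amount REAL, fee REAL, note TEXT)")
conn.execute("INSERT INTO test21 VALUES (1, 10000, 250, 'x')")

# PRAGMA table_info plays the role of sys.columns/sys.types here:
# pick only the columns of the "decimal-like" type.
decimal_cols = [row[1] for row in conn.execute("PRAGMA table_info(test21)")
                if row[2] == "REAL"]

# Generate and run one UPDATE per matching column, as the T-SQL does.
for col in decimal_cols:
    conn.execute(f"UPDATE test21 SET {col} = {col} / 100.0")

print(conn.execute("SELECT amount, fee FROM test21").fetchone())  # (100.0, 2.5)
```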
I need to write a query in SQL to count the number of unique combinations of records. I have a table of items with a child table listing options for each item. Each item may have 0 to x options. I want to count how many of each combination there are. I thought I could take the child table and transpose it using PIVOT and UNPIVOT, but I haven't figured it out. I then tried creating a list of the combinations, but I don't know how to count the occurrences. Can someone show me how to do this or point me in the right direction?
Here is the table I want to use:
Item | Option
----------------
1 | A
1 | B
2 | B
3 | B
4 | B
4 | C
5 | A
6 | A
6 | B
6 | C
7 | A
7 | B
7 | C
8 | A
8 | B
9 | A
10 | A
10 | B
The results I want are this:
Option 1 | Option 2 | Option 3 | Count
--------------------------------------------
A | B | | 3 * 1, 8, 10
B | | | 2 * 2, 3
B | C | | 1 * 4
A | | | 2 * 5, 9
A | B | C | 2 * 6, 7
This is saying that the combination A and B occurred three times (items 1, 8 and 10), twice B was the only option picked, and B and C were picked together once. (The numbers after the asterisk aren't part of the result; they're just there to show which items are being counted.)
The closest I've come is the query below. It gives me the unique combinations, but doesn't tell me how many times that combination occurred:
SELECT ItemCombo, Count(*) AS ItemComboCount
FROM
(
SELECT
Item
,STUFF((SELECT ',' + CAST(Option AS varchar(MAX))
FROM itemDetail a
WHERE a.Item = b.Item
FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'),1,1,''
) AS ItemCombo
FROM itemDetail b
) AS Combos
GROUP BY ItemCombo
ORDER BY Count(*) DESC
You should GROUP BY in the inner query and also ORDER BY Option so that the concatenated values can be grouped correctly.
SELECT ItemCombo, Count(*) AS ItemComboCount
FROM
(
SELECT
Item
,STUFF((SELECT ',' + CAST(Option AS varchar(MAX))
FROM itemDetail a
WHERE a.Item = b.Item
ORDER BY Option
FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'),1,1,''
) AS ItemCombo
FROM itemDetail b
GROUP BY item
) AS Combos
GROUP BY ItemCombo
ORDER BY Count(*) DESC
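The grouping-and-counting logic the corrected query implements can be sketched in a few lines of Python (purely illustrative; the data is the question's example):

```python
from collections import Counter

# Item -> option rows, mirroring the itemDetail table in the question.
rows = [(1, 'A'), (1, 'B'), (2, 'B'), (3, 'B'), (4, 'B'), (4, 'C'),
        (5, 'A'), (6, 'A'), (6, 'B'), (6, 'C'), (7, 'A'), (7, 'B'),
        (7, 'C'), (8, 'A'), (8, 'B'), (9, 'A'), (10, 'A'), (10, 'B')]

# Group options per item, then sort each group before joining -- this is
# what the added ORDER BY Option achieves: 'B,A' and 'A,B' collapse to
# one key, so identical combinations group together.
combos = {}
for item, option in rows:
    combos.setdefault(item, []).append(option)

counts = Counter(','.join(sorted(opts)) for opts in combos.values())
print(counts)  # e.g. Counter({'A,B': 3, 'B': 2, 'A': 2, 'A,B,C': 2, 'B,C': 1})
```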
To address the additional requirement you mentioned in the comments, I would add a CTE, some more XML processing, and dynamic T-SQL to Vamsi Prabhala's excellent answer (+1 from my side):
--create test table
create table tmp (Item int, [Option] char(1))
--populate test table
insert into tmp values ( 1, 'A') ,( 1, 'B') ,( 2, 'B') ,( 3, 'B') ,( 4, 'B') ,( 4, 'C') ,( 5, 'A') ,( 6, 'A') ,( 6, 'B') ,( 6, 'C') ,( 7, 'A') ,( 7, 'B') ,( 7, 'C') ,( 8, 'A') ,( 8, 'B') ,( 9, 'A') ,(10, 'A') ,(10, 'B')
declare @count int
declare @loop int = 1
declare @dynamicColums nvarchar(max) = ''
declare @sql nvarchar(max) = ''
--count possible values
select @count = max(c.options_count) from (
select count(*) as options_count from tmp group by item
) c
--build dynamic headers for all combinations
while @loop <= @count
begin
set @dynamicColums = @dynamicColums + ' Parts.value(N''/x['+ cast(@loop as nvarchar(max)) +']'', ''char(1)'') AS [Option ' + cast(@loop as nvarchar(max)) + '],'
set @loop = @loop + 1
end
--build dynamic TSQL statement
set @sql = @sql + ';WITH Splitted'
set @sql = @sql + ' AS ('
set @sql = @sql + ' SELECT ItemComboCount'
set @sql = @sql + ' ,ItemCombo'
set @sql = @sql + ' ,CAST(''<x>'' + REPLACE(ItemCombo, '','', ''</x><x>'') + ''</x>'' AS XML) AS Parts'
set @sql = @sql + ' FROM '
set @sql = @sql + ' ('
set @sql = @sql + ' SELECT ItemCombo, Count(*) AS ItemComboCount'
set @sql = @sql + ' FROM'
set @sql = @sql + ' ('
set @sql = @sql + ' SELECT'
set @sql = @sql + ' Item '
set @sql = @sql + ' ,STUFF((SELECT '','' + CAST([Option] AS varchar(MAX))'
set @sql = @sql + ' FROM tmp a '
set @sql = @sql + ' WHERE a.Item = b.Item'
set @sql = @sql + ' ORDER BY [Option]'
set @sql = @sql + ' FOR XML PATH(''''), TYPE).value(''.'', ''VARCHAR(MAX)''),1,1,'''''
set @sql = @sql + ' ) AS ItemCombo'
set @sql = @sql + ' FROM tmp b'
set @sql = @sql + ' GROUP BY item'
set @sql = @sql + ' ) AS Combos'
set @sql = @sql + ' GROUP BY ItemCombo'
set @sql = @sql + ' ) t'
set @sql = @sql + ' )'
set @sql = @sql + ' SELECT '
set @sql = @sql + @dynamicColums
set @sql = @sql + ' ItemComboCount as [Count]'
set @sql = @sql + ' FROM Splitted'
--execute dynamic TSQL statement
exec(@sql)
Now if you add another value (for example 'D') with a couple of insert statements:
insert into tmp values ( 1, 'D')
insert into tmp values ( 7, 'D')
you'll see that new columns are dynamically generated.
I have a table CenterDetails like this
| uid | CenterID | CenterName | AccessLock |
| ----|----------|------------|------------|
|1 | 1 | Andheri | 1 |
|2 | 2 | Borivali | 1 |
|3 | 3 | Dadar | 1 |
I have hundreds of tables in my database.
If I want to delete the Dadar center, then first I need to check the whole database for whether CenterID = 3 exists anywhere.
I only want to delete it if its CenterID does not appear in any table that has a CenterID column.
How can I find whether CenterID = 3 is present anywhere in the database?
Thanks in advance!
I guess this code will help you:
DECLARE @ColumnName SYSNAME = 'CenterID'
,@ColumnValue NVARCHAR(256) = '3'
DECLARE @DynamicSQLStatement NVARCHAR(MAX)
SELECT @DynamicSQLStatement = STUFF
(
(
SELECT ' UNION ALL ' + CHAR(10) + ' SELECT TOP 1 ''' + t.name + ''' AS T FROM ' + SCHEMA_NAME(t.schema_id) + '.' + t.name + ' WHERE ' + @ColumnName + ' = ' + @ColumnValue + CHAR(10)
FROM sys.tables t
INNER JOIN sys.columns c
on t.[object_id] = c.[object_id]
WHERE c.[name] = @ColumnName
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1
,12
,''
);
EXEC sp_executesql @DynamicSQLStatement
It looks for all tables that have the specified column, then queries each of those tables to find whether that column contains the given value; if yes, it returns the table name.
What would be better here is to read about data integrity, or more specifically, foreign keys.
Hope it helps you
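The same "find every table with this column, then probe each one" idea can be sketched quickly against SQLite in Python; this is only an illustration of the pattern, with made-up table names (sqlite_master and PRAGMA table_info stand in for sys.tables/sys.columns):

```python
import sqlite3

# Minimal stand-in schema: two tables carry a CenterID column, one doesn't.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CenterDetails (uid INT, CenterID INT, CenterName TEXT);
    CREATE TABLE Orders (OrderID INT, CenterID INT);
    CREATE TABLE Unrelated (x INT);
    INSERT INTO CenterDetails VALUES (3, 3, 'Dadar');
    INSERT INTO Orders VALUES (10, 3);
""")

def tables_containing(conn, column, value):
    """Return the tables whose `column` contains `value`."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for t in tables:
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({t})")]
        if column in cols:
            # The value is parameterised; the identifiers come from the
            # catalogue itself, so interpolating them is safe here.
            found = conn.execute(
                f"SELECT 1 FROM {t} WHERE {column} = ? LIMIT 1", (value,)
            ).fetchone()
            if found:
                hits.append(t)
    return hits

print(tables_containing(conn, "CenterID", 3))  # ['CenterDetails', 'Orders']
```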
DECLARE @TablesColumns TABLE
(
ID INT IDENTITY,
COLUMN_NAME VARCHAR(50),
TABLE_NAME VARCHAR(50)
)
DECLARE @MinId INT,
@MaxId INT,
@Sql NVARCHAR(MAX),
@TableName VARCHAR(50),
@ColumnName VARCHAR(50)
INSERT INTO @TablesColumns(COLUMN_NAME,TABLE_NAME) --Here we get the list of tables containing 'CenterID'
SELECT COLUMN_NAME,TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME='CenterID'
SELECT @MinId=MIN(Id) FROM @TablesColumns
SELECT @MaxId=MAX(Id) FROM @TablesColumns
WHILE (@MinId <= @MaxId)
BEGIN
SELECT @TableName=TABLE_NAME FROM @TablesColumns WHERE Id=@MinId
SELECT @ColumnName=COLUMN_NAME FROM @TablesColumns WHERE Id=@MinId
SET @Sql='DELETE FROM ' + @TableName + ' WHERE ' + @ColumnName + '=3'
--PRINT @Sql
SET @MinId=@MinId+1
EXEC (@Sql)
END
From my database table (Customer) I need to select one record and display the result with columns and rows interchanged.
EG:
actual result
| ID | Name | Age |
| 1 | Tom | 25 |
expected output
| Name | Value|
| ID | 1 |
| Name | Tom |
| Age | 25 |
Other details:
The Customer table has a different number of columns in different databases
I need to do this inside a function (so I cannot use dynamic queries or UNPIVOT)
Please advise me.
This uses CROSS APPLY with VALUES to perform the unpivot:
--Set up test data
CREATE TABLE dbo.TEST(ID INT IDENTITY (1,1),Name VARCHAR(20),Age TINYINT)
INSERT INTO dbo.TEST VALUES
('Shaggy',32)
,('Fred',28)
,('Velma',26)
,('Scooby',7)
DECLARE @table VARCHAR(255) = 'Test'
DECLARE @schema VARCHAR(255) = 'dbo'
DECLARE @ID INT = 2
--Create a VALUES script for the desired table
DECLARE @col VARCHAR(1000)
SELECT
@col = COALESCE(@col,'') + '(''' + c.name + ''' ,CAST(A.[' + c.name + '] AS VARCHAR(20))),'
FROM
sys.objects o
INNER JOIN sys.columns c
ON
o.object_id = c.object_id
WHERE
o.name = @table
AND
SCHEMA_NAME(o.schema_id) = @schema
ORDER BY
c.column_id
--Remove trailing ,
SET @col = LEFT(@col,LEN(@col)-1)
--Build Script for unpivoting data.
DECLARE @str VARCHAR(2000) = '
SELECT
CAST(C.Col AS VARCHAR(20)) AS [Name]
,CAST(C.Val AS VARCHAR(20)) AS [Value]
FROM
[' + @schema + '].[' + @table + '] A
CROSS APPLY (VALUES ' + @col + ') C(Col,Val)
WHERE
A.ID = ''' + CAST(@ID AS VARCHAR(8)) + ''''
--Run Script
EXEC (@str)
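At its core, the CROSS APPLY (VALUES ...) trick is a row-to-(name, value)-pairs transposition. A minimal illustration of the same idea in Python/SQLite, where the cursor's column metadata plays the role of sys.columns (the Customer data is the question's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (ID INT, Name TEXT, Age INT)")
conn.execute("INSERT INTO Customer VALUES (1, 'Tom', 25)")

cur = conn.execute("SELECT * FROM Customer WHERE ID = 1")
row = cur.fetchone()
names = [d[0] for d in cur.description]  # column names, from metadata

# Pair each column name with the row's value: one (Name, Value) row
# per original column.
pairs = list(zip(names, row))
print(pairs)  # [('ID', 1), ('Name', 'Tom'), ('Age', 25)]
```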
I have a small question regarding SQL.
I have a table with 450 columns and I would like to check which of those columns contain at least one null value.
How can I do this?
Example:
Id A1 A2 A3 A4
1 NULL 1 5 6
2 4 NULL 2 1
3 3 4 5 7
should simply return A1 and A2.
There's not a simple way to find columns with specific conditions; you generally need to check each column explicitly. There are ways to do it dynamically or you can just have a massive query with 450 comparisons.
Another way is to unpivot the data. Note that the UNPIVOT operator itself eliminates NULL rows, so it can't report NULLs directly; CROSS APPLY with VALUES keeps them:
SELECT DISTINCT Col
FROM pvt
CROSS APPLY (VALUES ('A1', A1), ('A2', A2) /* ..., one pair per column */) AS unpvt(Col, Val)
WHERE Val IS NULL
If this is a common real-time need (and not just a one-time or batch need) a better long-term solution would be to change your data structure so that each "column" is a row along with the value:
Id Col Val
--- ---- ----
1 A1 NULL
1 A2 1
1 A3 5
1 A4 6
2 A1 4
2 A2 NULL
etc.
(Note that the above is essentially the output of UNPIVOT)
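The column-by-column NULL check is easy to sketch outside T-SQL; a small Python/SQLite version of the same idea, using the question's example data (illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pvt (Id INT, A1 INT, A2 INT, A3 INT, A4 INT);
    INSERT INTO pvt VALUES (1, NULL, 1, 5, 6),
                           (2, 4, NULL, 2, 1),
                           (3, 3, 4, 5, 7);
""")

# Enumerate the data columns from the catalogue, then probe each one
# for at least one NULL -- the per-column EXISTS check the answers build.
cols = [r[1] for r in conn.execute("PRAGMA table_info(pvt)") if r[1] != "Id"]
null_cols = [c for c in cols
             if conn.execute(f"SELECT 1 FROM pvt WHERE {c} IS NULL LIMIT 1").fetchone()]
print(null_cols)  # ['A1', 'A2']
```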
The code below is what I use in SQL Server. Try:
DECLARE @dbname VARCHAR(100) = 'ur_Database'
DECLARE @schemaName VARCHAR(100) = 'dbo'
DECLARE @tableName VARCHAR(100) = 'ur_Table'
DECLARE @result TABLE (col VARCHAR(4000))
SELECT @dbname dbname
,t.name tbl
,c.name col
INTO #temp
FROM sys.columns c
JOIN sys.tables t ON
t.object_id = c.object_id
WHERE c.is_nullable = 1
AND t.name = @tableName
DECLARE @sql NVARCHAR(MAX) =
STUFF(
(
SELECT 'UNION ALL SELECT CASE WHEN EXISTS (SELECT 1 FROM ' + @dbname + '.' + @schemaName + '.' + tbl + ' WHERE ' + col + ' IS NULL) THEN '''+ @schemaName + '.' + tbl + '.' + col+''' END AS NULL_Value_Exists '
FROM #temp
FOR XML PATH('')
), 1, 10, ' ')
INSERT @result
EXEC(@sql)
SELECT *
FROM @result
WHERE col IS NOT NULL
DROP TABLE #temp
There might be a better way to do this, but I'm trying to find columns that might contain personal information.
The problem is that the tables are poorly named (non-English, abbreviations). So I'm running this dynamic script, which returns all tables in all databases along with their columns.
USE master;
DECLARE @SQL varchar(max)
SET @SQL=';WITH cteCols (dbName, colName) AS (SELECT NULL, NULL '
SELECT @SQL=@SQL+'UNION
SELECT
'''+d.name COLLATE Czech_CI_AS +'.''+sh.name COLLATE Czech_CI_AS +''.''+o.name COLLATE Czech_CI_AS ''dbSchTab''
, c.name COLLATE Czech_CI_AS ''colName''
FROM ['+d.name+'].sys.columns c
JOIN ['+d.name+'].sys.objects o ON c.object_id=o.object_id
JOIN ['+d.name+'].sys.schemas sh ON o.schema_id=sh.schema_id
WHERE o.[type] = ''U'' COLLATE Czech_CI_AS'
FROM sys.databases d
SET @SQL = @SQL + ')
SELECT
*
FROM cteCols cs
ORDER BY 1;'
EXEC (@SQL);
Result:
+---------------------+------------+
| DatabaseSchemaTable | ColumnName |
+---------------------+------------+
| dev1.dbo.Users | Col1 |
| dev1.dbo.Users | Col2 |
| dev1.dbo.Users | Col3 |
| dev1.dbo.Users | Col4 |
+---------------------+------------+
But because of the poor column naming, I can't tell what data is in these columns. I'd like to select a TOP (1) non-NULL value from each column, but I'm struggling.
Required result:
+---------------------+------------+--------------+
| DatabaseSchemaTable | ColumnName | ColumnValue |
+---------------------+------------+--------------+
| dev1.dbo.Users | Col1 | 20 |
| dev1.dbo.Users | Col2 | 2018-02-06 |
| dev1.dbo.Users | Col3 | 202-555-0133 |
| dev1.dbo.Users | Col4 | John Doe |
+---------------------+------------+--------------+
Ideas I had:
I would need to either transpose each of the tables (probably not a job for PIVOT)
I could join with the table dynamically and only display the current column, but I can't use a dynamic column in a correlated subquery.
Any ideas?
I would create a global temporary table such as ##Cols, and then use it to loop through the columns, running update queries against the table itself. Mind you, we have a lot of spaces and other potentially troublesome characters in our field names, so I updated your CTE with some QUOTENAMEs around the field / table / schema / db names.
USE master;
DECLARE @SQL varchar(max);
SET @SQL=';WITH cteCols (dbName, colName, top1Value) AS (SELECT NULL, NULL, CAST(NULL AS VARCHAR(MAX)) '
SELECT @SQL=@SQL+' UNION
SELECT
'''+QUOTENAME(d.[name]) COLLATE Czech_CI_AS +'.''+QUOTENAME(sh.name) COLLATE Czech_CI_AS +''.''+QUOTENAME(o.name) COLLATE Czech_CI_AS ''dbSchTab''
, QUOTENAME(c.name) COLLATE Czech_CI_AS ''colName'', CAST(NULL AS VARCHAR(MAX)) AS ''top1Value''
FROM ['+d.[name]+'].sys.columns c
JOIN ['+d.[name]+'].sys.objects o ON c.object_id=o.object_id
JOIN ['+d.[name]+'].sys.schemas sh ON o.schema_id=sh.schema_id
WHERE o.[type] = ''U'' COLLATE Czech_CI_AS'
FROM sys.databases d;
SET @SQL = @SQL + ')
SELECT
*
INTO ##Cols
FROM cteCols cs
ORDER BY 1;'
EXEC (@SQL);
DECLARE @colName VARCHAR(255), @dbName VARCHAR(255), @SQL2 NVARCHAR(MAX);
DECLARE C CURSOR FOR SELECT [colName],[dbName] FROM ##Cols;
OPEN C;
FETCH NEXT FROM C INTO @colName, @dbName;
WHILE @@FETCH_STATUS=0
BEGIN
SET @SQL2='UPDATE ##Cols SET [top1Value] = (SELECT TOP 1 x.'+@colName+' FROM '+@dbName+' x WHERE x.'+@colName+' IS NOT NULL) WHERE [colName]='''+@colName+''' AND [dbName]='''+@dbName+''''
EXEC sp_executesql @SQL2
FETCH NEXT FROM C INTO @colName, @dbName
END;
CLOSE C;
DEALLOCATE C;
SELECT * FROM ##Cols;
It's not pretty, but it'd suit your needs.
You might try this:
--In this table we write our findings
CREATE TABLE ##TargetTable(ID INT IDENTITY, TableName VARCHAR(500), FirstRowXML XML);
--the undocumented sp "MSforeachtable" allows to create a statement where the
--question mark is a place holder for the actual table
--(SELECT TOP 1 * FROM ? FOR XML PATH('row')) will create one single XML with all first row's values
EXEC sp_MSforeachtable 'INSERT INTO ##TargetTable(TableName,FirstRowXML) SELECT ''?'', (SELECT TOP 1 * FROM ? FOR XML PATH(''row''))';
--Now it is easy to get what you want
SELECT ID
,TableName
,col.value('local-name(.)','nvarchar(max)') AS colname
,col.value('text()[1]','nvarchar(max)') AS colval
FROM ##TargetTable
CROSS APPLY FirstRowXML.nodes('/row/*') A(col);
GO
DROP TABLE ##TargetTable
Just use SELECT TOP X to get more than one row...
UPDATE
The following will create a table with all columns of all tables of all databases and fetch one value per row.
CREATE TABLE ##TargetTable(ID INT IDENTITY
,TABLE_CATALOG VARCHAR(300),TABLE_SCHEMA VARCHAR(300),TABLE_NAME VARCHAR(300),COLUMN_NAME VARCHAR(300)
,DATA_TYPE VARCHAR(300),CHARACTER_MAXIMUM_LENGTH INT, IS_NULLABLE VARCHAR(10),Command VARCHAR(MAX),OneValue NVARCHAR(MAX));
EXEC sp_MSforeachdb
'USE ?;
INSERT INTO ##TargetTable(TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,COLUMN_NAME,DATA_TYPE,CHARACTER_MAXIMUM_LENGTH,IS_NULLABLE,Command)
SELECT ''?''
,c.TABLE_SCHEMA
,c.TABLE_NAME
,c.COLUMN_NAME
,c.DATA_TYPE
,c.CHARACTER_MAXIMUM_LENGTH
,c.IS_NULLABLE
, CASE WHEN c.IS_NULLABLE=''YES''
THEN ''SELECT CAST(MAX('' + QUOTENAME(c.COLUMN_NAME) + '') AS NVARCHAR(MAX))''
ELSE ''SELECT TOP 1 CAST('' + QUOTENAME(c.COLUMN_NAME) + '' AS NVARCHAR(MAX))''
END
+ '' FROM '' + QUOTENAME(''?'') + ''.'' + QUOTENAME(c.TABLE_SCHEMA) + ''.'' + QUOTENAME(c.TABLE_NAME)
FROM INFORMATION_SCHEMA.COLUMNS c
INNER JOIN INFORMATION_SCHEMA.TABLES t ON c.TABLE_CATALOG=t.TABLE_CATALOG AND c.TABLE_SCHEMA=t.TABLE_SCHEMA AND c.TABLE_NAME=T.TABLE_NAME AND t.TABLE_TYPE=''BASE TABLE''
WHERE c.DATA_TYPE NOT IN(''BINARY'',''VARBINARY'',''IMAGE'',''NTEXT'')';
DECLARE @ID INT,@Command VARCHAR(MAX);
DECLARE cur CURSOR FOR SELECT ID,Command FROM ##TargetTable
OPEN cur;
FETCH NEXT FROM cur INTO @ID,@Command;
WHILE @@FETCH_STATUS=0
BEGIN
SET @Command = 'UPDATE ##TargetTable SET OneValue=(' + @Command + ') WHERE ID=' + CAST(@ID AS VARCHAR(100))
PRINT @Command;
EXEC(@Command);
FETCH NEXT FROM cur INTO @ID,@Command;
END
END
CLOSE cur;
DEALLOCATE cur;
GO
SELECT * FROM ##TargetTable;
GO
DROP TABLE ##TargetTable;