I am using dynamic SQL to concatenate a bunch of scripts that I have loaded into a table. It takes those individual scripts, concatenates them into one script, and then puts that finished script into another table. It has worked beautifully so far, concatenating about 17 scripts successfully. But yesterday I tried 80, and there is the issue: the result caps out at 78,165 characters every time I try it. The guts of it go like this:
SET @FinScriptSQL
    = N'SELECT script, row_number( ) over( order by scriptTypeId ) as execOrder INTO #' + @FinScriptName
    + N' from dbo.' + @FinScriptName
    + N';
DECLARE @tsqlStmt NVARCHAR(MAX);
SELECT @tsqlStmt = STUFF(
(
    SELECT (CHAR(13) + CHAR(10) + script) AS [text()]
    FROM #' + @FinScriptName
    + N'
    ORDER BY execOrder
    FOR XML PATH(''''), TYPE).value(''.'', ''nvarchar( max )''),
    1,
    1,
    NULL
);
CREATE TABLE dbo.' + @FinScriptName + N'_FinishedScripts
(
    Name sysname NULL,
    FinishedScript NVARCHAR(MAX) NULL
);
INSERT INTO dbo.' + @FinScriptName + N'_FinishedScripts
SELECT ''' + @FinScriptName + N''', @tsqlStmt;';

PRINT @FinScriptSQL;
EXECUTE sp_executesql @FinScriptSQL;
The column of the table where the individual scripts live is NVARCHAR(MAX), as is the destination column of the new table. I thought it might be a setting in SQL Server, so I have already made sure that the maximum characters retrieved in SSMS is opened up as wide as it will go. Any ideas?
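For reference, one way to rule out display truncation: PRINT cuts Unicode strings off at 4,000 characters, and the SSMS output window has its own retrieval limits, so it is worth confirming whether the stored string or only its displayed copy is short. A minimal sketch (the variable is illustrative):

-- Build a deliberately long NVARCHAR(MAX) value; the CAST is required,
-- otherwise REPLICATE returns the short input type and truncates.
DECLARE @s NVARCHAR(MAX) = REPLICATE(CAST(N'x' AS NVARCHAR(MAX)), 100000);

SELECT LEN(@s) AS CharCount, DATALENGTH(@s) AS ByteCount;  -- 100000 / 200000

PRINT @s;  -- displays at most 4,000 characters, however long @s really is

If LEN() reports the full length, the concatenation itself worked and only the output path is truncating.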
Related
I've created the script below to be able to quickly create a minimal reproducible example for other questions in general.
This script uses an original table and generates the following PRINT statements:
DROP and CREATE a temp table with structure matching the original table
INSERT INTO statement using examples from the actual data
I can just add the original table name into the variable listed, along with the number of sample records required from the table. When I run it, it generates all of the statements needed in the Messages window in SSMS. Then I can just copy and paste those statements into my posted questions, so those answering have something to work with.
I know that you can get similar results in SSMS through Tasks > Generate Scripts, but this gets things down to the minimal amount of code that's useful for posting here, without all of the unnecessary detail that SSMS generates automatically. It's just a quick way to create a reproducible version of a simple table with actual sample data in it.
Unfortunately, the one scenario that doesn't work is very wide tables. It seems to fail on the last STRING_AGG() query, where it builds the VALUES portion of the INSERT; when it runs on a wide table, it returns NULL.
Any suggestions to correct this?
EDIT: I figured out the issue I was having with UNIQUEIDENTIFIER columns and revised the query below. Also included an initial check to make sure the table actually exists.
/* ---------------------------------------
-- For creating minimal reproducible examples
-- based on an original table and its data,
-- builds the following statements:
-- -- CREATE temp table with structure matching the original table
-- -- INSERT statements based on actual data
--
-- Note: May not work for very wide tables due to limitations on
-- PRINT statements
*/ ---------------------------------------
DECLARE @tableName NVARCHAR(MAX) = 'testTable', -- original table name HERE
        @recordCount INT = 5, -- top number of records to insert to temp table
        @buildStmt NVARCHAR(MAX),
        @insertStmt NVARCHAR(MAX),
        @valuesStmt NVARCHAR(MAX),
        @insertCol NVARCHAR(MAX),
        @strAgg NVARCHAR(MAX),
        @insertOutput NVARCHAR(MAX)

IF (EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @tableName))
BEGIN
    -- build DROP and CREATE statements for the temp table from the original table
    SET @buildStmt = 'IF OBJECT_ID(''tempdb..#' + @tableName + ''') IS NOT NULL DROP TABLE #' + @tableName + CHAR(10) + CHAR(10) +
                     'CREATE TABLE #' + @tableName + ' (' + CHAR(10)

    SELECT @buildStmt = @buildStmt + '   ' + C.[Name] + ' ' +
           T.[Name] +
           -- character types need an explicit length; syscolumns stores the
           -- length in bytes, so halve it for the two-byte Unicode types
           CASE WHEN T.[Name] IN ('varchar','nvarchar','char','nchar') AND C.[Length] = -1
                THEN '(max) '
                WHEN T.[Name] IN ('varchar','char')
                THEN '(' + CAST(C.[Length] AS VARCHAR) + ') '
                WHEN T.[Name] IN ('nvarchar','nchar')
                THEN '(' + CAST(C.[Length] / 2 AS VARCHAR) + ') '
                ELSE ' ' END +
           'NULL,' + CHAR(10)
    FROM sysobjects O
    JOIN syscolumns C ON C.id = O.id
    JOIN systypes T ON T.xusertype = C.xusertype
    WHERE O.[name] = @tableName
    ORDER BY C.ColID

    SET @buildStmt = SUBSTRING(@buildStmt, 1, LEN(@buildStmt) - 2) + CHAR(10) + ')' + CHAR(10)
    PRINT @buildStmt

    -- build the INSERT INTO statement from the original table
    SELECT @insertStmt = 'INSERT INTO #' + @tableName + ' (' +
           STUFF((
                 SELECT ', [' + C.[Name] + ']'
                 FROM sysobjects O
                 JOIN syscolumns C ON C.id = O.id
                 WHERE O.[name] = @tableName
                 ORDER BY C.ColID
                 FOR XML PATH('')), 1, 1, '')
           + ')'
    PRINT @insertStmt

    -- build the VALUES portion of the INSERT from the data in the original table
    SELECT @insertCol = STUFF((
                 SELECT '''''''''+CONVERT(NVARCHAR(200),' +
                        '[' + C.[Name] + ']' +
                        ')+'''''',''+'
                 FROM sysobjects O
                 JOIN syscolumns C ON C.id = O.id
                 JOIN systypes T ON T.xusertype = C.xusertype
                 WHERE O.[name] = @tableName
                 ORDER BY C.ColID
                 FOR XML PATH('')), 1, 1, '')
    SET @insertCol = SUBSTRING(@insertCol, 1, LEN(@insertCol) - 1)

    SELECT @strAgg = ';WITH CTE AS (SELECT TOP(' + CONVERT(VARCHAR, @recordCount) + ') * FROM ' + @tableName + ') ' +
                     ' SELECT @valuesStmt = STRING_AGG(CAST(''' + @insertCol + ' AS NVARCHAR(MAX)),''), ('') ' +
                     ' FROM CTE'
    EXEC sp_executesql @strAgg, N'@valuesStmt NVARCHAR(MAX) OUTPUT', @valuesStmt OUTPUT
    PRINT 'VALUES (' + REPLACE(SUBSTRING(@valuesStmt, 1, LEN(@valuesStmt) - 1), ',)', ')') + ')'
END
ELSE
BEGIN
    PRINT 'Table does NOT exist'
END
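Regarding the wide-table limitation in the header comment: PRINT truncates Unicode strings at 4,000 characters, so long generated statements get cut off in the Messages window. One common workaround, sketched here under the assumption that extra line breaks at chunk boundaries are acceptable (each PRINT call ends with one), is to print the long value in pieces:

-- Print an NVARCHAR(MAX) value in pieces small enough for PRINT.
-- @stmt stands in for whichever variable holds the generated script.
DECLARE @stmt NVARCHAR(MAX) = REPLICATE(CAST(N'SELECT 1;' + CHAR(10) AS NVARCHAR(MAX)), 2000);
DECLARE @pos INT = 1, @chunk INT = 4000;

WHILE @pos <= LEN(@stmt)
BEGIN
    PRINT SUBSTRING(@stmt, @pos, @chunk);
    SET @pos = @pos + @chunk;
END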
I am using SQL Server 2012. I have a table with 90 columns, and I am trying to select only the columns that contain data. After searching, I used the following procedure:
1- Get the counts of all columns using one SELECT query
2- Unpivot the result table into a temp table
3- Build the final SELECT query
4- Execute that query
Here is the query I used:
DECLARE @strTablename varchar(100) = 'dbo.MyTable'
DECLARE @strQuery varchar(max) = ''
DECLARE @strSecondQuery varchar(max) = 'SELECT '
DECLARE @strUnPivot as varchar(max) = ' UNPIVOT ([Count] for [Column] IN ('

CREATE TABLE ##tblTemp([Column] varchar(50), [Count] Int)

SELECT @strQuery = ISNULL(@strQuery,'') + 'Count([' + name + ']) as [' + name + '] ,' from sys.columns where object_id = object_id(@strTablename) and is_nullable = 1
SELECT @strUnPivot = ISNULL(@strUnPivot,'') + '[' + name + '] ,' from sys.columns where object_id = object_id(@strTablename) and is_nullable = 1

SET @strQuery = 'SELECT [Column],[Count] FROM ( SELECT ' + SUBSTRING(@strQuery,1,LEN(@strQuery) - 1) + ' FROM ' + @strTablename + ') AS p ' + SUBSTRING(@strUnPivot,1,LEN(@strUnPivot) - 1) + ')) AS unpvt '

INSERT INTO ##tblTemp EXEC (@strQuery)

SELECT @strSecondQuery = @strSecondQuery + '[' + [Column] + '],' from ##tblTemp WHERE [Count] > 0

DROP TABLE ##tblTemp

SET @strSecondQuery = SUBSTRING(@strSecondQuery,1,LEN(@strSecondQuery) - 1) + ' FROM ' + @strTablename

EXEC (@strSecondQuery)
The problem is that this query is TOO SLOW. Is there a better way to achieve this?
Notes:
The table has only one clustered index, on the primary key column ID, and does not contain any other indexes.
The table is not editable.
The table contains very large amounts of data.
The query takes about 1 minute to execute.
Thanks in advance.
I do not know if this is faster, but you might use one trick: FOR XML AUTO will omit columns without content:
DECLARE @tbl TABLE(col1 INT, col2 INT, col3 INT);
INSERT INTO @tbl VALUES (1,2,NULL),(1,NULL,NULL),(NULL,NULL,NULL);

SELECT *
FROM @tbl AS tbl
FOR XML AUTO
This is the result: col3 is missing...
<tbl col1="1" col2="2" />
<tbl col1="1" />
<tbl />
Knowing this, you could find the list of columns, which are not NULL in all rows, like this:
DECLARE @ColList VARCHAR(MAX)=
STUFF
(
    (
        SELECT DISTINCT ',' + Attr.value('local-name(.)','nvarchar(max)')
        FROM
        (
            SELECT
            (
                SELECT *
                FROM @tbl AS tbl
                FOR XML AUTO,TYPE
            ) AS TheXML
        ) AS t
        CROSS APPLY t.TheXML.nodes('/tbl/@*') AS A(Attr)
        FOR XML PATH('')
    ),1,1,''
);
SELECT @ColList
The content of @ColList is now col1,col2. You can place this string in a dynamically created SELECT.
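For example, a minimal sketch using the dbo.MyTable name from the question (assuming @ColList was built against that table rather than the demo table variable; in real code the table name should be whitelisted rather than concatenated from untrusted input):

-- Build and run a SELECT restricted to the columns found above
DECLARE @sql NVARCHAR(MAX) = N'SELECT ' + @ColList + N' FROM dbo.MyTable;';
EXEC sp_executesql @sql;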
UPDATE: Hints
It would be very clever to replace the SELECT * with a column list created from INFORMATION_SCHEMA.COLUMNS, excluding all non-nullable columns and, if needed and possible, types which contain very large data (BLOBs).
UPDATE2: Performance
I don't know what your very large data actually means... I just tried this on a table with about 500,000 rows (with SELECT *) and it returned correctly after less than one minute. Hope this is fast enough...
Try using this condition:
where @columnname IS NOT NULL AND @columnname <> ' '
I have a table with a few hundred thousand rows, and the data format is an index (int) and a words nvarchar(1000) column. The words string is made up of a collection of words separated by spaces, e.g. word1 word2 word3. I want to read the word table and create a dictionary. In pseudocode, this is what I want:
INSERT INTO dictionary (dictionaryword)
SELECT splitBySpace(words) FROM word;
This is simple enough to code in Java or C#, but I have found that the system takes a long time to process the data, and in other processing the cost benefit of running the query in SQL (i.e. not processing the data in C# or Java) has been huge.
I want to create a stored procedure which reads the words, splits them, and then creates the dictionary. I have seen various split procedures which are a little complex, e.g. https://dba.stackexchange.com/questions/21078/t-sql-table-valued-function-to-split-a-column-on-commas, but I could not see how to re-code them for the task of reading a whole table, splitting the words, and inserting them.
Does anyone have sample code to split the column data and then insert it, which can be wholly implemented in SQL for reasons of efficiency?
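To illustrate the shape of the thing, here is a sketch that assumes a table-valued split function along the lines of the one linked above; dbo.SplitString and its Item column are invented names for the sketch:

-- Split every row's words column and insert the distinct tokens.
-- dbo.SplitString(@input, @delimiter) is assumed to return one row
-- per token in a column named Item.
INSERT INTO dictionary (dictionaryword)
SELECT DISTINCT s.Item
FROM word
CROSS APPLY dbo.SplitString(word.words, ' ') AS s;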
Here is the solution.
DDL:
create table sentence(t varchar(100))
insert into sentence values
('Once upon a time in America'),
('Eyes wide shut')
DML:
select distinct ca.d as words
from sentence s
cross apply (select split.a.value('.', 'varchar(100)') as d
             from
             (select cast('<x>' + REPLACE(s.t, ' ', '</x><x>') + '</x>' as xml) as d) as a
             cross apply a.d.nodes ('/x') as split(a)) ca
Output:
words
a
America
Eyes
in
Once
shut
time
upon
wide
Fiddle http://sqlfiddle.com/#!6/54dff/4
I suggest you use a stored procedure like this:
CREATE PROCEDURE spSplit
    @words nvarchar(max),
    @delimiter varchar(1) = ' '
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(max)
    SELECT @sql = 'SELECT ''' + REPLACE(@words, @delimiter, ''' As res UNION ALL SELECT ''') + ''''
    --or, for removing duplicates: SELECT @sql = 'SELECT ''' + REPLACE(@words, @delimiter, ''' As res UNION SELECT ''') + ''''
    EXEC(@sql)
END
GO
This stored procedure gives you a result set that you can use in an INSERT INTO statement, and the second procedure, spSplitInsert below, performs the insert itself:
CREATE PROCEDURE spSplitInsert
    @words nvarchar(max) = 'a bc lkj weu 234 , sdsd 3 and 3 & test',
    @delimiter varchar(1) = ' ',
    @destTable nvarchar(255),
    @destColumn nvarchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(max)
    SELECT @sql = 'INSERT INTO [' + @destTable + '] ([' + @destColumn + ']) SELECT res FROM ('
    SELECT @sql = @sql + 'SELECT ''' + REPLACE(@words, @delimiter, ''' As res UNION ALL SELECT ''') + ''''
    SELECT @sql = @sql + ') DT WHERE res NOT IN (SELECT [' + @destColumn + '] FROM [' + @destTable + '])'
    EXEC(@sql)
END
GO
This stored procedure does the insert without inserting duplicates.
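Hypothetical usage against the dictionary table from the question (table and column names as given there):

-- Split a space-delimited string and insert only the tokens not already present
EXEC spSplitInsert
     @words = N'word1 word2 word3',
     @delimiter = ' ',
     @destTable = N'dictionary',
     @destColumn = N'dictionaryword';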
I have a table which has been designed with columns named the same as a value I need to reference them by. I know I need to unpivot the table, and I have it working if I manually type all the column names; however, this table keeps getting new columns added to it, so I want to capture all of the columns in the unpivot rather than scripting them manually.
I can get the column names from the COLUMN_NAME column of INFORMATION_SCHEMA.COLUMNS, and I was wondering if that result can be fed into the unpivot at all. I've been playing around with it and it isn't looking possible to me at the moment, so I thought I'd check whether there were any other suggestions.
Sadly, I can't redesign the table that the columns keep getting added to, although that would be the ideal solution.
select Day, Rota, RotaTemplate
from table1 t1
unpivot
(
Rota
for RotaTemplate in (select Column_name
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'table1')
) unpiv;
You are not allowed to do this: the IN list of an UNPIVOT must be a static list of column names, not a subquery. You need to build a dynamic SQL statement like the following:
DECLARE @DynamicSQL NVARCHAR(MAX)

SET @DynamicSQL = N'select Day, Rota, RotaTemplate' + CHAR(10) +
    'from table1 t1' + CHAR(10) +
    'unpivot' + CHAR(10) +
    '(' + CHAR(10) + CHAR(9) +
    'Rota for RotaTemplate in ('
    +
    STUFF
    (
        (
            SELECT ',[' + [COLUMN_NAME] + '] '
            FROM [INFORMATION_SCHEMA].[COLUMNS]
            WHERE [TABLE_NAME] = 'table1'
            AND [COLUMN_NAME] <> 'Day' -- Day is kept as a regular column, so it must not appear in the unpivot list
            FOR XML PATH('')
        )
        ,1
        ,1
        ,''
    )
    + ')' + CHAR(10) +
    ') unpiv;'

EXEC sp_executesql @DynamicSQL
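For illustration, if table1 contained the columns Day, MonTemplate, and TueTemplate (invented names for the sketch), the generated statement would come out roughly as:

select Day, Rota, RotaTemplate
from table1 t1
unpivot
(
	Rota for RotaTemplate in ([MonTemplate] ,[TueTemplate] )
) unpiv;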
Can someone please provide source code showing how to write a generic INSTEAD OF INSERT trigger for all the tables in a database? I want to run a stored procedure that will create the INSTEAD OF INSERT triggers for all the tables in a db.
Without understanding exactly why you want an INSTEAD OF trigger on every single table, and what else you plan to do in the resulting code aside from inserting the supplied values into the base table (just like what would have happened if there were no INSTEAD OF trigger at all), here is what I came up with. You'll note that it drops each trigger if it already exists, so you can run this multiple times in the same database without "already exists" errors. It ignores IDENTITY, ROWGUIDCOL, computed, and TIMESTAMP/ROWVERSION columns.

Finally, at the end I show how you can quickly inspect, instead of execute (which is commented out), the output script (up to 8K), and cast it to XML if you want to see more (up to 64K). There are no guarantees you can return the whole thing in SSMS, depending on how many tables/columns there are. If you want to check it and/or run it manually, you may want to create a table that stores the value; then you can pull it out with an app or what have you. Now, if you want this to execute automatically, you can follow Yuck's point: save this as a stored procedure and create a DDL trigger that responds to certain DDL events (CREATE TABLE etc.).
SET NOCOUNT ON;

DECLARE
    @cr char(2) = char(13) + char(10),
    @t char(1) = char(9),
    @s nvarchar(max) = N'';

;WITH t AS
(
    SELECT [object_id],
        s = OBJECT_SCHEMA_NAME([object_id]),
        n = name
    FROM sys.tables WHERE is_ms_shipped = 0
)
SELECT @s += 'IF OBJECT_ID(N''dbo.ioTrigger_' + t.s + '_' + t.n + ''')
    IS NOT NULL
BEGIN
    DROP TRIGGER dbo.[ioTrigger_' + t.s + '_' + t.n + '];
END
G' + 'O
CREATE TRIGGER ioTrigger_' + t.s + '_' + t.n + '
ON ' + QUOTENAME(t.s) + '.' + QUOTENAME(t.n) + '
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- surely you must want to put some other code here?
    INSERT ' + QUOTENAME(t.s) + '.' + QUOTENAME(t.n) + '
    (
' +
(
    SELECT @t + @t + name + ',' + @cr
    FROM sys.columns AS c
    WHERE c.[object_id] = t.[object_id]
    AND is_identity = 0
    AND is_rowguidcol = 0
    AND is_computed = 0
    AND system_type_id <> 189 -- timestamp/rowversion
    FOR XML PATH(''), TYPE
).value(N'./text()[1]', N'nvarchar(max)') + '--'
+ @cr + @t + ')'
+ @cr + @t + 'SELECT
' +
(
    SELECT @t + @t + name + ',' + @cr
    FROM sys.columns AS c
    WHERE c.[object_id] = t.[object_id]
    AND is_identity = 0
    AND is_rowguidcol = 0
    AND is_computed = 0
    AND system_type_id <> 189 -- timestamp/rowversion
    FOR XML PATH(''), TYPE
).value(N'./text()[1]', N'nvarchar(max)') + '--'
+ @cr + @t + 'FROM
    inserted;
END' + @cr + 'G' + 'O' + @cr
FROM t
ORDER BY t.s, t.n;

SELECT @s = REPLACE(@s, ',' + @cr + '--' + @cr, @cr);

-- you can inspect at least part of the script by running the
-- following in text mode:
SELECT @s;

-- if you want to see more of the whole thing (but not necessarily
-- the whole thing), run this in grid mode and click on the result:
SELECT CONVERT(XML, @s);
A couple of caveats:
I don't deal with sparse columns, XML collections, FILESTREAM, etc., so if you have fancy tables you might run into complications with some of those features.
The name of the trigger isn't really "protected": you could have a schema called foo, another schema called foo_bar, and then a table in foo called bar_none and a table in foo_bar called none. This would lead to a duplicate trigger name, because I use an underscore as the separator. I complained about this issue with CDC, but Microsoft closed the bug as "won't fix." Just something to keep in mind if you happen to use schemas with underscores.
You can only create database triggers for DDL statements in SQL Server 2008+.
If you need DML triggers (INSTEAD OF INSERT) on every table in a database, you're going to have to either manage them yourself or create a database-level DDL trigger that is responsible for creating or updating the INSTEAD OF INSERT triggers on any CREATE or ALTER TABLE statements. It could get hairy and will almost certainly require dynamic SQL; a rough sketch follows.
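A minimal sketch of such a DDL trigger; dbo.CreateInsteadOfTriggers is a hypothetical stored procedure (for example, one wrapping a generator script like the one in the other answer):

-- Regenerate the INSTEAD OF INSERT triggers whenever a table is created or altered.
CREATE TRIGGER ddl_RebuildInsteadOfTriggers
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.CreateInsteadOfTriggers;  -- hypothetical procedure name
END;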
Out of curiosity, are you trying to build some sort of auditing mechanism?