Query to get SSRS report parameters and drop down options is slow

I need to get a list of parameters for a specific SSRS report and show all the possible drop down items available (along with Report parameter name, DataType, and Prompt).
I'm not sure of other options (if there are any). The query returns exactly what I need, it just takes too long to be useful (10-12 seconds). Is there another way to get these results or make this one faster?
USE ReportServer
DECLARE @dbname VARCHAR(25)
SET @dbname = 'DBName'
DECLARE @rptlistStr VARCHAR(MAX)
DECLARE @reportlist TABLE ( path varchar(500), name varchar(500) )
insert into @reportlist
exec [ADA Master].[dbo].[spGetAdminReports] @dbname
set @rptlistStr = substring((SELECT ( ', ' + Name ) FROM @reportlist WHERE NAME = 'rptReport'
FOR XML PATH( '' )
), 3, 1000 )
SELECT NAME, PATH,
y.r.query ('for $s in *:ParameterValue/*:Value return concat(data($s),"|")') [DropDownItemValue]
, y.r.query ('for $s in *:ParameterValue/*:Label return concat(data($s),"|")') [DropDownItemLabel]
, x.r.value ('@Name', 'VARCHAR(100)') AS ReportParameterName
, x.r.value ('*:DataType[1]', 'VARCHAR(100)') AS DataType
, x.r.value ('*:AllowBlank[1]', 'VARCHAR(50)') AS AllowBlank
, x.r.value ('*:Prompt[1]', 'VARCHAR(100)') AS Prompt
, x.r.value ('*:Hidden[1]', 'VARCHAR(100)') AS Hidden
, x.r.value ('*:MultiValue[1]', 'VARCHAR(100)') AS MultiValue
FROM (
SELECT PATH
, NAME
, CAST(CAST(content AS VARBINARY(MAX)) AS XML) AS ReportXML
FROM ReportServer.dbo.Catalog
join master.dbo.ufn_SplitStringArray(@rptlistStr,',') a on NAME COLLATE DATABASE_DEFAULT = a.Item COLLATE DATABASE_DEFAULT
WHERE CONTENT IS NOT NULL AND TYPE = 2
) C
CROSS APPLY C.ReportXML.nodes('*:Report/*:ReportParameters/*:ReportParameter') x(r)
OUTER APPLY x.r.nodes('*:ValidValues/*:ParameterValues') y(r)
where x.r.value ('*:Prompt[1]', 'VARCHAR(100)') is not null

First and foremost - I would not query ReportServer.dbo.Catalog directly; people do this all the time and it's crazy. Converting the image data into VARBINARY(MAX) and then into XML is insanely expensive. If you need this information often, I suggest dumping the results of this query into another table (we'll call it "SSRS_RDL"); then, as part of your report deployment process, you update SSRS_RDL with changes and new records as needed. Even an hourly job that does this will often be enough. This way you're dealing with relational data that is easy to index and can be retrieved thousands of times faster than how you are doing it now.
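A minimal sketch of that persisted table, using the suggested "SSRS_RDL" name; the column list mirrors the query above but is illustrative, not a fixed schema:

```sql
-- Hypothetical persisted copy of the parsed RDL parameter metadata.
-- Populated from the slow XML-shredding query on deployment (or by an hourly job).
CREATE TABLE dbo.SSRS_RDL
(
    ReportPath          VARCHAR(500) NOT NULL,
    ReportName          VARCHAR(500) NOT NULL,
    ReportParameterName VARCHAR(100) NOT NULL,
    DataType            VARCHAR(100) NULL,
    AllowBlank          VARCHAR(50)  NULL,
    Prompt              VARCHAR(100) NULL,
    Hidden              VARCHAR(100) NULL,
    MultiValue          VARCHAR(100) NULL,
    DropDownItemValue   VARCHAR(MAX) NULL,
    DropDownItemLabel   VARCHAR(MAX) NULL,
    LoadedAt            DATETIME     NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT PK_SSRS_RDL PRIMARY KEY (ReportName, ReportParameterName)
);

-- Refresh step (run by the deployment process or a scheduled job):
-- TRUNCATE TABLE dbo.SSRS_RDL;
-- INSERT INTO dbo.SSRS_RDL (ReportPath, ReportName, ReportParameterName, ...)
-- SELECT ... ; -- the XML-shredding query shown above
```

Lookups then hit an indexed relational table instead of re-shredding XML on every call.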
That said you have a serious design flaw with your code; you are:
executing [ADA Master].[dbo].[spGetAdminReports] to push rows into your @reportlist table variable
then using FOR XML PATH to turn @reportlist into a string called @rptlistStr
then using dbo.ufn_SplitStringArray to turn @rptlistStr back into a table
then joining the results of this table to ReportServer.dbo.Catalog
You can get rid of @rptlistStr and the code that populates it from @reportlist, and instead join your @reportlist table variable to ReportServer.dbo.Catalog directly. In other words, change:
FROM ReportServer.dbo.Catalog AS c
JOIN master.dbo.ufn_SplitStringArray(@rptlistStr,',') a ON NAME COLLATE DATABASE_DEFAULT = a.Item COLLATE DATABASE_DEFAULT
To:
FROM ReportServer.dbo.Catalog AS c
JOIN @reportlist AS a ON c.Name COLLATE DATABASE_DEFAULT = a.name COLLATE DATABASE_DEFAULT
Lastly:
This query will run much faster with a parallel execution plan. Once you've made the changes I just outlined, consider adding OPTION (QUERYTRACEON 8649) at the end of your final SELECT query (or using make_parallel() by Adam Machanic).
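For illustration, the hint simply goes at the end of the final statement; note that trace flag 8649 is undocumented, so treat this as a test-first technique (shown here on a trivial Catalog query, not your full query):

```sql
-- Illustrative only: appending the hint to a SELECT.
SELECT c.Name, c.Path
FROM ReportServer.dbo.Catalog AS c
WHERE c.Content IS NOT NULL AND c.Type = 2
OPTION (QUERYTRACEON 8649);  -- undocumented trace flag: nudges the optimizer toward a parallel plan
```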


TSQL - Identify the table name as a column of the single or multiple tables in a select statement WITHOUT HARDCODING it as part of the query

I need to display the table name in my select statement.
select col1, 'table_name_1' as table_name from table_name_1
Which I insert into another table.
My problem is that I have to do this dynamically, not as explained by this question:
Display the table name in the select statement
I'm not sure if this is available via a single keyword, like the way GETDATE() gives us the current date.
Thank you for your answer.
As you were told already, there is no special keyword you could use for an easy-cheesy solution. If you really need this, you would add the table's name as a parameter and use a dynamically created statement.
Another approach would be a VIEW for each table you are looking for, something along these lines:
CREATE VIEW Table1_withName AS
SELECT 'Table1' AS TableName, * FROM Table1 --use columns instead of "*"
You can use this VIEW instead of the table in all your selects. This is not without hardcoding, but you could even generate the code to create your views out of INFORMATION_SCHEMA.TABLES. And when you call the views, there is no special action needed.
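A sketch of generating those CREATE VIEW statements from INFORMATION_SCHEMA.TABLES; the `_withName` suffix is just the naming used in the example above, so adjust to taste and review the output before running it:

```sql
-- Emit one CREATE VIEW statement per base table; each view adds a TableName column.
SELECT 'CREATE VIEW ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME + '_withName') + ' AS '
     + 'SELECT ''' + TABLE_NAME + ''' AS TableName, * '
     + 'FROM ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) + ';'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';
-- Copy the resulting rows into a new query window and execute them.
```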
You might use FOR XML AUTO...
There is a fully generic / ad-hoc approach too, but I doubt that this will help you:
Hint: as an example only, I use TOP 3 * FROM sys.objects. This would work with your tables too.
--FOR XML AUTO will add the table's name as the element's name
SELECT TOP 3 * FROM sys.objects FOR XML AUTO
--result
<sys.objects name="sysrscols" object_id="3" ...
--In case you use an alias, this alias is used
SELECT TOP 3 * FROM sys.objects AnyAlias FOR XML AUTO
--You would have to change this a little, if you want to enforce to include NULL values.
--But this is element centric XML now, the one before was attribute centric.
SELECT TOP 3 * FROM sys.objects FOR XML AUTO,ELEMENTS XSINIL
--This is the way you could use this, to simulate the generic table's name
SELECT EachRow.value('local-name(.)','nvarchar(max)') AS TableName
,EachRow.value('./@name','nvarchar(max)') AS name
,EachRow.value('./@object_id','int') AS object_id
,EachRow.value('./@schema_id','int') AS schema_id
--add all columns here
FROM
(
--Your original Statement goes here
SELECT TOP 3 * FROM sys.objects
FOR XML AUTO, TYPE
) A(TheSelectAsXml)
CROSS APPLY A.TheSelectAsXml.nodes('*') B(EachRow);
The result
TableName name object_id schema_id
sys.objects sysrscols 3 4
This would even work with joined sources. Try this one out:
SELECT TOP 3 * FROM sys.objects
INNER JOIN sys.all_columns ON sys.objects.object_id=sys.all_columns.object_id
FOR XML AUTO,TYPE
I don't think there is an inbuilt function available in T-SQL other than OBJECT_NAME(OBJECT_ID('TableName')), which requires hard-coding the table name.
However, as a workaround, the following query could fulfill your requirements:
Declare @tbname varchar(50);
Declare @SelectQuery varchar(2000);
set @tbname = 'YourTable';
set @SelectQuery = 'select *, OBJECT_NAME(OBJECT_ID(' + '''' + @tbname + '''' + ')) from ' + @tbname;
--print @SelectQuery;
exec (@SelectQuery);

How to write a FULL XML record to column based on row count

I have a stored procedure that I am using to write custom XML based on a vendor's requirements for integrating two systems. I would like to write each record to its own column to bypass the character limitation of a SQL column. I am including a very simplified version of my SP; the real one has 600 fields. I have 4,700 records in the table, and my XML is getting cut off after 200 rows are processed. Is there a way to return everything between the <Command action="Upsert" invalidLookupBehavior="Skip"> tag and the closing </Command> tag in its own column per record?
I'm stumped. I apologize for the duplicate post. TAB
USE [DEV]
GO
/****** Object: StoredProcedure [dbo].[MASTER_TABLE_XML_PHASE_I] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[MASTER_TABLE_XML_PHASE_I_SIMPLE]
AS
declare
@xml nvarchar(max),
@metaEMPLOYEE nvarchar(max)
CREATE TABLE #MASTER_TABLE_IMPORT
(
[EMP_COMPANY_ID] [int] NOT NULL,
[EMP_LAST_NAME] [nvarchar](50) NULL,
[EMP_MIDDLE_NAME] [nvarchar](50) NULL,
[EMP_FIRST_NAME] [nvarchar](50) NULL,
[EMP_PREFIX] [nchar](6) NULL,
[EMP_PREFERRED_NAME] [nvarchar](50) NULL,
[EMP_FORMER_NAME] [nvarchar](50) NULL,
[EMP_SYSTEM_NUMBER] [nvarchar](100) NOT NULL,
[IMP_CREATE_DATE] [datetime] NULL,
[IMP_LAST_UPDATE_DATE] [datetime] NULL,
)
INSERT INTO #MASTER_TABLE_IMPORT
(
[EMP_COMPANY_ID] ,
[EMP_LAST_NAME] ,
[EMP_MIDDLE_NAME] ,
[EMP_FIRST_NAME],
[EMP_PREFIX] ,
[EMP_PREFERRED_NAME],
[EMP_FORMER_NAME] ,
[EMP_SYSTEM_NUMBER] ,
[IMP_CREATE_DATE],
[IMP_LAST_UPDATE_DATE]
)
SELECT
EMP_COMPANY_ID ,
EMP_LAST_NAME,
EMP_MIDDLE_NAME ,
EMP_FIRST_NAME,
EMP_PREFIX ,
EMP_PREFERRED_NAME,
EMP_FORMER_NAME ,
T1.EMP_SYSTEM_NUMBER,
IMP_CREATE_DATE,
IMP_LAST_UPDATE_DATE
FROM MASTER_TABLE_PHASE_I AS T1
INNER JOIN (SELECT EMP_SYSTEM_NUMBER ,MAX(IMP_CREATE_DATE) AS MaxDate
FROM MASTER_TABLE_PHASE_I
GROUP BY EMP_SYSTEM_NUMBER) AS T2
ON (T1.EMP_SYSTEM_NUMBER = T2.EMP_SYSTEM_NUMBER AND T1.IMP_CREATE_DATE = T2.MaxDate)
/*OPEN XML FULL FILE TAGS*/
set @xml =
N'<DataChange><Commands>'
+ N'' + CHAR(10);
/*OPEN EMPLOYEE TABLE*/
/*OPEN EMPLOYEE FIELDS*/
select @metaEMPLOYEE =
CONVERT(nvarchar(max),
(
(select
/*OPEN XML UNIQUE RECORD TAGS*/
'<Command action="Upsert" invalidLookupBehavior="Skip"><Tables><Table name="EMPLOYEE"><Fields>'+
'<Field name="COMPANY_ID" lookupValue="False">84</Field>',
'<Field name="LAST_NAME">' + EMP_LAST_NAME + '</Field>',
'<Field name="MIDDLE_NAME">' + EMP_MIDDLE_NAME + '</Field>',
'<Field name="FIRST_NAME">' + EMP_FIRST_NAME + '</Field>',
'<Field name="PREFIX" lookupValue="True">' + EMP_PREFIX + '</Field>',
'<Field name="PREFERRED_NAME">' + EMP_PREFERRED_NAME + '</Field>',
'<Field name="FORMER_NAME">' + EMP_FORMER_NAME + '</Field>',
'<Field name="SYSTEM_NUMBER" recordIdentifier="True">' + EMP_SYSTEM_NUMBER + '</Field>',
/*CLOSE EMPLOYEE FIELDS*/
'</Fields>',
/*CLOSE EMPLOYEE TABLE*/
'</Table>',
/*CLOSE EMPLOYEE RECORD ALL TABLES*/
'</Tables>',
/*CLOSE XML COMMAND*/
/*CLOSE XML UNIQUE RECORD TAGS*/
'</Command>'
FROM #MASTER_TABLE_IMPORT
WHERE 1=1
FOR XML PATH(''),TYPE).value('(./text())[1]','NVARCHAR(MAX)')))
/*BUILD XML*/
/*CLOSING MASTER COMMAND*/
/*CLOSING MASTER DATA CHANGE*/
SET @xml = @xml + @metaEMPLOYEE + '</Commands></DataChange>'
SELECT @xml;
CREATE TABLE XMLDATA
(
xCol XML
) ;
INSERT INTO XMLDATA ( xCol )
SELECT @xml
DECLARE @Command VARCHAR(255)
DECLARE @Filename VARCHAR(100)
SELECT @Filename = 'C:\Client_XML\Data.dat'
SELECT @Command = 'bcp "select xCol from ' + DB_NAME()
+ '..XMLDATA" queryout '
+ @Filename + ' -w -T -S' + @@servername
EXECUTE master..xp_cmdshell @Command
--WRITE THE XML TO A FILE
SELECT CONVERT(nVARCHAR(max),BulkColumn)
FROM OPENROWSET(BULK 'C:\Client_XML\Data.dat', SINGLE_BLOB) AS x
DROP TABLE XMLDATA
Thank you Shnugo. I'm not a SQL developer; unfortunately we have a resource issue, so I thought I would give this a try. I understand enough to be dangerous and to ask questions correctly. This is so much better. Thank you, thank you. There are only a handful of fields in this example (I have nearly 600 fields and 4,700 records to process every 15 minutes, hence the need for this). When I run this, the XML is still truncated at row 1200. Is there a way to take the @myXML results and parse them into columns based on the row id of the data source, one row each? Is there a column count limitation in a SQL table? Here is a screen shot of what I would expect this table to look like.
SQL Table concept
Thank you for your help and your patience.
Pennie
I am determining that it is truncated in two ways:
1 - I am saving the file to a folder on my HD using xp_cmdshell
2 - I have saved the results from my query.
This is the last record in the returned file, row 1200:
<Command action="Upsert" invalidLookupBehavior="Skip"><Tables><Table name="EMPLOYEE" /><Fields><Field name="COMPANY_ID" lookupValue="False">84</Field><Field name="LAST_NAME">Auditore</Field><Field name="MIDDLE_NAME" /><Field name="FIRST_NAME">Ezio</Field><Field name="PREFIX" loo
ends right there.
Pennie
Hi Shnugo. How do I add Child Nodes ==> ChildTables to your concept? I just keep getting errors.
The FOR XML clause is not allowed in a ASSIGNMENT statement.
This is what the XML result should look like. I also have ChildTables nested in ChildTables.
<Command action="Upsert" invalidLookupBehavior="Skip"><Tables><Table name="EMPLOYEE"><Fields><Field name="COMPANY_ID" lookupValue="False">84</Field><Field name="LAST_NAME">Pinot</Field><Field name="FIRST_NAME">Gris</Field><Field name="PREFIX" lookupValue="True">Ms.</Field><Field name="SYSTEM_NUMBER" recordIdentifier="True">1603-XXXXX</Field><Field name="GENDER" lookupValue="True">Female</Field><Field name="MARITAL_STATUS" lookupValue="True">Married</Field><Field name="BIRTH_COUNTRY" lookupValue="True">Guatemala</Field><Field name="USER_ID_EMAIL">gris.pinot#me.com</Field></Fields><ChildTables><Table name="EMPLOYEE_CF"><Fields><Field name="CF_TEXT001">Gris</Field><Field name="CF_TEXT002">Pinot</Field><Field name="CF_TEXT003">Pinot</Field><Field name="CF_TEXT026">Family</Field><Field name="CF_TEXT027">No</Field><Field name="CF_NUMBER001">2</Field></Fields></Table><Table name="EMPLOYEE_PASSPORT"><Fields><Field name="ISSUE_COUNTRY" lookupValue="True">Guatemala</Field></Fields></Table><Table name="ASSIGNMENT"><Fields><Field name="NUMBER" recordIdentifier="True">1603-XXXXX</Field><Field name="FROM_COUNTRY" lookupValue="True">United Arab Emirates</Field><Field name="TO_COUNTRY" lookupValue="True">Malaysia</Field><Field name="TYPE" lookupValue="True">Long Term</Field><Field name="PHASE" lookupValue="True">New Assignment</Field><Field name="SCHEDULED_END_DATE">04/30/2019</Field><Field name="FROM_COMPANY_LEVEL1">From Level1</Field><Field name="FROM_COMPANY_LEVEL2">From Level2</Field><Field name="FROM_COMPANY_LEVEL3">From Level3</Field><Field name="FROM_COMPANY_LEVEL4">From Level4</Field><Field name="TO_COMPANY_LEVEL1">To Level1</Field><Field name="TO_COMPANY_LEVEL2">To Level2</Field><Field name="TO_COMPANY_LEVEL3">To Level3</Field><Field name="TO_COMPANY_LEVEL4">To Level4</Field></Fields><ChildTables><Table name="ASSIGNMENT_CF"><Fields><Field name="CF_TEXT002">No</Field><Field name="CF_TEXT005">1234567</Field><Field name="CF_TEXT009">1111111</Field><Field 
name="CF_TEXT010">2222222</Field><Field name="CF_DATE004">03/22/2016</Field><Field name="CF_DATE005">03/23/2016</Field></Fields></Table><Table name="ASSIGNMENT_EMPLOYEE_CONTACT"><Fields><Field name="LOCATION_TYPE" recordIdentifier="true">Current Address</Field><Field name="CONTACT_TYPE" recordIdentifier="true">Address</Field></Fields></Table><Table name="ASSIGNMENT_CONTACT"><Fields><Field name="TYPE" recordIdentifier="true">Manager</Field><Field name="NAME">Trinity</Field><Field name="EMAIL">trinity#me.com.com</Field><Field name="PHONE">5555555555</Field></Fields></Table><Table name="ASSIGNMENT_CONTACT"><Fields><Field name="TYPE" recordIdentifier="true">Home HR Contact</Field><Field name="NAME">Kim</Field><Field name="EMAIL">kim.#me.com</Field><Field name="PHONE">5555555551</Field></Fields></Table><Table name="ASSIGNMENT_CONTACT"><Fields><Field name="TYPE" recordIdentifier="true">HR Contact</Field><Field name="NAME">Pennie</Field><Field name="EMAIL">me#me.com</Field><Field name="PHONE">5555555552</Field></Fields></Table><Table name="ASSIGNMENT_MAILING_ADDRESS"><Fields><Field name="LOCATION_TYPE" recordIdentifier="true">Home Address</Field> </Fields></Table><Table name="POLICY"><Fields><Field name="NAME">424</Field></Fields></Table><Table name="UT_ACCOUNT_SPECIFIC_MISC_COMP_DATA"><Fields><Field name="HOME_BUSINESS_FUNCTION">Finance</Field><Field name="HOST_BUSINESS_FUNCTION">Finance</Field><Field name="EST_ASSIGNMENT_START_DATE">07/01/2016</Field></Fields></Table></ChildTables></Table></ChildTables></Table></Tables></Command>
I appreciate the help.
Pennie
@Shnugo This is still unresolved, as I am not able to use @Command. The other part of this is that I cannot write the XML from the result set; it is custom, and the result set is coming from a system I do not own. Still, this was all extremely helpful, and I am working with the suggestions. I would like to vote, as you were very responsive and so knowledgeable; maybe because I am so new I am not able to. Regardless, I am not clear on how to vote.
P
You are doing a whole lot of unnecessary work...
It seems that you do not need your temp table.
You should never build your XML as a string via string concatenation.
The result returned by SELECT ... FOR XML has (almost) no limit in size.
Size restrictions are - in most cases - bound to intermediate steps / conversions / computations / whatever, where the return type is not big enough.
If you go like this, the whole result is in @myXML.
From there you can continue however you like...
DECLARE @myXML XML;
WITH CTE_instead_of_TempTable AS
(
SELECT EMP_COMPANY_ID ,
EMP_LAST_NAME,
EMP_MIDDLE_NAME ,
EMP_FIRST_NAME,
EMP_PREFIX ,
EMP_PREFERRED_NAME,
EMP_FORMER_NAME ,
T1.EMP_SYSTEM_NUMBER,
IMP_CREATE_DATE,
IMP_LAST_UPDATE_DATE
--This is the source you are using to fill your temp table. I cannot know whether it's correct or not
--The two "FROM" lines are disturbing...
FROM MASTER_TABLE_PHASE_I AS T1
INNER JOIN (SELECT EMP_SYSTEM_NUMBER ,MAX(IMP_CREATE_DATE) AS MaxDate
FROM MASTER_TABLE_PHASE_I
GROUP BY EMP_SYSTEM_NUMBER) AS T2
ON (T1.EMP_SYSTEM_NUMBER = T2.EMP_SYSTEM_NUMBER AND T1.IMP_CREATE_DATE = T2.MaxDate)
)
SELECT @myXML=
(
SELECT
(
SELECT
'Upsert' AS [@action]
,'Skip' AS [@invalidLookupBehavior]
,(
SELECT
'EMPLOYEE' AS [Table/@name]
,(
SELECT
'COMPANY_ID' AS [Field/@name]
,'False' AS [Field/@lookupValue]
,84 AS [Field]
,''
,'LAST_NAME' AS [Field/@name]
,EMP_LAST_NAME AS [Field]
,''
,'MIDDLE_NAME' AS [Field/@name]
,EMP_MIDDLE_NAME AS [Field]
,''
,'FIRST_NAME' AS [Field/@name]
,EMP_FIRST_NAME AS [Field]
,''
,'PREFIX' AS [Field/@name]
,'True' AS [Field/@lookupValue]
,EMP_PREFIX AS [Field]
,''
,'PREFERRED_NAME' AS [Field/@name]
,EMP_PREFERRED_NAME AS [Field]
,''
,'FORMER_NAME' AS [Field/@name]
,EMP_FORMER_NAME AS [Field]
,''
,'SYSTEM_NUMBER' AS [Field/@name]
,'True' AS [Field/@recordIdentifier]
,EMP_SYSTEM_NUMBER AS [Field]
FOR XML PATH(''),TYPE
) AS [Fields]
FOR XML PATH('Tables'),TYPE
)
FROM CTE_instead_of_TempTable
FOR XML PATH('Command'),TYPE
)
FOR XML PATH('Commands'),ROOT('DataChange'),TYPE
)
SELECT @myXML;

How do I identify the column(s) responsible for “String or binary data would be truncated.”

I have an INSERT statement which looks like this:
INSERT INTO CLIENT_TABLE
SELECT NAME, SURNAME, AGE FROM CONTACT_TABLE
My example above is a basic one, but is there a way to pass in a SELECT statement and then check the returned column values against what the actual field sizes are?
Checking LEN against every column isn't practical. I am looking for something that is automated.
My approach to debugging that kind of problem is to remove columns from the SELECT one by one; when the error stops, you know which column is the cause of the truncation problem. Here are some tips on debugging:
Option 1: Start with the columns that hold more characters, such as VARCHAR columns. In your case, I think the NAME and SURNAME columns are the likely causes, since the AGE column is an integer and does not hold many characters. You should debug it something like that.
Option 2: You can investigate the columns in your final output. The final SELECT will return all columns and their values; you can then cross-check whether the values match what you entered in the UI, etc.
Ex. See the Expected vs. Actual Output result on the image below
Expected:
Actual Output:
My example in Option 2 shows that the truncated string is the SURNAME, as you can see.
NOTE: You can only use Option 2 if the query did not return an execution error, meaning the truncated string did not raise an error but created an unexpected split string, which we don't want.
If the query returns an error, your best choice is Option 1, which consumes more time but is worth it, because it is the surest way to find the exact column that causes the truncation problem.
Once you have found the columns that cause the problem, you can adjust the size of the columns, or instead limit the user's input; you can add validation for users to avoid the truncation problem. It is all up to you how you want the program to work, depending on your requirements.
My answer/suggestion is based on my experience with that kind of situation.
Hope this answer will help you. :)
Check the max length for each field; this way you can identify the fields that exceed the char limit specified in your table, e.g. CLIENT_TABLE.
SELECT Max(Len(NAME)) MaxNamePossible
, Max(Len(SURNAME)) MaxSurNamePossible
, Max(Len(AGE)) MaxAgePossible
FROM CONTACT_TABLE
Compare the result with Client_Table Design
For example, if "Name" in CLIENT_TABLE is of type VARCHAR(50) and the validation query (written above) returns more than 50 chars, then the "Name" field is causing the overflow.
There is a great answer by Aaron Bertrand to the question:
Retrieve column definition for stored procedure result set
If you used SQL Server 2012+ you could use sys.dm_exec_describe_first_result_set. Here is a nice article with examples. But, even in SQL Server 2008 it is possible to retrieve the types of columns of the query. Aaron's answer explains it in details.
In fact, in your case it is easier, since you have a SELECT statement that you can copy-paste, not something that is hidden in a stored procedure. I assume that your SELECT is a complex query returning columns from many tables. If it was just one table you could use sys.columns with that table directly.
So, create an empty #tmp1 table based on your complex SELECT:
SELECT TOP(0)
NAME, SURNAME, AGE
INTO #tmp1
FROM CONTACT_TABLE;
Create a second #tmp2 table based on the destination of your complex SELECT:
SELECT TOP(0)
NAME, SURNAME, AGE
INTO #tmp2
FROM CLIENT_TABLE;
Note, that we don't need any rows, only columns for metadata, so TOP(0) is handy.
Once those #tmp tables exist, we can query their metadata using sys.columns and compare it:
WITH
CTE1
AS
(
SELECT
c.name AS ColumnName
,t.name AS TypeName
,c.max_length
,c.[precision]
,c.scale
FROM
tempdb.sys.columns AS c
INNER JOIN tempdb.sys.types AS t ON
c.system_type_id = t.system_type_id
AND c.user_type_id = t.user_type_id
WHERE
c.[object_id] = OBJECT_ID('tempdb.dbo.#tmp1')
)
,CTE2
AS
(
SELECT
c.name AS ColumnName
,t.name AS TypeName
,c.max_length
,c.[precision]
,c.scale
FROM
tempdb.sys.columns AS c
INNER JOIN tempdb.sys.types AS t ON
c.system_type_id = t.system_type_id
AND c.user_type_id = t.user_type_id
WHERE
c.[object_id] = OBJECT_ID('tempdb.dbo.#tmp2')
)
SELECT *
FROM
CTE1
FULL JOIN CTE2 ON CTE1.ColumnName = CTE2.ColumnName
WHERE
CTE1.TypeName <> CTE2.TypeName
OR CTE1.max_length <> CTE2.max_length
OR CTE1.[precision] <> CTE2.[precision]
OR CTE1.scale <> CTE2.scale
;
Another possible way to compare:
WITH
... as above ...
SELECT * FROM CTE1
EXCEPT
SELECT * FROM CTE2
;
Finally
DROP TABLE #tmp1;
DROP TABLE #tmp2;
You can tweak the comparison to suit your needs.
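As an aside, on SQL Server 2012+ the sys.dm_exec_describe_first_result_set route mentioned earlier can replace the #tmp tables entirely; a sketch, using the SELECT from the question as the input text:

```sql
-- Describe the metadata of a query's first result set without executing it,
-- then compare name / type / max_length against the destination table's columns.
SELECT r.name, r.system_type_name, r.max_length
FROM sys.dm_exec_describe_first_result_set(
         N'SELECT NAME, SURNAME, AGE FROM CONTACT_TABLE',
         NULL,   -- no parameters in this query
         0) AS r -- browse_information_mode off
ORDER BY r.column_ordinal;
```

The same call with the destination's column list gives the other side of the comparison, so the whole check can be done as a join between two of these calls.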
A manual solution is very quick if you are using SQL Server Management Studio (SSMS). First capture the table structure of your SELECT statement into a working table:
SELECT COL1, COL2, ... COL99 INTO dbo.zz_CONTACT_TABLE
FROM CONTACT_TABLE WHERE 1=0;
Then in SSMS, right-click your original destination table (CLIENT_TABLE) and script it as create to a new SSMS window. Then right-click your working table (zz_CONTACT_TABLE) and script the creation of this table to a second SSMS window. Arrange both windows side by side and check the columns of zz_CONTACT_TABLE against CLIENT_TABLE. Differences in length and out-of-order columns will be immediately seen, even if there are hundreds of output columns.
Finally drop your working table:
DROP TABLE dbo.zz_CONTACT_TABLE;
Regarding an automated solution, it is difficult to see how this could work. Basically you are comparing a destination table (or a subset of columns in a destination table) against the output of a SELECT statement. I suppose you could write a stored procedure that takes two varchar parameters: the name of the destination table and the SELECT statement that would populate it. But this would not handle the case where only some columns of the destination are populated, and it would be more work than the manual solution above.
Here is some code to compare the columns of two row-producing SQL statements. It takes as parameters two row-sets, each specified by server name, database name, and T-SQL query. It can compare data in different databases and even on different SQL Servers.
--setup parameters
declare @Server1 as varchar(128)
declare @Database1 as varchar(128)
declare @Query1 as varchar(max)
declare @Server2 as varchar(128)
declare @Database2 as varchar(128)
declare @Query2 as varchar(max)
set @Server1 = '(local)'
set @Database1 = 'MyDatabase'
set @Query1 = 'select * from MyTable' --use a select
set @Server2 = '(local)'
set @Database2 = 'MyDatabase2'
set @Query2 = 'exec MyTestProcedure....' --or use a procedure
--calculate statement column differences
declare @SQLStatement1 as varchar(max)
declare @SQLStatement2 as varchar(max)
set @Server1 = replace(@Server1,'''','''''')
set @Database1 = replace(@Database1,'''','''''')
set @Query1 = replace(@Query1,'''','''''')
set @Server2 = replace(@Server2,'''','''''')
set @Database2 = replace(@Database2,'''','''''')
set @Query2 = replace(@Query2,'''','''''')
CREATE TABLE #Qry1Columns(
[colorder] [smallint] NULL,
[ColumnName] [sysname] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[TypeName] [sysname] COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[prec] [smallint] NULL,
[scale] [int] NULL,
[isnullable] [int] NULL,
[collation] [sysname] COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
CREATE TABLE #Qry2Columns(
[colorder] [smallint] NULL,
[ColumnName] [sysname] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
[TypeName] [sysname] COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[prec] [smallint] NULL,
[scale] [int] NULL,
[isnullable] [int] NULL,
[collation] [sysname] COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
set @SQLStatement1 =
'SELECT *
INTO #Qry1
FROM OPENROWSET(''SQLNCLI'',
''server=' + @Server1 + ';database=' + @Database1 + ';trusted_connection=yes'',
''select top 0 * from (' + @Query1 + ') qry'')
select colorder, syscolumns.name ColumnName, systypes.name TypeName, syscolumns.prec, syscolumns.scale, syscolumns.isnullable, syscolumns.collation
from tempdb.dbo.syscolumns
join tempdb.dbo.systypes
on syscolumns.xtype = systypes.xtype
where id = OBJECT_ID(''tempdb.dbo.#Qry1'')
order by 1'
insert into #Qry1Columns
exec(@SQLStatement1)
set @SQLStatement2 =
'SELECT *
INTO #Qry1
FROM OPENROWSET(''SQLNCLI'',
''server=' + @Server2 + ';database=' + @Database2 + ';trusted_connection=yes'',
''select top 0 * from (' + @Query2 + ') qry'')
select colorder, syscolumns.name ColumnName, systypes.name TypeName, syscolumns.prec, syscolumns.scale, syscolumns.isnullable, syscolumns.collation
from tempdb.dbo.syscolumns
join tempdb.dbo.systypes
on syscolumns.xtype = systypes.xtype
where id = OBJECT_ID(''tempdb.dbo.#Qry1'')
order by 1'
insert into #Qry2Columns
exec(@SQLStatement2)
select ISNULL( #Qry1Columns.colorder, #Qry2Columns.colorder) ColumnNumber,
#Qry1Columns.ColumnName ColumnName1,
#Qry1Columns.TypeName TypeName1,
#Qry1Columns.prec prec1,
#Qry1Columns.scale scale1,
#Qry1Columns.isnullable isnullable1,
#Qry1Columns.collation collation1,
#Qry2Columns.ColumnName ColumnName2,
#Qry2Columns.TypeName TypeName2,
#Qry2Columns.prec prec2,
#Qry2Columns.scale scale2,
#Qry2Columns.isnullable isnullable2,
#Qry2Columns.collation collation2
from #Qry1Columns
join #Qry2Columns
on #Qry1Columns.colorder=#Qry2Columns.colorder
You can tweak the final select statement to highlight any differences you wish. You can also wrap this up in a procedure and make a nice little user interface for it, so that results are literally a cut and paste away.

Finding Out If a Table is Being Used by a Report

Is there any way to find out if a particular table is being used by a report on the reporting server?
USE ReportServer
DECLARE @TEXTTOSEARCH AS VARCHAR(200)
SET @TEXTTOSEARCH = 'urtableorview'
;WITH XMLNAMESPACES
(DEFAULT 'http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition',
'http://schemas.microsoft.com/SQLServer/reporting/reportdesigner' AS rd)
SELECT name
, x.value('CommandType[1]', 'VARCHAR(100)') AS CommandType
, x.value('CommandText[1]','VARCHAR(MAX)') AS CommandText
, x.value('DataSourceName[1]','VARCHAR(150)') AS DataSource
FROM (SELECT name,
CAST(CAST(content AS VARBINARY(MAX)) AS XML) AS reportXML
FROM Catalog
WHERE content IS NOT NULL AND type != 3) AS a
CROSS APPLY reportXML.nodes('/Report/DataSets/DataSet/Query') r(x)
WHERE x.value('CommandType[1]', 'VARCHAR(50)') IS NULL
AND x.value('CommandText[1]','VARCHAR(MAX)') LIKE '%' + @TEXTTOSEARCH + '%'
ORDER BY name
I found this similar but simpler query in a TechNet article by Ajit Kumar Thakur: https://blogs.technet.microsoft.com/dbtechresource/2015/04/04/retrieve-ssrs-report-server-database-information/.
WITH Reports AS
(
SELECT *
, CONVERT( VARCHAR(MAX), CONVERT(VARBINARY(MAX), Content)) AS ReportContent
FROM Catalog
)
SELECT Name, [Path]
FROM Reports
WHERE ReportContent LIKE '%tablename%';
This is very useful when you need "to identify dependency [on] any table, procedure or function in any report. It extracts [the] XML content of each report, converts it to varchar and then search for given object [name]. Catalog table contains XML contents of all RDL files" in your report server.

MERGE Command in SQL Server

I have been using the statement
insert into target
select * from source
where [set of conditions]
for a while.
Recently I found this MERGE command, which will be more effective to use for my purpose, so that I can change the above statement to
MERGE target
USING source ON [my condition]
WHEN NOT MATCHED BY TARGET
THEN INSERT VALUES (source.col1, source.col2, source.col3)
But the problem for me is: let's say I have 20+ columns in my source table, then I have to list all of them. I need a way to specify it to insert source.*. Is there a way? I'm new to SQL. Appreciate your help.
Thanks in advance :)
Me too; I hate typing column names.
I normally build the MERGE statement in dynamic SQL.
I have a function that takes a table name as a parameter and returns a string containing all column names formatted properly with a table-name prefix, [] brackets, and commas, as in S.Col1, S.Col2, S.Col3.
I could also tell you that I build a temp table with the required columns and pass the temp table to my function, because sometimes you don't want a list of all columns. But that would probably be a confusing wobble, obscuring the important bits:
Use dynamic SQL.
Use a function to create a CSV list of columns.
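A sketch of such a helper, assuming a made-up name dbo.fnColumnListCsv and a caller-supplied prefix (the answer's actual function signature is not shown, so this is illustrative only):

```sql
-- Hypothetical helper: returns 'S.[Col1], S.[Col2], ...' for a given table.
CREATE FUNCTION dbo.fnColumnListCsv (@TableName sysname, @Prefix varchar(10))
RETURNS varchar(max)
AS
BEGIN
    DECLARE @csv varchar(max);

    -- FOR XML PATH('') concatenates one ', prefix.[name]' fragment per column;
    -- STUFF strips the leading ', '.
    SELECT @csv = STUFF((
        SELECT ', ' + @Prefix + QUOTENAME(c.name)
        FROM sys.columns AS c
        WHERE c.object_id = OBJECT_ID(@TableName)
        ORDER BY c.column_id
        FOR XML PATH('')), 1, 2, '');

    RETURN @csv;
END
```

Usage would look like SELECT dbo.fnColumnListCsv('dbo.source', 'S.'), and the result is spliced into the dynamic MERGE text.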
Everything that I have read regarding the MERGE statement says that you need to specify the columns for your INSERT statement. If you are looking for a quick way to get the INSERT statement, you can right-click the table in SSMS and select Script Table As->INSERT To->Clipboard. You can then paste this into your query and alter just the VALUES part.
Merge statement
There's simply no advantage of using MERGE in this situation. Why overcomplicate? Stick to the KISS principle, for chrissake.
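For the record, the KISS-style alternative is the plain INSERT the question started with, guarded by NOT EXISTS; the t.id = s.id key and the dbo/dbo2 schemas are assumptions for the example, and SELECT s.* relies on source and target having the same column layout:

```sql
-- Insert-only "upsert": same effect as MERGE ... WHEN NOT MATCHED THEN INSERT,
-- without listing columns when source and target have identical shapes.
INSERT INTO dbo.target
SELECT s.*
FROM dbo2.source AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.target AS t WHERE t.id = s.id);
```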
Anyways, here's the script:
declare
@targetTableName varchar(100) = 'target'
,@targetSchemaName varchar(20) = 'dbo'
,@sourceTableName varchar(100) = 'source'
,@sourceSchemaName varchar(20) = 'dbo2'
,@matchCondition varchar(50) = 't.id = s.id'
,@columns varchar(max)
set @columns = (select ','+quotename(c.name)
from sys.tables t
join sys.columns as c on t.object_id = c.object_id
join sys.schemas s on s.schema_id = t.schema_id
where t.name = @targetTableName and s.name = isnull(@targetSchemaName, s.name)
for xml path(''))
--the column list starts with a leading comma
declare @sql varchar(max) = '
merge #target t
using #source s on #matchCondition
when not matched then
insert (#columns)
values (#sourceColumns);'
set @sql =
replace(replace(replace(replace(replace(@sql
, '#matchCondition', @matchCondition)
--replace #columns with the column list, first comma removed
, '#columns', stuff(@columns, 1, 1, ''))
--replace #sourceColumns with the column list with the 's.' prefix, first comma removed
, '#sourceColumns', stuff(replace(@columns, ',', ',s.'),1,1,''))
, '#target', quotename(@targetSchemaName)+'.'+quotename(@targetTableName))
, '#source', quotename(@sourceSchemaName)+'.'+quotename(@sourceTableName))
print @sql
--exec(@sql)
And we'll get something like this:
merge [dbo].[target] t
using [dbo2].[source] s on t.id = s.id
when not matched then
insert ([column1], [column2], [column3], [column4])
values (s.[column1], s.[column2], s.[column3], s.[column4]);