CTE & Temp Tables Performance Issue

The query that I've been working on for a while was filled with 7 temp tables, until I had to replace them with 7 CTEs because OPENQUERY gave the following error when using temp tables:
Metadata discovery only supports temp tables when analyzing a single-statement batch.
When I run the query with temp tables, the run duration is 7:50.
When I run the query with CTEs, the run duration is 15:00.
Almost double the time! Is there any alternative to OPENQUERY that might make it run faster, while perhaps keeping my temp tables?
Current execution Query:
SET @XSql = 'SELECT * FROM OPENQUERY([server], ''' + REPLACE(@QSql, '''', '''''') + ''')'
EXEC(@XSql)
I used this for reference: Stored Procedure and populating a Temp table from a linked Stored Procedure with parameters
I need an optimal solution.
Open to suggestions!

Can you use EXEC ... AT SERVER? This worked fine for me:
EXEC ('CREATE TABLE #TestTable1 (ID int); CREATE TABLE #TestTable2 (ID int); SELECT * FROM #TestTable1, #TestTable2;') AT LinkedServer;
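If you also need the result set back on the local server, EXEC ... AT can feed a local temp table through INSERT ... EXEC. A sketch, assuming RPC Out is enabled for LinkedServer and using a placeholder remote query:

CREATE TABLE #Results (ID int);

INSERT INTO #Results
EXEC ('SELECT 42 AS ID;') AT LinkedServer;  -- placeholder remote query

SELECT * FROM #Results;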

Related

Executing dynamically created SQL Query and storing the Query results as a temporary table

I am creating a SQL query dynamically. After it's been created I want to execute it and store its results in a temporary table:
WITH [VALIDACCOUNTS] AS (EXEC(@sqlQuery))
You have two solutions for this:
The first solution is simply to use INSERT ... EXEC. This works if your procedure returns a single result set with a fixed design.
Simply create your temporary table with matching columns and datatypes. After that you can call this:
INSERT INTO #yourTemporaryTable
EXEC(@sql)
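For example, a minimal end-to-end sketch (the temp table's columns and datatypes must match whatever result set @sql produces; the ones here are made up):

-- Shape must match the dynamic query's result set
CREATE TABLE #yourTemporaryTable (ID INT, Name NVARCHAR(50));

DECLARE @sql NVARCHAR(MAX) = N'SELECT 1, N''example''';

INSERT INTO #yourTemporaryTable
EXEC (@sql);

SELECT * FROM #yourTemporaryTable;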
The second solution is to use OPENROWSET, which may have some side effects.
You can read more about it here.
-- OPENROWSET only accepts string literals, so the whole statement
-- must be built dynamically around @sql (requires the 'Ad Hoc
-- Distributed Queries' option; the connection string is an example):
DECLARE @rowset NVARCHAR(MAX) =
    N'INSERT INTO #yourTemptable
      SELECT * FROM OPENROWSET(''SQLNCLI'', ''Server=(local);Trusted_Connection=yes;'',
      ''' + REPLACE(@sql, '''', '''''') + ''')'
EXEC (@rowset)

Performance Dynamic SQL vs Temporary Tables

I'm wondering if copying an existing table into a temporary table results in worse performance compared to dynamic SQL.
To be concrete, I wonder if I should expect a performance difference between the following two SQL Server stored procedures:
CREATE PROCEDURE UsingDynamicSQL
(
    @ID INT ,
    @Tablename VARCHAR(100)
)
AS
BEGIN
    DECLARE @SQL VARCHAR(MAX)
    SELECT @SQL = 'Insert into Table2 Select Sum(ValColumn) From '
        + @Tablename + ' Where ID=' + CAST(@ID AS VARCHAR(10))
    EXEC(@SQL)
END
CREATE PROCEDURE UsingTempTable
(
    @ID INT ,
    @Tablename VARCHAR(100)
)
AS
BEGIN
    CREATE TABLE #TempTable (ValColumn FLOAT, ID INT)
    DECLARE @SQL VARCHAR(MAX)
    SELECT @SQL = 'Select ValColumn, ID From ' + @Tablename
        + ' Where ID=' + CAST(@ID AS VARCHAR(10))
    INSERT INTO #TempTable
    EXEC ( @SQL );
    INSERT INTO Table2
    SELECT SUM(ValColumn)
    FROM #TempTable;
    DROP TABLE #TempTable;
END
I'm asking because I'm currently using a procedure built in the latter style, where I create many temporary tables at the beginning as simple extracts of existing tables and afterwards work with those temporary tables.
Could I improve the performance of the stored procedure by getting rid of the temporary tables and using dynamic SQL instead? In my opinion the dynamic SQL version is a lot uglier to program - that's why I used temporary tables in the first place.
Table variables suffer performance problems because the query optimizer always assumes they contain exactly one row. If you have table variables holding more than about 100 rows, I'd switch them to temp tables.
Using dynamic SQL with EXEC(@sql) instead of EXEC sp_executesql @sql prevents the statement from being parameterized, so you lose plan reuse, which will probably hurt performance.
However, you are using dynamic SQL in both procedures. The only difference is that the second one has the unnecessary step of loading into a temp table first and then loading into the final table. Go with the first stored procedure you have, but switch to sp_executesql, as sketched below.
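For example, a parameterized version of the first procedure's dynamic SQL might look like this (a sketch; the table name itself cannot be a parameter, so QUOTENAME guards it instead):

DECLARE @SQL NVARCHAR(MAX)
SET @SQL = N'Insert into Table2 Select Sum(ValColumn) From '
    + QUOTENAME(@Tablename) + N' Where ID = @ID'
-- @ID is passed as a real parameter, so the plan can be reused
EXEC sp_executesql @SQL, N'@ID INT', @ID = @ID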
In the posted query the temporary table is an extra write.
It is not going to help.
Don't just time a query; look at the query plan.
If you have two queries, the query plan will tell you the split.
And there is a difference between a table variable and a temp table.
The temp table is faster; the query optimizer does more with a temp table.
A temporary table can help in a few situations.
One is when the output of a select is going to be used more than once: you materialize the output so it is only executed once.
Where you see this is with an expensive CTE that is evaluated many times. People falsely think a CTE is just executed once; no, it is just syntax.
Another is when the query optimizer needs help.
An example: you are doing a self join on a large table with multiple conditions, and some of the conditions eliminate most of the rows.
A query into a #temp can pre-filter the rows and also reduce the number of join conditions, as sketched below.
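A minimal sketch of that pattern (all table and column names here are hypothetical):

-- Pre-filter once into a temp table; the WHERE clause is the
-- condition that eliminates most of the rows
SELECT ID, GroupKey, Amount
INTO #filtered
FROM dbo.BigTable
WHERE Amount > 0;

-- The self join now runs against the much smaller #filtered set
SELECT a.ID, b.ID
FROM #filtered AS a
JOIN #filtered AS b
    ON a.GroupKey = b.GroupKey
    AND a.ID < b.ID;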
I agree with everyone else that you always need to test both... I'm putting it in an answer here so it's clearer.
If you have an index set up that is perfect for the final query, going to temp tables could be nothing but extra work.
If that's not the case, pre-filtering to a temp table may or may not be faster.
You can predict it at the extremes: if you're filtering down from a million rows to a dozen, I would bet it helps.
But otherwise it can be genuinely difficult to know without trying.
I agree with you that maintenance is also an issue, and lots of dynamic SQL is a maintenance cost to consider.

Statement 'SELECT INTO' is not supported in this version of SQL Server - SQL Azure

I am getting
Statement 'SELECT INTO' is not supported in this version of SQL Server
in SQL Azure, for the query below inside a stored procedure:
DECLARE @sql NVARCHAR(MAX)
    ,@sqlSelect NVARCHAR(MAX) = ''
    ,@sqlFrom NVARCHAR(MAX) = ''
    ,@sqlTempTable NVARCHAR(MAX) = '#itemSearch'
    ,@sqlInto NVARCHAR(MAX) = ''
    ,@params NVARCHAR(MAX)
SET @sqlSelect = 'SELECT
    IT.ITEMNR
    ,IT.USERNR
    ,IT.ShopNR
    ,IT.ITEMID'
SET @sqlFrom = ' FROM dbo.ITEM AS IT'
SET @sqlInto = ' INTO ' + @sqlTempTable + ' ';
IF (@cityId > 0)
BEGIN
    SET @sqlFrom = @sqlFrom +
        ' INNER JOIN dbo.CITY AS CI2
            ON CI2.CITYID = @cityId'
    SET @sqlSelect = @sqlSelect +
        '
    ,CI2.LATITUDE AS CITYLATITUDE
    ,CI2.LONGITUDE AS CITYLONGITUDE'
END
SELECT @params = N'@cityId int'
SET @sql = @sqlSelect + @sqlInto + @sqlFrom
EXEC sp_executesql @sql, @params, @cityId = @cityId
I have around 50,000 records, so I decided to use a temp table, but was surprised to see this error.
How can I achieve the same in SQL Azure?
Edit: This blog post http://blogs.msdn.com/b/sqlazure/archive/2010/05/04/10007212.aspx suggests CREATE-ing a table inside the stored procedure to store the data instead of a temp table. Is that safe under concurrency? Will it hurt performance?
Adding some points taken from http://blog.sqlauthority.com/2011/05/28/sql-server-a-quick-notes-on-sql-azure/
Every table must have a clustered index. Tables without a clustered index are not supported.
Each connection can use a single database. Multiple databases in a single transaction are not supported.
'USE DATABASE' cannot be used in Azure.
Global temp tables (or temp objects) are not supported.
As there is no concept of a cross-database connection, linked servers are not available in Azure at this moment.
SQL Azure is a shared environment, and because of that there is no concept of a Windows login.
Always drop TempDB objects once you no longer need them, as they create pressure on TempDB.
During bulk insert, use the BATCHSIZE option to limit the number of rows inserted. This limits the usage of transaction log space.
Avoid unnecessary grouping or ORDER BY operations, as they lead to high memory usage.
SELECT INTO is one of the many things that you unfortunately cannot do in SQL Azure.
What you'd have to do is first create the temporary table, then perform the insert. Something like:
CREATE TABLE #itemSearch (ITEMNR INT, USERNR INT, ShopNR INT, ITEMID INT)
INSERT INTO #itemSearch
SELECT IT.ITEMNR, IT.USERNR, IT.ShopNR ,IT.ITEMID
FROM dbo.ITEM AS IT
The new Azure DB Update preview has this problem resolved:
The V12 preview enables you to create a table that has no clustered index. This feature is especially helpful for its support of the T-SQL SELECT...INTO statement which creates a table from a query result.
http://azure.microsoft.com/en-us/documentation/articles/sql-database-preview-whats-new/
Create the table using the # prefix, e.g. CREATE TABLE #itemSearch, then use INSERT INTO. The scope of the temp table is limited to the session, so there will be no concurrency problems.
Well, as we all know, a SQL Azure table must have a clustered index; that is why SELECT INTO fails to copy data from one table into another.
If you want to migrate, you must first create a table with the same structure and then execute an INSERT INTO ... SELECT statement.
For a temporary table (prefixed with #) you don't need to create the index yourself.
How do you create the index and execute the INSERT INTO for a temp table?
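A minimal sketch of both steps, assuming ITEMNR can serve as the clustered index key (the column names come from the question; dbo.ItemSearchResult is a made-up name for the permanent variant):

-- Permanent table: SQL Azure (pre-V12) requires a clustered index
CREATE TABLE dbo.ItemSearchResult
(
    ITEMNR INT NOT NULL,
    USERNR INT,
    ShopNR INT,
    ITEMID INT,
    CONSTRAINT PK_ItemSearchResult PRIMARY KEY CLUSTERED (ITEMNR)
);

INSERT INTO dbo.ItemSearchResult (ITEMNR, USERNR, ShopNR, ITEMID)
SELECT IT.ITEMNR, IT.USERNR, IT.ShopNR, IT.ITEMID
FROM dbo.ITEM AS IT;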

Drop all temporary tables for an instance

I was wondering how, or if, it's possible to have a query which drops all temporary tables?
I've been trying to work something out using tempdb.sys.tables, but I'm struggling to format the name column into something that can then be dropped. Another factor making things a bit trickier is that temp table names often contain a '_', which means doing a REPLACE becomes a bit more fiddly (for me at least!).
Is there anything I can use that will drop all temp tables (local or global) without having to drop them all individually by name?
Thanks!
The point of temporary tables is that they are.. temporary. They disappear as soon as they go out of scope:
#temp created in a stored proc: when the stored proc exits
#temp created in a session: when the session disconnects
##temp: when the session that created it disconnects
If you find that you need to remove temporary tables manually, you need to revisit how you are using them.
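A quick illustration of the stored-proc rule (a sketch; dbo.DemoScope is a hypothetical procedure):

CREATE PROCEDURE dbo.DemoScope
AS
BEGIN
    CREATE TABLE #t (ID INT);  -- scoped to this procedure
    INSERT INTO #t VALUES (1);
END;
GO
EXEC dbo.DemoScope;
SELECT * FROM #t;  -- fails: Invalid object name '#t'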
For the global ones, this will generate and execute the statement to drop them all.
declare @sql nvarchar(max)
select @sql = isnull(@sql+';', '') + 'drop table ' + quotename(name)
from tempdb..sysobjects
where name like '##%'
exec (@sql)
It is a bad idea to drop other sessions' global temp tables, though.
For the local (to this session) temp tables, just disconnect and reconnect again.
The version below avoids all of the hassle of dealing with the '_'s. I just wanted to get rid of non-global temp tables, hence the '#[^#]%' in my WHERE clause; drop the [^#] if you want to drop global temp tables as well, or use '##%' if you only want to drop global temp tables.
The DROP statement seems happy to take the full name with the '_', etc., so we don't need to manipulate and edit these. The OBJECT_ID(...) IS NOT NULL check lets me skip tables that were not created by my session; presumably, since those tables are not 'visible' to me, they come back as NULL from this call. QUOTENAME is needed to make sure the name is correctly quoted/escaped. If you have no temp tables, @d_sql will still be the empty string, so we check for that before printing/executing.
DECLARE @d_sql NVARCHAR(MAX)
SET @d_sql = ''
SELECT @d_sql = @d_sql + 'DROP TABLE ' + QUOTENAME(name) + ';
'
FROM tempdb..sysobjects
WHERE name LIKE '#[^#]%'
AND OBJECT_ID('tempdb..'+QUOTENAME(name)) IS NOT NULL
IF @d_sql <> ''
BEGIN
    PRINT @d_sql
    -- EXEC( @d_sql )
END
In a stored procedure they are dropped automatically when the execution of the proc completes.
I normally come across the desire for this when I copy code out of a stored procedure to debug part of it and the stored proc does not contain the drop table commands.
Closing and reopening the connection works, as stated in the accepted answer. Rather than doing this manually after each execution, you can enable SQLCMD mode on the Query menu in SSMS
and then use the :connect command (adjust to your server/instance name):
:connect (local)\SQL2014
create table #foo(x int)
create table #bar(x int)
select *
from #foo
Can be run multiple times without problems. The messages tab shows
Connecting to (local)\SQL2014...
(0 row(s) affected)
Disconnecting connection from (local)\SQL2014...

automatically placing results of a called procedure into a select statement

I'm playing with some code from an article written by Peter Brawley, found here on page 6 of the PDF. I'm trying to figure out how to automate it so that the result of the procedure is automatically placed in the select query. Right now I call the procedure, export the result into a text file, go to the text file manually (point and click with the mouse), copy the result, and paste it into a select statement. I haven't been able to figure out how to insert the select statement into the procedure, or how to put the procedure's result into a table or variable in my database that I can reference from the select statement. Any ideas?
Here is the sample code from Peter Brawley, that I've been trying to automate:
use database;
DROP PROCEDURE IF EXISTS writesumpivot;
DELIMITER |
CREATE PROCEDURE writesumpivot(
db CHAR(64), tbl CHAR(64), pivotcol CHAR(64), sumcol CHAR(64)
)
BEGIN
DECLARE datadelim CHAR(1) DEFAULT '"';
DECLARE comma CHAR(1) DEFAULT ',';
DECLARE singlequote CHAR(1) DEFAULT CHAR(39);
SET @sqlmode = (SELECT @@sql_mode);
SET @@sql_mode='';
SET @pivotstr = CONCAT( 'SELECT DISTINCT CONCAT(', singlequote,
',SUM(IF(', pivotcol, ' = ', datadelim, singlequote,
comma, pivotcol, comma, singlequote, datadelim,
comma, sumcol, ',0)) AS `',
singlequote, comma, pivotcol, comma, singlequote, '`',
singlequote, ') AS sumpivotarg FROM ', db, '.', tbl,
' WHERE ', pivotcol, ' IS NOT NULL' );
-- UNCOMMENT TO SEE THE MIDLEVEL SQL:
-- SELECT @pivotstr;
PREPARE stmt FROM @pivotstr;
EXECUTE stmt;
DROP PREPARE stmt;
SET @@sql_mode=@sqlmode;
END
|
DELIMITER ;
call writesumpivot('database', 'table', 'pivotcol','sumcol');
Then the Select statement is as follows:
SELECT
infoField
[results of the call]
FROM
database.table
GROUP BY infoField;
Assuming I've run the call, exported the results, copied them, and pasted them into the select statement, my personal results of the call in the SELECT query would look something like this:
SELECT
infoField
,SUM(IF(pivotcol = "Yellow",sumcol,0)) AS `Yellow`
,SUM(IF(pivotcol = "Red",sumcol,0)) AS `Red`
,SUM(IF(pivotcol = "Purple",sumcol,0)) AS `Purple`
,SUM(IF(pivotcol = "Orange",sumcol,0)) AS `Orange`
,SUM(IF(pivotcol = "Green",sumcol,0)) AS `Green`
,SUM(IF(pivotcol = "Blue",sumcol,0)) AS `Blue`
,SUM(IF(pivotcol = "White",sumcol,0)) AS `White`
FROM database.table
GROUP BY infoField;
Running the above select statement gives me the pivot table that I need. I'm trying to figure out how to incorporate this into a website, which is why it needs to be automated.
I tried inserting a CREATE TABLE and then referencing the table, but didn't get the desired results.
Edited the last section of the PROCEDURE as follows:
-- SELECT @pivotstr;
DROP TABLE IF EXISTS temp2;
CREATE TABLE IF NOT EXISTS temp2(sumpivotarg varchar(8000));
PREPARE stmt FROM @pivotstr;
...
changed call and select as follows:
call writesumpivot('database','table','pivotcol','sumcol');
insert into temp2(sumpivotarg) values(@pivotstr);
SELECT
table.infoField, temp2.sumpivotarg
FROM table, temp2
GROUP BY infoField
The results from this were the generic code rather than the summed contents of the cells in the database. It looks something like this:
infoField | sumpivotarg <-- Col Headings
123 | SELECT DISTINCT CONCAT('Sum(if(pivotcol=",pivotcol",sumcol,0)) AS'pivotcol,'')..
124 | SELECT DISTINCT CONCAT('Sum(if(pivotcol=",pivotcol",sumcol,0)) AS'pivotcol,'')..
125 | select DISTINCT CONCAT('Sum(if(pivotcol=",pivotcol",sumcol,0)) AS'pivotcol,'')..
I do not mean any disrespect towards MySQL, but this whole writing-to-a-temp-table solution for passing tabular data between stored procedures is suboptimal and dangerous (in real-world transaction processing). I truly hope that the MySQL team will build in some enterprise-level stored procedure functionality. Also, MySQL functions not being able to return tables is a distinct disadvantage.
I have been slowly moving processes over to Linux and MySQL from MSSQL. The shortcomings of MySQL in the procedure and function department are forcing some major kludgey rewrites (a la temp tables and globals, etc.).
I have been writing SPs for about 20 years (Sybase before SQL Server) and feel strongly that using dynamic SQL does not take advantage of the server-side database. Many folks try to implement a data layer at the client level, but the server is better suited to this task. It is a natural division of functionality and data. Also, simultaneously running multiple precompiled calls at the server is quite a bit more optimal than repeated calls to the server for the same processes.
Come on MySQL team, I am keeping my fingers crossed....
You could create a temp table in your DB, insert the result of the stored procedure execution into it, and afterwards use that temp table inside your select statement.
Here's an answer that shows how to do that:
Use result set of mysql stored procedure in another stored procedure
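A minimal sketch of that approach, assuming writesumpivot is modified to fill the temp table itself (pivot_result and the INSERT wrapper are my additions, not part of Peter Brawley's original code):

-- Temp table to receive the generated pivot expressions
DROP TEMPORARY TABLE IF EXISTS pivot_result;
CREATE TEMPORARY TABLE pivot_result (sumpivotarg TEXT);

-- Inside writesumpivot, wrap @pivotstr in an INSERT before executing:
--   SET @pivotstr = CONCAT('INSERT INTO pivot_result ', @pivotstr);
--   PREPARE stmt FROM @pivotstr;
--   EXECUTE stmt;
--   DROP PREPARE stmt;

CALL writesumpivot('database', 'table', 'pivotcol', 'sumcol');
SELECT sumpivotarg FROM pivot_result;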
Just to mention a similar question:
MySQL How to INSERT INTO temp table FROM Stored Procedure