Build temporary table with dynamic SQL in SQL Server 2008

To make a long story short...
I'm building a web app in which the user can select any combination of about 40 parameters. However, for one of the results they want (investment experience), I have to extract information from a different table, compare the values in six different columns (stock exp, mutual funds exp, etc.), and return only the highest value of the six for that specific record.
This is not the issue. The issue is that at runtime, my query to find the investment experience doesn't necessarily know the account id. Since a table scan would bring back well over half a million clients, that is not an option. So what I'm trying to do is edit a copy of my main dynamically built query, but instead of returning 30+ columns, it'll return just two, the accountid and experienceid (the PK of the experience table), so I can do the filtering deal.
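As an aside, the "highest of the six" step itself can be done per record with a VALUES list and CROSS APPLY (available from SQL Server 2008); a minimal sketch, where the experience column names beyond the two mentioned in the question are hypothetical:
select e.accountid,
       e.experienceid,
       x.max_exp
from   experience as e
cross apply (select max(v)
             from (values (e.stock_exp), (e.mutual_funds_exp), (e.bonds_exp),
                          (e.options_exp), (e.futures_exp), (e.annuities_exp)) as t(v)
            ) as x(max_exp)
where  e.accountid = @accountid  -- the account id that isn't known until runtime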
Some of you may define dynamic SQL a little differently than I do. My query is a string; depending on the arguments sent to my procedure, portions of the WHERE clause are turned on or off by switches. In the end I execute it. It's all done on the server side; all the web app does is send an array of arguments to my proc.
My over simplified code looks essentially like this:
declare @sql varchar(8000)
set @sql =
'select [columns]
into #tempTable
from [table]
[table joins]' + @dynamicallyBuiltWhereClause
exec(@sql)
After this part I try to use #tempTable for the investment experience filtering process, but I get an error telling me #tempTable doesn't exist.
Any and all help would be greatly appreciated.

The problem is that the scope of your temp table only exists within the exec() statement. You can turn your temp table into a "global" temp table by using two hash signs -> ##tempTable. However, I wonder why you are using a variable @dynamicallyBuiltWhereClause to generate your SQL statement.
I have done what you are doing in the past, but have had better success generating SQL from the application (using C# to generate my SQL).
Also, you may want to look into Table Variables. I have seen some strange instances using temp tables where an application re-uses a connection and the temp table from the last query is still there.
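A minimal sketch of the ##-table approach described above, reusing the placeholders from the question:
declare @sql varchar(8000)
set @sql =
'select accountid, experienceid
into ##tempTable
from [table]
[table joins]' + @dynamicallyBuiltWhereClause
exec(@sql)
-- the global temp table survives the exec() scope, so this now works:
select * from ##tempTable
drop table ##tempTable  -- clean up: ## tables are visible to every session until dropped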

Related

Create a generic procedure, which inserts data into any table

I'm currently working on a .NET application and want to make it as modular as possible. I've already created a basic SELECT procedure, which returns data by checking input parameters on the SQL Server side.
I want to create a procedure that parses structured data passed as a string and inserts its contents into the corresponding table in the database.
For example, I have a table as
CREATE TABLE ExampleTable (
id_exampleTable int IDENTITY (1, 1) NOT NULL,
exampleColumn1 nvarchar(200) NOT NULL,
exampleColumn2 int NULL,
exampleColumn3 int NOT NULL,
CONSTRAINT pk_exampleTable PRIMARY KEY ( id_exampleTable )
)
And my procedure starts as
CREATE PROCEDURE InsertDataIntoCorrespondingTable
@dataTable nvarchar(max), --name of the target table in my DB
@data nvarchar(max) --normalized string parameter as 'column1, column2, column3, etc.'
AS
BEGIN
IF @dataTable = 'table'
BEGIN
/**Parse this string and execute insert command**/
END
ELSE IF /**Other statements**/
END
TL;DR
So basically, I'm looking for a solution that can help me achieve something like this
EXEC InsertDataIntoCorrespondingTableByID
@dataTable = 'ExampleTable',
@data = '''exampleColumn1'', 2, 3'
Which should be equal to just
INSERT INTO ExampleTable SELECT 'exampleColumn1', 2, 3
Sure, I can push data as INSERT statements (for each and every one of the 14 tables inside the DB...), generated inside an app, but I want to conquer T-SQL :)
This might be reasonable (to some degree) on an RDBMS that supports structured data like JSON or XML natively, but doing it the way you are planning is going to cause some real pain-in-the-rear support and, more importantly, open a SQL injection attack vector. I would leave this to the realm of the web backend server where it belongs.
You are likely going to end up inventing your own structured data markup language and parser to solve this in SQL Server. That's a wheel that doesn't need to be reinvented. If you do end up building this, strongly consider going with JSON to avoid all the issues that a homegrown format inherently brings with it, assuming your version of SQL Server supports JSON parsing/packaging.
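For illustration, a minimal sketch of the JSON route on SQL Server 2016+, using the ExampleTable from the question (the JSON payload is hypothetical):
DECLARE @data nvarchar(max) = N'{"exampleColumn1":"abc","exampleColumn2":2,"exampleColumn3":3}';
INSERT INTO ExampleTable (exampleColumn1, exampleColumn2, exampleColumn3)
SELECT exampleColumn1, exampleColumn2, exampleColumn3
FROM OPENJSON(@data)
WITH (
    exampleColumn1 nvarchar(200),
    exampleColumn2 int,
    exampleColumn3 int
);  -- OPENJSON does the parsing, so no homegrown markup language is needed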
Your front end that packages your data into your SDML is going to have to assume column ordinals, but column ordinal is not something one should rely on in a database. SQL amateurs often do; I know from years in the industry, dealing with end users who get upset when a new column is introduced in a position they don't want. Adding a column to a table shouldn't break an application. If it does, that application has bad code.
Regarding the SQL injection attack vector, your SP code is going to get ugly. You'll need to parse each item in @data out into a variable of its own in order to properly parameterize the dynamic SQL being built. See the "working with parameters" section of the documentation for what that will look like. Failing to do this means that values passed in that @data SDML could become executable SQL instead of literals, and that would be very bad. This is not easy to solve in SP language. Where it IS easy to solve is in the backend server code. Every database library on the planet supports parameterized query building/execution natively.
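What that parameterization looks like, sketched against the ExampleTable from the question:
DECLARE @sql nvarchar(max) =
    N'INSERT INTO ExampleTable (exampleColumn1, exampleColumn2, exampleColumn3)
      VALUES (@p1, @p2, @p3);';
EXEC sp_executesql @sql,
    N'@p1 nvarchar(200), @p2 int, @p3 int',   -- parameter definitions
    @p1 = N'abc', @p2 = 2, @p3 = 3;           -- values bound as literals, never spliced into the SQL text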
Once you have this built, you will be dynamically generating an INSERT statement and dynamically generating variables or an array or some data structure to pass parameters into that INSERT statement to avoid SQL injection attacks. It's dynamic, on top of dynamic, on top of dynamic, which leads to:
From a support context, imagine that your application just totally throws up one day. You have to dive in and investigate. You track down the SDML that your front end created that caused the failure, and you open up your SP code to troubleshoot. Imagine what this code ends up looking like:
It has to determine if the table exists
It has to parse the SDML to get each literal
It has to read DB metadata to get the column list
It has to dynamically write the insert statement, listing the columns from metadata and dynamically creating sql parameters for the VALUES() list.
It has to execute sending a dynamic number of variables into the dynamically generated sql.
My support staff would hang me out to dry if they had to deal with that, and I'm the one paying them.
All of this is solved by using a proper backend to handle communication, deeper validation, sql parameter binding, error catching and handling, and all the other things that backend servers are meant to do.
I believe that your back end web server should be VERY aware of the underlying data model. It should be the connection between your view, your data, and your model. Leave the database to the things it's good at (reading and writing data). Leave your front end to the things that it's good at (presenting a UI for the end user).
I suppose you could do something like this (it may need a little extra work, e.g. filtering out identity columns):
declare @columns nvarchar(max);
select @columns = string_agg(name, ', ') within group (order by column_id)
from sys.all_columns
where object_id = object_id(@dataTable);
declare @sql nvarchar(max) = concat('INSERT INTO ', @dataTable, ' (', @columns, ') VALUES (', @data, ')');
exec sp_executesql @sql;  -- sp_executesql requires an nvarchar statement
But please don't. If this were a good idea, there would be tons of examples of how to do it. There aren't so it's probably not a good idea.
There are, however, tons of examples of using ORMs or auto-generated code instead - because that way your code is maintainable, debuggable and performant.

Query for Multiple Users - Best Practices

I currently have about 10 users that use their own personalized query for an internal process at my workplace. The user inputs a few values at the top of the query, hits execute, and voila, their report shows up in the grid. The source data tables they access are the same, but the tables created within are personalized with the suffix _User1, _User2, ... _User10. Each time they run the query, the previously created tables are dropped and created again. The entire query takes about 1 second to run.
The majority of the structure looks like this repeated 5 times for the 5 steps to get to their desired output:
DROP TABLE z
SELECT *
INTO z
FROM y
Now, the number of users is multiplying to 50, which means each tweak to the master query code will result in me changing 50 user-specific queries and sending them back out. Manageable but annoying with 10 users, completely unmanageable with 50.
My question is, what is the best way to go about structuring the database/query? Ideally I'd like to just have one query, one set of created tables (not 50). Since it only takes 1 second to run, would we run the risk of two or more users (with different inputs) running the query simultaneously, accessing the same tables and somehow getting bad data because they ran it at the exact same time?
Is there a specific way this is normally done? Hoping someone can shed some light.
Thanks
Disclaimer: As I've indicated in my comments, giving a bunch of users access directly to SSMS to run reports is a very bad idea. Get some sort of front-end, even a simple MS Access database - you would only need a single license to develop the database, and you could give the rest of the users Access Runtime, for instance. There are so many ways a user could really mess you up if they don't know what they're doing. I will offer some ideas below, but I don't recommend doing this.
One solution: use temp tables so you don't have to worry about each user's tables overlapping:
-- drop the table if it already exists
if object_id('tempdb..#z') is not null
DROP TABLE #z
SELECT *
INTO #z
FROM y
When you prefix a table name with #, it becomes a session-scoped temporary table, which means separate sessions will not see each other's temporary tables even if they have the same name.
Often it is not necessary to create a temp table at all unless you have some really complicated scenario. You should be able to make use of subqueries, views, CTEs, and stored procedures to generate the output in real time without any new tables being involved. You can even build views and procedures that reference other views so you can organize your complicated logic. For example, you might encapsulate the logic into a stored procedure like this:
CREATE PROCEDURE TheReport
(
@ReportID int,
@Name varchar(50),
@SomeField varchar(10)
)
AS
BEGIN
-- do some complicated query here
SELECT field1, field2 FROM Result
END
Then you don't even have to send updates to your users (unless the fields change). Just have their query call the stored procedure, and you can update the procedure directly at your convenience:
DECLARE @ReportID int
DECLARE @Name varchar(50)
DECLARE @SomeField varchar(10)
-- YOU CAN MODIFY THIS --
SET @ReportID = 5
SET @Name = 'MyName'
SET @SomeField = 'abc'
-- DON'T MODIFY BELOW THIS LINE --
EXEC [TheReport] @ReportID, @Name, @SomeField;

Caching multiple versions of the same table

I have a problem with generating a particular table on the fly due to expensive SQL requests. I would like to pre-generate the table and simply display it to the user. The problem is: there are multiple versions of the table, and new ones will be continuously added.
Please give me some ideas on how to design a table (?) to hold these tables.
One idea that I have is to append a version number to each row in the individual tables and dump them all into a single cache table. This way, I can easily display just the requested version by filtering on version number. Is there a better way?
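The version-number idea is workable; a minimal sketch, with hypothetical table and column names:
CREATE TABLE ReportCache (
    version_id int NOT NULL,       -- identifies which pre-generated version a row belongs to
    col_a varchar(100) NOT NULL,   -- placeholder report columns
    col_b int NULL
);
CREATE INDEX IX_ReportCache_version ON ReportCache (version_id);

-- displaying one version is then a cheap filtered read:
SELECT col_a, col_b FROM ReportCache WHERE version_id = 3;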
Without an example I may be making some assumptions here, but it sounds to me like dynamic SQL would do the trick.
declare @sql varchar(max)
select
@sql = 'SELECT * FROM MyTableName_' + v.version -- assumes version is stored as a string; cast it if it's numeric
from dbo.Version as v
where v.id = 1
exec (@sql)
If you end up using it, just know that dynamic sql is a cruel mistress. With one hand she give'th, the other she take'th away.

Cast Stored Procedure Result as a Table? [duplicate]

I currently have a stored procedure that runs a complex query and returns a data set. I'd like to cast this data set to a table (on which I can perform further queries) if at all possible. I know I can do this using a table-valued UDF but I'd prefer to avoid that at this point. Is there any way I can accomplish this task?
EDIT: OK... so the SProc I'm using (written by a third party; I'm not supposed to change it) runs a fairly complex select statement to return a bunch of line item data about purchase orders. I could recreate it as a UDF, but then I'd have to support the UDF and make sure it gets changed as and when our vendor changes their SProc. I'd like to further refine this line item info by a number of criteria such as (but not limited to) item numbers, vendor codes, cost centers, etc. All of this information is brought back by the original SProc; I just need to be able to manipulate it further. My thought process was that if I can somehow treat the results of the SProc as a table (or get them into a table format of some type) then I can run further queries against the original result set to limit by the criteria mentioned above. Please let me know if any further details are needed.
There are various means of sharing data between stored procedures - this link is pretty exhaustive.
But I'm curious why you want a table valued stored procedure (which doesn't exist in SQL Server) when there are table valued functions...
Cast Stored Procedure Result as a Table?
Yes and this is used quite often. It simply needs one or more select statements:
Create Procedure #Foo
As
Select object_id, name
From sys.columns
That said, you cannot join to this resultset nor can you easily consume it from another stored proc (although there is a way). Given your edit, it appears the question is whether you can consume the results of a stored proc by another stored proc. Technically, yes. You can populate a temp table with the results of a proc. However, you must declare your temp variable or temp table with the same column structure as is returned by the first resultset of the stored proc.
Declare @Data Table ( object_id int, name nvarchar(128) )
Insert @Data
Exec #Foo
Select *
From @Data
(Or use the far more clever OPENROWSET solution as mentioned by Cade Roux and OMG Ponies)
Have you considered using table-valued parameters? They are new in SQL 2008.
-- Edit --
Nope, never mind, they're only good for passing data into stored procedures.
You could try using a View instead of a Stored Procedure. Store your complex query as part of the view, and you have the functionality to perform more queries on the view.
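A minimal sketch of that, assuming the vendor's complex select reads from a hypothetical po_line table:
CREATE VIEW dbo.PurchaseOrderLines
AS
SELECT item_number, vendor_code, cost_center, cost
FROM dbo.po_line;   -- stand-in for the vendor's complex select
GO
-- refining the line items further is then an ordinary query:
SELECT * FROM dbo.PurchaseOrderLines WHERE vendor_code = 'V001';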

Access to result sets from within stored procedures (Transact-SQL, SQL Server)

I'm using SQL Server 2005, and I would like to know how to access different result sets from within transact-sql. The following stored procedure returns two result sets, how do I access them from, for example, another stored procedure?
CREATE PROCEDURE getOrder (@orderId as numeric) AS
BEGIN
select order_address, order_number from order_table where order_id = @orderId
select item, number_of_items, cost from order_line where order_id = @orderId
END
I need to be able to iterate through both result sets individually.
EDIT: Just to clarify the question: I want to test the stored procedures. I have a set of stored procedures which are used from a VB.NET client and which return multiple result sets. These are not going to be changed to table-valued functions; in fact, I can't change the procedures at all. Changing the procedures is not an option.
The result sets returned by the procedures are not the same data types or number of columns.
The short answer is: you can't do it.
From T-SQL there is no way to access multiple results of a nested stored procedure call, without changing the stored procedure as others have suggested.
To be complete, if the procedure were returning a single result, you could insert it into a temp table or table variable with the following syntax:
INSERT INTO #Table (...columns...)
EXEC MySproc ...parameters...
You can use the same syntax for a procedure that returns multiple results, but only if every result set has the same column structure: they are all inserted into the table, and result sets of differing shapes (as in this question) make the statement fail.
I was easily able to do this by creating a SQL2005 CLR stored procedure which contained an internal dataset.
You see, a new SqlDataAdapter will .Fill a multiple-result-set sproc into a multiple-table DataSet by default. The data in these tables can in turn be inserted into #Temp tables in the calling sproc you wish to write. dataset.GetXmlSchema will show you the schema of each result set.
Step 1: Begin writing the sproc which will read the data from the multi-result-set sproc
a. Create a separate table for each result set according to the schema.
CREATE PROCEDURE [dbo].[usp_SF_Read] AS
SET NOCOUNT ON;
CREATE TABLE #Table01 (Document_ID VARCHAR(100)
, Document_status_definition_uid INT
, Document_status_Code VARCHAR(100)
, Attachment_count INT
, PRIMARY KEY (Document_ID));
b. At this point you may need to declare a cursor to repeatedly call the CLR sproc you will create here:
Step 2: Make the CLR Sproc
Partial Public Class StoredProcedures
<Microsoft.SqlServer.Server.SqlProcedure()> _
Public Shared Sub usp_SF_ReadSFIntoTables()
End Sub
End Class
a. Connect using New SqlConnection("context connection=true").
b. Set up a command object (cmd) to contain the multiple-result-set sproc.
c. Get all the data using the following:
Dim dataset As DataSet = New DataSet
With New SqlDataAdapter(cmd)
.Fill(dataset) ' get all the data.
End With
'you can use dataset.GetXmlSchema at this point...
d. Iterate over each table and insert every row into the appropriate temp table (which you created in step one above).
Final note:
In my experience, you may wish to enforce some relationships between your tables so you know which batch each record came from.
That's all there was to it!
~ Shaun, Near Seattle
There is a kludge that you can do as well. Add an optional parameter @N int to your sproc. Default the value of @N to -1. If the value of @N is -1, then do every one of your selects. Otherwise, do the Nth select and only the Nth select.
For example,
if (@N = -1 or @N = 0)
select ...
if (@N = -1 or @N = 1)
select ...
The callers of your sproc who do not specify @N will get a result with more than one table. If you need to extract one or more of these tables from another sproc, simply call your sproc with a value specified for @N. You'll have to call the sproc once for each table you wish to extract. Inefficient if you need more than one table from the result set, but it does work in pure T-SQL.
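Applied to the getOrder procedure from the question, the kludge might look like this (a sketch; the table-variable column types are assumptions):
CREATE PROCEDURE getOrder (@orderId as numeric, @N int = -1) AS
BEGIN
    IF (@N = -1 OR @N = 0)
        select order_address, order_number from order_table where order_id = @orderId
    IF (@N = -1 OR @N = 1)
        select item, number_of_items, cost from order_line where order_id = @orderId
END

-- a caller can now pull just the second result set into a table variable:
DECLARE @Lines TABLE (item varchar(100), number_of_items int, cost money);
INSERT @Lines EXEC getOrder @orderId = 42, @N = 1;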
Note that there's an extra, undocumented limitation to the INSERT INTO ... EXEC statement: it cannot be nested. That is, the stored proc that the EXEC calls (or any proc it calls in turn) cannot itself do an INSERT INTO ... EXEC. There appears to be a single scratchpad per process that accumulates the result; if the calls are nested, you'll get an error when the caller has it open and the callee tries to open it again.
Matthieu, you'd need to maintain separate temp tables for each "type" of result. Also, if you're executing the same one multiple times, you might need to add an extra column to that result to indicate which call it resulted from.
Sadly, it is impossible to do this. The problem is, of course, that there is no SQL syntax to allow it. It happens 'beneath the hood', but you can't get at these other results in T-SQL, only from the application via ODBC or whatever.
There is a way round it, as with most things. The trick is to use OLE Automation in T-SQL to create an ADODB object which opens each resultset in turn and writes the results to the tables you nominate (or does whatever you want with the resultsets). You can also do it in DMO if you enjoy pain.
There are two ways to do this easily. Either stick the results in a temp table and then reference the temp table from your sproc, or put the results into an XML variable that is used as an OUTPUT parameter.
There are, however, pros and cons to both of these options. With a temporary table, the script that creates the calling procedure will need to create the temp table before the procedure itself is created. Also, you should clean up the temp table at the end of the procedure.
With the XML, it can be memory intensive and slow.
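A minimal sketch of the XML OUTPUT variant, again against the question's order_line table (the procedure name and element names are arbitrary):
CREATE PROCEDURE getOrderLinesXml (@orderId as numeric, @result xml OUTPUT) AS
BEGIN
    SET @result =
        (SELECT item, number_of_items, cost
         FROM order_line
         WHERE order_id = @orderId
         FOR XML PATH('line'), ROOT('lines'), TYPE);
END
GO
-- the caller shreds the XML back into rows:
DECLARE @x xml;
EXEC getOrderLinesXml @orderId = 42, @result = @x OUTPUT;
SELECT n.value('(item/text())[1]', 'varchar(100)')   AS item,
       n.value('(number_of_items/text())[1]', 'int') AS number_of_items,
       n.value('(cost/text())[1]', 'money')          AS cost
FROM @x.nodes('/lines/line') AS t(n);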
You could select them into temp tables or write table-valued functions to return result sets. Are you asking how to iterate through the result sets?
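For the table-valued-function route, a minimal sketch built on the order_line table from the question:
CREATE FUNCTION dbo.GetOrderLines (@orderId numeric(18,0))
RETURNS TABLE
AS
RETURN
    SELECT item, number_of_items, cost
    FROM order_line
    WHERE order_id = @orderId;
GO
-- unlike a stored procedure's result set, this can be joined and filtered directly:
SELECT * FROM dbo.GetOrderLines(42) WHERE cost > 100;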