What are the practical uses of Denali's WITH RESULT SETS, as far as SQL Server stored procs are concerned, apart from renaming the column names and changing the data types at runtime?
And what is the benefit of changing the data types at runtime with WITH RESULT SETS?
e.g.
Alter PROCEDURE test_Proc
AS
BEGIN
SELECT * FROM tbl_Test
END
GO
EXEC test_Proc
WITH RESULT SETS
(
( Id int,
EmpName varchar(50),
PNo varchar(50)
)
)
Even if the column data types have been changed, what do we actually gain from that?
However, this article gives some idea about its benefit in SSIS. But I am more interested in the perspective of a SQL Server stored proc talking to a front-end application (e.g. C#) and the like.
Well, for one, say your application is calling sp_who2, and it is storing SPID in an int32. sp_who2 returns SPID as a char, requiring you to perform special handling in all of your apps to convert the output to an int32. If you create a wrapper procedure, you can do this in one place, and without having to dump the results into a temp table first. One more curious case with sp_who2 is that it returns two identical SPID columns - with WITH RESULT SETS you can rename one of them (say, to redundant_SPID) so that your apps never see multiple columns with the same name.
Another use case is say you are changing a data type from int64 to int32 or int32 to varchar, but you can't change all of your apps at once. You can change the "modern" apps to use the new data type while leaving the other "not changeable right now" apps to use the old data type. This means you can split out the deployment and testing of your apps one by one instead of making a wholesale data type change across all of the apps.
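For example, here is a minimal sketch of that second case, reusing test_Proc from the question. The apps you can't change yet keep calling test_Proc and get the old types; the migrated apps call a thin wrapper (test_Proc_NewTypes is a name I've made up) that re-declares the same result set with the new types:
CREATE PROCEDURE test_Proc_NewTypes
AS
BEGIN
    EXEC test_Proc
    WITH RESULT SETS
    (
        ( Id bigint,           -- widened from int
          EmpName varchar(50),
          PNo int              -- changed from varchar(50); fails at runtime if a stored value cannot be converted
        )
    );
END
GO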
I currently have about 10 users that use their own personalized query for an internal process at my workplace. The user inputs a few values at the top of the query, hits execute, and voila, their report shows up in the grid. The source data tables they access are the same, but the created tables within are personalized with the suffix _User1, _User2...User10. Each time they run the query, the previously created tables are dropped and created again. The entire query takes about 1 second to run.
The majority of the structure looks like this repeated 5 times for the 5 steps to get to their desired output:
DROP TABLE z
SELECT *
INTO z
FROM y
Now, the number of users is multiplying to 50, and that means that each tweak in the master query code will result in me changing 50 user-specific queries and sending them back out. Manageable but annoying with 10 users, completely unmanageable with 50.
My question is, what is the best way to go about structuring the database/query? Ideally I'd like to just have one query, one set of created tables (not 50). Since it only takes 1 second to run, would we run the risk of two or more users (with different inputs) running the query simultaneously, accessing the same tables and somehow getting bad data because they ran it at the exact same time?
Is there a specific way this is normally done? Hoping someone can shed some light.
Thanks
Disclaimer: As I've indicated in my comments, giving a bunch of users access directly to SSMS to run reports is a very bad idea. Get some sort of front-end, even a simple MS Access database - you would only need a single license to develop the database, and you could give the rest of the users Access Runtime, for instance. There are so many ways a user could really mess you up if they don't know what they're doing. I will offer some ideas below, but I don't recommend doing this.
One solution: use temp tables so you don't have to worry about each user's tables overlapping:
-- drop the table if it already exists
if object_id('tempdb..#z') is not null
DROP TABLE #z
SELECT *
INTO #z
FROM y
When you prefix a table name with #, it becomes a connection-scoped temporary table, which means separate sessions will not see the temporary tables in other sessions even if they have the same name.
Often it is not necessary to create a temp table unless you have some really complicated scenario. You should be able to make use of subqueries, views, CTEs, and stored procedures to generate the output in real time without any new tables being involved. You can even build views and procedures that reference other views so you can organize your complicated logic. For example, you might encapsulate the logic into a stored procedure like this:
CREATE PROCEDURE TheReport
(
@ReportID int,
@Name varchar(50),
@SomeField varchar(10)
)
AS
BEGIN
-- do some complicated query here
SELECT field1, field2 FROM Result Q
END
Then you don't even have to send updates to your users (unless the fields change). Just have their query call the stored procedure, and you can update the procedure directly at your convenience:
DECLARE @ReportID int
DECLARE @Name varchar(50)
DECLARE @SomeField varchar(10)
-- YOU CAN MODIFY THIS --
SET @ReportID = 5
SET @Name = 'MyName'
SET @SomeField = 'abc'
-- DON'T MODIFY BELOW THIS LINE --
EXEC [TheReport] @ReportID, @Name, @SomeField;
I am trying to create a stored procedure that takes a table name as an argument and executes some queries on that table.
So...
CREATE PROCEDURE blabla
@TableName nvarchar(50)
AS
DROP TABLE @TableName -- just an example, real queries are much longer
GO
This query gives me incorrect syntax error.
I know I can always use sp_executesql procedure, but I want a neater way where I don't need to worry about building an endless sql string.
Thanks
Here is a good article on why not to use Dynamic SQL in most cases as well as how to use it properly when it is the best solution:
http://www.sommarskog.se/dynamic_sql.html
Basically, doing what you are looking to do has a number of issues, including not allowing the system to properly check for permission issues before executing, not being able to optimize the stored procedure, and (most importantly) opening yourself up to SQL injection. You can mitigate this last issue somewhat but it involves a much more complex statement. Here is a quote from the above article:
Passing table and column names as parameters to a procedure with dynamic SQL is rarely a good idea for application code. (It can make perfect sense for admin tasks.) As I've said, you cannot pass a table or a column name as a parameter to sp_executesql; you must interpolate it into the SQL string. Still, you should protect it against SQL injection as a matter of routine - it could well be that it comes from user input.
To this end, you should use the built-in function quotename() (added in SQL 7). quotename() takes two parameters: the first is a string, and the second is a pair of delimiters to wrap the string in. The default for the second parameter is []. Thus, quotename('Orders') returns [Orders]. quotename() takes care of nested delimiters, so if you have a really crazy table name like Left]Bracket, quotename() will return [Left]]Bracket].
Note that when you work with names with several components, each component should be quoted separately. quotename('dbo.Orders') returns [dbo.Orders], but that is a table in an unknown schema of which the first four characters are d, b, o and a dot. As long as you only work with the dbo schema, best practice is to add dbo in the dynamic SQL and only pass the table name. If you work with different schemas, pass the schema as a separate parameter. (Although you could use the built-in function parsename() to split up a #tblname parameter in parts.)
I know you want a "neater" way of creating a dynamic statement, but the reality is that not only is that not possible for what you want to do, you really need to make the statement even more complex in order to ensure that the stored procedure is safe. I would try very hard to look at a different way to solve this issue (the article had a few suggestions). If you can avoid making this statement into dynamic SQL, you really should.
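To make this concrete, here is a rough sketch (my own, based on the article's advice, not a recommendation to go this route) of what the question's procedure might look like with quotename() and sp_executesql, assuming the tables all live in the dbo schema:
CREATE PROCEDURE blabla
    @TableName sysname
AS
BEGIN
    -- Refuse anything that is not an existing user table in dbo (assumed schema)
    IF OBJECT_ID(N'dbo.' + quotename(@TableName), N'U') IS NULL
    BEGIN
        RAISERROR('Unknown table: %s', 16, 1, @TableName);
        RETURN;
    END;

    DECLARE @sql nvarchar(max);
    SET @sql = N'DROP TABLE dbo.' + quotename(@TableName) + N';';
    EXEC sp_executesql @sql; -- the name still has to be interpolated; only values can be real parameters
END
GO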
There are very few places that parameters can be used in T-SQL. Usually, it's exactly the places where you would find a quoted string - not just any arbitrary place within the query (where the query is necessarily in a string form anyway)
E.g., you could use a parameter or variable to replace 'hello' below:
SELECT * from Table2 where ColA = 'hello'
But you couldn't use it where Table2 appears. I don't know why people seem to expect such things to be possible in T-SQL, when it's generally not possible in most other programming languages either, outside of exec/eval style functions.
If you have multiple tables that share the same structure (names and types of columns), it generally suggests that what you should actually have is a single table, with possibly additional column(s) that distinguish between rows that would originally be in different tables. E.g. if you currently have:
CREATE TABLE MaleEmployees (
EmployeeNo int not null,
Name varchar(50) not null
)
and
CREATE TABLE FemaleEmployees (
EmployeeNo int not null,
Name varchar(50) not null
)
You should instead have:
CREATE TABLE Employees (
EmployeeNo int not null,
Name varchar(50) not null,
Gender char(1) not null,
constraint CK_Gender_Valid CHECK (Gender in ('M','F'))
)
You can then query this Employees table, regardless of gender, rather than trying to parametrize the table name within your query. Of course, the above is an exaggerated example.
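Queries then filter on that column with an ordinary value parameter instead of trying to swap table names, e.g.:
DECLARE @Gender char(1);
SET @Gender = 'F'; -- the value takes the place of what used to be a choice of table
SELECT EmployeeNo, Name
FROM Employees
WHERE Gender = @Gender;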
declare @l nvarchar(max)
set @l = 'DROP TABLE ' + @TableName
exec (@l)
But if that's what you mean by 'endless string', I'm not sure what you want.
The correct syntax (notice the begin):
CREATE PROCEDURE blabla
@TableName nvarchar(50)
AS
begin
DROP TABLE @TableName -- just an example, real queries are much longer
END
GO
According to this forum discussion, SQL Server (I'm using 2005 but I gather this also applies to 2000 and 2008) silently truncates any varchar you specify as a stored procedure parameter to the declared length of that parameter, even if inserting that string directly using an INSERT would actually cause an error. e.g. If I create this table:
CREATE TABLE testTable(
[testStringField] [nvarchar](5) NOT NULL
)
then when I execute the following:
INSERT INTO testTable(testStringField) VALUES(N'string which is too long')
I get an error:
String or binary data would be truncated.
The statement has been terminated.
Great. Data integrity preserved, and the caller knows about it. Now let's define a stored procedure to insert that:
CREATE PROCEDURE spTestTableInsert
@testStringField [nvarchar](5)
AS
INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO
and execute it:
EXEC spTestTableInsert @testStringField = N'string which is too long'
No errors, 1 row affected. A row is inserted into the table, with testStringField as 'strin'. SQL Server silently truncated the stored procedure's varchar parameter.
Now, this behaviour might be convenient at times but I gather there is NO WAY to turn it off. This is extremely annoying, as I want the thing to error if I pass too long a string to the stored procedure. There seem to be 2 ways to deal with this.
First, declare the stored proc's @testStringField parameter as size 6, and check whether its length is over 5. This seems like a bit of a hack and involves irritating amounts of boilerplate code.
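Presumably that would look something like this, repeated for every string parameter (my sketch, not tested code):
CREATE PROCEDURE spTestTableInsert
    @testStringField [nvarchar](6) -- one character longer than the column
AS
    IF LEN(@testStringField) > 5   -- note: LEN ignores trailing spaces
    BEGIN
        RAISERROR('testStringField must be 5 characters or fewer.', 16, 1)
        RETURN
    END
    INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO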
Second, just declare ALL stored procedure varchar parameters to be varchar(max), and then let the INSERT statement within the stored procedure fail.
The latter seems to work fine, so my question is: is it a good idea to use varchar(max) ALWAYS for strings in SQL Server stored procedures, if I actually want the stored proc to fail when too long a string is passed? Could it even be best practice? The silent truncation that can't be disabled seems stupid to me.
It just is.
I've never noticed a problem though because one of my checks would be to ensure my parameters match my table column lengths. In the client code too. Personally, I'd expect SQL to never see data that is too long. If I did see truncated data, it'd be bleeding obvious what caused it.
If you do feel the need for varchar(max) beware a massive performance issue because of datatype precedence. varchar(max) has higher precedence than varchar(n) (longest is highest). So in this type of query you'll get a scan not a seek and every varchar(100) value is CAST to varchar(max)
UPDATE ... WHERE varchar100column = @varcharmaxvalue
Edit:
There is an open Microsoft Connect item regarding this issue.
And it's probably worthy of inclusion in Erland Sommarskog's Strict settings (and matching Connect item).
Edit 2, after Martin's comment:
DECLARE @sql VARCHAR(MAX), @nsql nVARCHAR(MAX);
SELECT @sql = 'B', @nsql = 'B';
SELECT
LEN(@sql),
LEN(@nsql),
DATALENGTH(@sql),
DATALENGTH(@nsql)
;
DECLARE @t table(c varchar(8000));
INSERT INTO @t values (replicate('A', 7500));
SELECT LEN(c) from @t;
SELECT
LEN(@sql + c),
LEN(@nsql + c),
DATALENGTH(@sql + c),
DATALENGTH(@nsql + c)
FROM @t;
Thanks, as always, to StackOverflow for eliciting this kind of in-depth discussion. I have recently been scouring through my stored procedures to make them more robust using a standard approach to transactions and try/catch blocks. I disagree with Joe Stefanelli that "My suggestion would be to make the application side responsible", and fully agree with Jez: "Having SQL Server verify the string length would be much preferable". The whole point for me of using stored procedures is that they are written in a language native to the database and should act as a last line of defence. On the application side the difference between 255 and 256 is just a meaningless number, but within the database environment a field with a maximum size of 255 will simply not accept 256 characters. The application validation mechanisms should reflect the backend db as best they can, but maintenance is hard, so I want the database to give me good feedback if the application mistakenly allows unsuitable data. That's why I'm using a database instead of a bunch of text files with CSV or JSON or whatever.
I was puzzled why one of my SPs threw the 8152 error and another silently truncated. I finally twigged: The SP which threw the 8152 error had a parameter which allowed one character more than the related table column. The table column was set to nvarchar(255) but the parameter was nvarchar(256). So, wouldn't my "mistake" address gbn's concern: "massive performance issue"? Instead of using max, perhaps we could consistently set the table column size to, say, 255 and the SP parameter to just one character longer, say 256. This solves the silent truncation problem and doesn't incur any performance penalty.
Presumably there is some other disadvantage that I haven't thought of, but it seems a good compromise to me.
Update:
I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.
Update 2:
Please see Pyitoechito's answer on this page.
The same behavior can be seen here:
declare @testStringField [nvarchar](5)
set @testStringField = N'string which is too long'
select @testStringField
My suggestion would be to make the application side responsible for validating the input before calling the stored procedure.
Update: I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.
This is probably occurring because the 256th character in the string is white-space. VARCHARs will truncate trailing white-space on insertion and just generate a warning. So your stored procedure is silently truncating your strings to 256 characters, and your insertion is truncating the trailing white-space (with a warning). It will produce an error when said character is not white-space.
Perhaps a solution would be to make the stored procedure's VARCHAR a suitable length to catch a non-white-space character. VARCHAR(512) would probably be safe enough.
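A scaled-down illustration of that theory (my own sketch, not from the original posts): truncation that only drops trailing spaces is silent, while truncation that would drop a visible character raises error 8152.
DECLARE @t table (c varchar(5) NOT NULL);
INSERT INTO @t VALUES ('abcde '); -- sixth character is a space: stored as 'abcde', no error
INSERT INTO @t VALUES ('abcdef'); -- sixth character is 'f': "String or binary data would be truncated"
SELECT c, LEN(c) FROM @t;         -- only the first row made it in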
One solution would be to:
Change all incoming parameters to be varchar(max)
Have the stored procedure declare private variables of the correct data length (simply copy and paste all input parameters and add an "Int" suffix to the names)
Declare a table variable with the column names the same as variable names
Insert into the table a row where each variable goes into the column with the same name
Select from the table into internal variables
This way your modifications to the existing code are going to be very minimal like in the sample below.
This is the original code:
create procedure spTest
(
@p1 varchar(2),
@p2 varchar(3)
)
This is the new code:
create procedure spTest
(
@p1 varchar(max),
@p2 varchar(max)
)
declare @p1Int varchar(2), @p2Int varchar(3)
declare @test table (p1 varchar(2), p2 varchar(3))
insert into @test (p1,p2) values (@p1, @p2)
select @p1Int=p1, @p2Int=p2 from @test
Note that if the length of an incoming parameter is greater than the limit, then instead of silently chopping off the string, SQL Server will throw an error.
You could always throw an if statement into your SPs that checks the length of the parameters, and if they're greater than the specified length, throw an error. This is rather time consuming though, and would be a pain to update if you change the data size.
This isn't the Answer that'll solve your problem today, but it includes a Feature Suggestion for MSSQL to consider adding, that would resolve this issue.
It is important to call this out as a shortcoming of MSSQL, so we may help them resolve it by raising awareness of it.
Here's the formal Suggestion if you'd like to vote on it:
https://feedback.azure.com/forums/908035-sql-server/suggestions/38394241-request-for-new-rule-string-truncation-error-for
I share your frustration.
The whole point of setting Character-Size on Parameters is so other Developers will instantly know
what the Size Limits are (via Intellisense) when passing in Data.
This is like having your documentation baked right into the Sproc's Signature.
Look, I get it, Implicit-Conversion during Variable Assignments is the culprit.
Still, there is no good reason to expend this amount of energy battling scenarios
where you are forced to work around this feature.
If you ask me, Sprocs and Functions should have the same engine-rules in place,
for Assigning Parameters, that are used when Populating Tables. Is this really too much to ask?
All these suggestions to use Larger Character-Limits
and then adding Validation for EACH Parameter in EVERY Sproc is ridiculous.
I know it's the only way to ensure Truncation is avoided, but really MSSQL?
I don't care if it's ANSI/ISO Standard or whatever, it's dumb!
When Values are too long - I want my code to break - every time.
It should be: Do not pass go, and fix your code.
You could have multiple truncation bugs festering for years and never catch them.
What happened to ensuring your Data-Integrity?
It's dangerous to assume your SQL Code will only ever be called after all Parameters are Validated.
I try to add the same Validation to both my Website and in the Sproc it calls,
and I still catch Errors in my Sproc that slipped past the website. It's a great sanity-check!
What if you want to re-use your Sproc for a WebSite/WebService and also have it called from other
Sprocs/Jobs/Deployment/Ad-Hoc Scripts (where there is no front-end to Validate Parameters)?
MSSQL Needs a "NO_TRUNC" Option to Enforce this on any Non-Max String Variable
(even those used as Parameters for Sprocs and Functions).
It could be Connection/Session-Scoped:
(like how the "TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" Option affects all Queries)
Or focused on a Single Variable:
(like how "NOLOCK" is a Table Hint for just 1 Table).
Or a Trace-Flag or Database Property you turn on to apply this to All Sproc/Function Parameters in the Database.
I'm not asking to upend decades of Legacy Code.
Just asking MS for the option to better manage our Databases.
I'm working on a legacy system, and I need to call a stored procedure to retrieve the data I need. The problem is, I don't have any idea as to what the output column format is. Short of going into the stored procedure and figuring out the output column format from the SQL, is there a way for me to see what the output column types are? I can run the stored procedure just fine, but the code is a mess, and I'd prefer to treat it as a black box if I could.
EDIT: I know that its not possible for me to determine this from the database metadata, since the procedure may return different results based upon what the input is. I guess I should rephrase my question: given the result set from a stored procedure, how can I determine the column types?
As you already know, you cannot determine that information from any database metadata (since there is none) - and unfortunately, you cannot determine that from the result set, either - at least not in any reliable, deterministic way.
When you call a stored procedure, all you get back is a bunch of columns and a bunch of rows. There's no inherent information available about the types of those columns. The best you can do is guess - if the data contains alphanumeric characters, it's a VARCHAR/string field. If it has only numeric digits, and possibly a decimal separator, it's likely to be an INT or DECIMAL (or MONEY or SMALLMONEY - can't really tell for sure). If it looks like a DATE and can be converted to a DATE, it's probably a DATE, DATETIME, DATETIME2 or something like that.
The only reliable way is to have some documentation on the output values that the stored procedure generates. Anything else is guesswork at best.
What will you do if the stored proc outputs different result sets depending on what is passed in? For example:
create procedure Test
@var int
as
if @var = 1
begin
select col1,col2 from table1
end
else if @var = 2
begin
select col4,col2 ,col5,col1 from table2
end
else
begin
select * from table3
end
There is a SET option, but it is being deprecated:
SET FMTONLY ON;
GO
exec YourProc
GO
SET FMTONLY OFF;
GO
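On SQL Server 2012 (Denali) and later, the documented replacement is sp_describe_first_result_set (and the sys.dm_exec_describe_first_result_set DMV), which returns the column names and types of the first result set from metadata, without actually running the procedure:
EXEC sp_describe_first_result_set @tsql = N'EXEC YourProc';
-- Note: it raises an error if the procedure can return result sets of different
-- shapes depending on its input, as in the example above.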
I'm using SQL Server 2005, and I would like to know how to access different result sets from within transact-sql. The following stored procedure returns two result sets, how do I access them from, for example, another stored procedure?
CREATE PROCEDURE getOrder (@orderId as numeric) AS
BEGIN
select order_address, order_number from order_table where order_id = @orderId
select item, number_of_items, cost from order_line where order_id = @orderId
END
I need to be able to iterate through both result sets individually.
EDIT: Just to clarify the question, I want to test the stored procedures. I have a set of stored procedures which are used from a VB.NET client, which return multiple result sets. These are not going to be changed to a table valued function, I can't in fact change the procedures at all. Changing the procedure is not an option.
The result sets returned by the procedures are not the same data types or number of columns.
The short answer is: you can't do it.
From T-SQL there is no way to access multiple results of a nested stored procedure call, without changing the stored procedure as others have suggested.
To be complete, if the procedure were returning a single result, you could insert it into a temp table or table variable with the following syntax:
INSERT INTO #Table (...columns...)
EXEC MySproc ...parameters...
You can use the same syntax for a procedure that returns multiple results, but be aware that every result set is fed into the INSERT, so they all have to match the target table's structure; result sets with different shapes (as in the question) will make the statement fail.
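For instance, here is what that single-result case might look like (a sketch; the procedure name and the column types are made up for illustration):
CREATE PROCEDURE getOrderHeader (@orderId as numeric) AS
BEGIN
select order_address, order_number from order_table where order_id = @orderId
END
GO
CREATE TABLE #OrderHeader (order_address varchar(200), order_number varchar(50)); -- assumed types
INSERT INTO #OrderHeader (order_address, order_number)
EXEC getOrderHeader @orderId = 1;
SELECT * FROM #OrderHeader;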
I was easily able to do this by creating a SQL2005 CLR stored procedure which contained an internal dataset.
You see, a new SqlDataAdapter will .Fill a multiple-result-set sproc into a multiple-table dataset by default. The data in these tables can in turn be inserted into #Temp tables in the calling sproc you wish to write. dataset.ReadXmlSchema will show you the schema of each result set.
Step 1: Begin writing the sproc which will read the data from the multi-result-set sproc
a. Create a separate table for each result set according to the schema.
CREATE PROCEDURE [dbo].[usp_SF_Read] AS
SET NOCOUNT ON;
CREATE TABLE #Table01 (Document_ID VARCHAR(100)
, Document_status_definition_uid INT
, Document_status_Code VARCHAR(100)
, Attachment_count INT
, PRIMARY KEY (Document_ID));
b. At this point you may need to declare a cursor to repetitively call the CLR sproc you will create here:
Step 2: Make the CLR Sproc
Partial Public Class StoredProcedures
<Microsoft.SqlServer.Server.SqlProcedure()> _
Public Shared Sub usp_SF_ReadSFIntoTables()
End Sub
End Class
a. Connect using New SqlConnection("context connection=true").
b. Set up a command object (cmd) to contain the multiple-result-set sproc.
c. Get all the data using the following:
Dim dataset As DataSet = New DataSet
With New SqlDataAdapter(cmd)
.Fill(dataset) ' get all the data.
End With
'you can use dataset.ReadXmlSchema at this point...
d. Iterate over each table and insert every row into the appropriate temp table (which you created in step one above).
Final note:
In my experience, you may wish to enforce some relationships between your tables so you know which batch each record came from.
That's all there was to it!
~ Shaun, Near Seattle
There is a kludge that you can do as well. Add an optional parameter @N int to your sproc. Default the value of @N to -1. If the value of @N is -1, then do every one of your selects. Otherwise, do the Nth select and only the Nth select.
For example,
if (@N = -1 or @N = 0)
select ...
if (@N = -1 or @N = 1)
select ...
The callers of your sproc who do not specify @N will get a result set with more than one table. If you need to extract one or more of these tables from another sproc, simply call your sproc specifying a value for @N. You'll have to call the sproc once for each table you wish to extract. Inefficient if you need more than one table from the result set, but it does work in pure TSQL.
Note that there's an extra, undocumented limitation to the INSERT INTO ... EXEC statement: it cannot be nested. That is, the stored proc that the EXEC calls (or any that it calls in turn) cannot itself do an INSERT INTO ... EXEC. It appears that there's a single scratchpad per process that accumulates the result, and if they're nested you'll get an error when the caller opens this up, and then the callee tries to open it again.
Matthieu, you'd need to maintain separate temp tables for each "type" of result. Also, if you're executing the same one multiple times, you might need to add an extra column to that result to indicate which call it resulted from.
Sadly it is impossible to do this. The problem is, of course, that there is no SQL Syntax to allow it. It happens 'beneath the hood' of course, but you can't get at these other results in TSQL, only from the application via ODBC or whatever.
There is a way round it, as with most things. The trick is to use OLE Automation in T-SQL to create an ADODB object which opens each resultset in turn and writes the results to the tables you nominate (or does whatever you want with the resultsets). You can also do it in DMO if you enjoy pain.
There are two ways to do this easily: either stick the results in a temp table and then reference the temp table from your sproc, or put the results into an XML variable that is used as an OUTPUT variable.
There are, however, pros and cons to both of these options. With a temporary table, you'll need to add code to the script that creates the calling procedure to create the temporary table before modifying the procedure. Also, you should clean up the temp table at the end of the procedure.
With the XML, it can be memory intensive and slow.
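A rough sketch of the XML OUTPUT variant (a hypothetical wrapper, since the asker says the original procedures cannot be changed; it reuses the column names from the question's getOrder):
CREATE PROCEDURE getOrderXml (@orderId as numeric, @result xml OUTPUT) AS
BEGIN
    SET @result =
    (
        SELECT
            (SELECT order_address, order_number
             FROM order_table WHERE order_id = @orderId
             FOR XML PATH('header'), TYPE),
            (SELECT item, number_of_items, cost
             FROM order_line WHERE order_id = @orderId
             FOR XML PATH('line'), TYPE)
        FOR XML PATH('order'), TYPE
    );
END
GO
-- The caller shreds whichever part it needs:
DECLARE @x xml;
EXEC getOrderXml @orderId = 1, @result = @x OUTPUT;
SELECT n.value('(order_address)[1]', 'varchar(200)') AS order_address -- assumed type
FROM @x.nodes('/order/header') AS h(n);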
You could select them into temp tables or write table-valued functions to return result sets. Are you asking how to iterate through the result sets?