SQL Server 2005 Determine Stored Procedure Output Type

I'm working on a legacy system, and I need to call a stored procedure to retrieve the data I need. The problem is, I don't have any idea as to what the output column format is. Short of going into the stored procedure and figuring out the output column format from the SQL, is there a way for me to see what the output column types are? I can run the stored procedure just fine, but the code is a mess, and I'd prefer to treat it as a black box if I could.
EDIT: I know that it's not possible for me to determine this from the database metadata, since the procedure may return different results based upon what the input is. I guess I should rephrase my question: given the result set from a stored procedure, how can I determine the column types?

As you already know, you cannot determine that information from any database metadata (since there is none) - and unfortunately, you cannot determine that from the result set, either - at least not in any reliable, deterministic way.
When you call a stored procedure, all you get back is a bunch of columns and a bunch of rows. There's no inherent information available about the types of those columns. The best you can do is guess: if the data contains alphanumeric characters, it's a VARCHAR/string field. If it has only numeric digits, and possibly a decimal separator, it's likely to be an INT or DECIMAL (or MONEY or SMALLMONEY, you can't really tell for sure). If it looks like a DATE and can be converted to a DATE, it's probably a DATE, DATETIME, DATETIME2 or something like that.
The only reliable way is to have some documentation on the output values that the stored procedure generates. Anything else is guesswork at best.

What will you do if the stored proc outputs different result sets depending on what is passed in? For example:
create procedure Test
    @var int
as
if @var = 1
begin
    select col1, col2 from table1
end
else if @var = 2
begin
    select col4, col2, col5, col1 from table2
end
else
begin
    select * from table3
end
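Calling it with different arguments then returns result sets with completely different shapes:
EXEC Test @var = 1  -- two columns from table1
EXEC Test @var = 2  -- four columns from table2
EXEC Test @var = 3  -- whatever columns table3 happens to have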
There is a SET option, but it is deprecated:
SET FMTONLY ON;
GO
exec YourProc
GO
SET FMTONLY OFF;
GO
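On 2005 you can also let the engine infer the types for you by materializing the proc's output with SELECT ... INTO via OPENROWSET, then inspecting the resulting table. A rough sketch, assuming 'Ad Hoc Distributed Queries' is enabled and with placeholder server/proc names:
SELECT * INTO #shape
FROM OPENROWSET('SQLNCLI',
                'Server=(local);Trusted_Connection=yes;',
                'EXEC YourDatabase.dbo.YourProc');
-- the inferred column names and types are now regular metadata:
EXEC tempdb..sp_help '#shape';
Note this only describes the result set produced for that particular execution; as shown above, another input may produce a different shape.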

Related

How to suppress record sets returned by SELECT statements in a Stored Procedure

I'm writing a stored procedure which checks for the existence of various tables in various databases, as well as the permissions that the user executing the stored procedure has on those tables. The stored procedure itself resides within a user database (i.e. it's not in the Master db).
To perform my checks, my stored procedure contains lots of SELECT statements. Each of those obviously returns a record set. What I would like is to somehow suppress these record sets so that they are not returned by the stored procedure, and instead return my own, single record set which is just a collection of messages relating to each check the stored procedure performs.
I think the obvious answer is to use a table-valued function instead, but I've not been able to recreate my tests successfully in a Function as they appear in the stored procedure. For starters, I'm having to use temporary tables (not possible in a function) and dynamic SQL (not very compatible with table parameters).
I think I've basically got two choices:
Rewrite my stored procedure as a function and figure out how to do the checks a different way.
Continue using my stored procedure and use an OUTPUT parameter to return my result messages, probably as a delimited string, and in the associated ASP.NET application just ignore all the record sets the stored procedure returns.
Neither of these solutions is very satisfactory. Before I spend any more time pursuing either one, is there a way to discard the record sets produced by the SELECT statements in a stored procedure and explicitly define what record I want it to return?
Hmm, I can only speculate here...
Are you using something like
SELECT ...;
IF @@ROWCOUNT > 0
BEGIN
    ...
END;
?
Then you can rewrite it using something like
IF EXISTS (SELECT ...)
BEGIN
...
END;
or
DECLARE @variable integer;
SELECT @variable = count(*) ...;
IF @variable > 0
BEGIN
    ...
END;
In general, direct the results of your queries into a target (a variable, a table, an expression, ...) so they are not returned to the caller, and then execute one final query for your desired result at the end.
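A minimal sketch of that pattern, with made-up check and table names for illustration:
DECLARE @messages TABLE (msg varchar(200));
-- each check records a message instead of returning a result set
IF EXISTS (SELECT 1 FROM sys.tables WHERE name = 'SomeTable')
    INSERT INTO @messages VALUES ('SomeTable exists');
ELSE
    INSERT INTO @messages VALUES ('SomeTable is missing');
-- ... more checks here ...
-- the single result set the caller actually sees:
SELECT msg FROM @messages;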
In my opinion, there is almost no reason to have stored procedures produce record sets. That is what stored functions are for. On occasion, it is needed, because of the use of dynamic SQL or other stored procedures, but not as a general practice. Much, much too often, I see stored procedures being used where stored functions or views are more appropriate.
What should you do? Every SELECT statement in the stored procedure should be doing one of the following:
Setting (local) variables.
Saving the results in a temporary table or table variable.
The logic for the stored procedure should be working on the local variables. The results should be returned using OUTPUT parameters.
If you need to return rows in a tabular format, you can do that using tables explicitly (such as a global temporary table or real table). Or, you can have one SELECT at the end that does return a single result set. However, if you need this and can phrase the stored procedure as a function, that is better in my opinion.
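For instance, a hedged sketch of the OUTPUT-parameter style (the procedure and its logic are invented for illustration):
CREATE PROCEDURE CheckTableRowCount
    @tableName sysname,
    @rowCount int OUTPUT
AS
BEGIN
    -- the work lands in a local variable; nothing is SELECTed back to the caller
    SELECT @rowCount = SUM(p.rows)
    FROM sys.partitions p
    JOIN sys.tables t ON t.object_id = p.object_id
    WHERE t.name = @tableName AND p.index_id IN (0, 1);
END
GO
DECLARE @n int;
EXEC CheckTableRowCount 'Customer', @n OUTPUT;  -- no result set, just the value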

Selecting data from a different schema within a stored procedure

Consider this:
CREATE PROCEDURE [dbo].[setIdentifier](@oldIdentifierName as varchar(50), @newIdentifierName as varchar(50))
AS
BEGIN
    DECLARE @old_id as int;
    DECLARE @new_id as int;
    SET @old_id = (SELECT value FROM Configuration WHERE id = @oldIdentifierName);
    SET @new_id = (SELECT value FROM Configuration WHERE id = @newIdentifierName);
    IF @old_id IS NOT NULL AND @new_id IS NOT NULL
    BEGIN
        UPDATE Customer
        SET type = @new_id
        WHERE type = @old_id;
    END;
END
[...]
EXECUTE dbo.setIdentifier '1', '2';
What this does is create a stored procedure that accepts two parameters which it then uses to update a Customer table.
The problem is that the entire script above runs within a schema other than "dbo". Let's just assume the schema is "company1". And when the stored procedure is called, I get an error from the SELECT statement, which says that the Configuration table cannot be found. I'm guessing this is because MS SQL by default looks for tables within the same schema as the location of the stored procedure, and not within the calling context.
My question is this:
Is there some option or parameter or switch of some kind that will tell MS SQL to look for tables in the "caller's default schema" and not within the schema that the procedure itself is stored in?
If not, what would you recommend? I don't really want to prefix the tables with the schema name, because that would be rather inflexible. So I'm thinking about using dynamic SQL (and the schema_name() function, which returns the correct value even within the procedure), but I am just not experienced enough with MS SQL to construct the proper syntax.
It would be a tad more efficient to explicitly specify the schema name. And generally speaking, schemas are mainly used to divide a database into logical areas; I would not expect tables to hop between schemas often.
Regarding your question, you might want to have a look at the 'execute as' documentation on MSDN, since it allows you to explicitly control your execution context.
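For the dynamic SQL route the asker mentions, here is a hedged sketch of what the body of setIdentifier could do instead, relying on schema_name() resolving to the caller's default schema as described in the question:
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT value FROM ' + QUOTENAME(SCHEMA_NAME()) +
           N'.Configuration WHERE id = @id';
EXEC sp_executesql @sql, N'@id varchar(50)', @id = @oldIdentifierName;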
I ended up passing the schema name to my script as a property on the command line for the "sqlcmd" command. Like this:
C:\> sqlcmd -v SCHEMANAME=myschema -i mysqlfile
In the SQL script I can then access this variable like this:
SELECT * FROM $(SCHEMANAME).myTable WHERE ... etc.
Not quite as flexible as dynamic sql, but "good enough" as it were.
Thanks all for taking time to respond.

Real time application/benefit of Denali's With Result Set

What are the real-world uses of Denali's WITH RESULT SETS as far as SQL stored procs are concerned, apart from renaming the column names and changing the data types at runtime?
And what is the benefit of changing the data types at runtime with WITH RESULT SETS?
e.g.
ALTER PROCEDURE test_Proc
AS
BEGIN
    SELECT * FROM tbl_Test
END
GO
EXEC test_Proc
WITH RESULT SETS
(
    ( Id int,
      EmpName varchar(50),
      PNo varchar(50)
    )
);
Even if the column data types have been changed, what will we do with that?
This article gives some idea of its benefit in SSIS, but I am more interested in the perspective of a SQL Server stored proc talking to a front-end application (e.g. C#) and the like.
Well, for one, say your application is calling sp_who2, and it is storing SPID in an int32. sp_who2 returns SPID as a char, requiring you to perform special handling in all of your apps to convert the output to an int32. If you create a wrapper procedure, you can do this in one place, and without having to dump the results into a temp table first. One more curious case with sp_who2 is that it returns two identical SPID columns - with WITH RESULT SETS you can rename one of them (say, to redundant_SPID) so that your apps never see multiple columns with the same name.
Another use case is say you are changing a data type from int64 to int32 or int32 to varchar, but you can't change all of your apps at once. You can change the "modern" apps to use the new data type while leaving the other "not changeable right now" apps to use the old data type. This means you can split out the deployment and testing of your apps one by one instead of making a wholesale data type change across all of the apps.
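As a concrete, hedged illustration of that migration scenario (the proc and column names are hypothetical): suppose dbo.GetOrders now returns OrderId as bigint, but a legacy app still expects int. The legacy code path can call it as:
EXEC dbo.GetOrders
WITH RESULT SETS
(
    ( OrderId int,              -- converted server-side for the app still reading int32
      CustomerName varchar(50)
    )
);
The conversion happens on the server, so the old app keeps working unchanged while newer apps call the proc directly and see the bigint column.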

SQL Server silently truncates varchar's in stored procedures

According to this forum discussion, SQL Server (I'm using 2005, but I gather this also applies to 2000 and 2008) silently truncates any varchars you specify as stored procedure parameters to the declared length of the parameter, even if inserting that string directly using an INSERT would actually cause an error. E.g. if I create this table:
CREATE TABLE testTable(
[testStringField] [nvarchar](5) NOT NULL
)
then when I execute the following:
INSERT INTO testTable(testStringField) VALUES(N'string which is too long')
I get an error:
String or binary data would be truncated.
The statement has been terminated.
Great. Data integrity preserved, and the caller knows about it. Now let's define a stored procedure to insert that:
CREATE PROCEDURE spTestTableInsert
    @testStringField [nvarchar](5)
AS
    INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO
and execute it:
EXEC spTestTableInsert @testStringField = N'string which is too long'
No errors, 1 row affected. A row is inserted into the table, with testStringField as 'strin'. SQL Server silently truncated the stored procedure's varchar parameter.
Now, this behaviour might be convenient at times but I gather there is NO WAY to turn it off. This is extremely annoying, as I want the thing to error if I pass too long a string to the stored procedure. There seem to be 2 ways to deal with this.
First, declare the stored proc's @testStringField parameter as size 6, and check whether its length is over 5. This seems like a bit of a hack and involves irritating amounts of boilerplate code.
Second, just declare ALL stored procedure varchar parameters to be varchar(max), and then let the INSERT statement within the stored procedure fail.
The latter seems to work fine, so my question is: is it a good idea to use varchar(max) ALWAYS for strings in SQL Server stored procedures, if I actually want the stored proc to fail when too long a string is passed? Could it even be best practice? The silent truncation that can't be disabled seems stupid to me.
It just is.
I've never noticed a problem though because one of my checks would be to ensure my parameters match my table column lengths. In the client code too. Personally, I'd expect SQL to never see data that is too long. If I did see truncated data, it'd be bleeding obvious what caused it.
If you do feel the need for varchar(max), beware of a massive performance issue caused by datatype precedence. varchar(max) has higher precedence than varchar(n) (longest is highest), so in this type of query you'll get a scan, not a seek, and every varchar(100) value is CAST to varchar(max):
UPDATE ... WHERE varchar100column = @varcharmaxvalue
Edit:
There is an open Microsoft Connect item regarding this issue.
And it's probably worthy of inclusion in Erland Sommarskog's Strict settings (and matching Connect item).
Edit 2, after Martin's comment:
DECLARE @sql VARCHAR(MAX), @nsql NVARCHAR(MAX);
SELECT @sql = 'B', @nsql = 'B';
SELECT
    LEN(@sql),
    LEN(@nsql),
    DATALENGTH(@sql),
    DATALENGTH(@nsql);
DECLARE @t table(c varchar(8000));
INSERT INTO @t values (replicate('A', 7500));
SELECT LEN(c) from @t;
SELECT
    LEN(@sql + c),
    LEN(@nsql + c),
    DATALENGTH(@sql + c),
    DATALENGTH(@nsql + c)
FROM @t;
Thanks, as always, to StackOverflow for eliciting this kind of in-depth discussion. I have recently been scouring through my stored procedures to make them more robust using a standard approach to transactions and try/catch blocks. I disagree with Joe Stefanelli that "My suggestion would be to make the application side responsible", and fully agree with Jez: "Having SQL Server verify the string length would be much preferable".
The whole point for me of using stored procedures is that they are written in a language native to the database and should act as a last line of defence. On the application side the difference between 255 and 256 is just a meaningless number, but within the database environment a field with a maximum size of 255 will simply not accept 256 characters. The application validation mechanisms should reflect the backend db as best they can, but maintenance is hard, so I want the database to give me good feedback if the application mistakenly allows unsuitable data. That's why I'm using a database instead of a bunch of text files with CSV or JSON or whatever.
I was puzzled why one of my SPs threw the 8152 error and another silently truncated. I finally twigged: The SP which threw the 8152 error had a parameter which allowed one character more than the related table column. The table column was set to nvarchar(255) but the parameter was nvarchar(256). So, wouldn't my "mistake" address gbn's concern: "massive performance issue"? Instead of using max, perhaps we could consistently set the table column size to, say, 255 and the SP parameter to just one character longer, say 256. This solves the silent truncation problem and doesn't incur any performance penalty.
Presumably there is some other disadvantage that I haven't thought of, but it seems a good compromise to me.
Update:
I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.
Update 2:
Please see Pyitoechito's answer on this page.
The same behavior can be seen here:
declare @testStringField [nvarchar](5)
set @testStringField = N'string which is too long'
select @testStringField
My suggestion would be to make the application side responsible for validating the input before calling the stored procedure.
This is probably occurring because the 256th character in the string is white-space. VARCHARs will truncate trailing white-space on insertion and just generate a warning. So your stored procedure is silently truncating your strings to 256 characters, and your insertion is truncating the trailing white-space (with a warning). It will produce an error when said character is not white-space.
Perhaps a solution would be to make the stored procedure's VARCHAR a suitable length to catch a non-white-space character. VARCHAR(512) would probably be safe enough.
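A hedged repro of that explanation, as I understand the behavior (losing only trailing spaces succeeds, losing anything else raises error 8152):
CREATE TABLE #trunc (c varchar(5));
INSERT INTO #trunc VALUES ('12345   ');  -- only trailing spaces are lost: succeeds
INSERT INTO #trunc VALUES ('123456');    -- a non-space character is lost: error 8152
DROP TABLE #trunc;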
One solution would be to:
Change all incoming parameters to varchar(max).
Declare a private variable inside the SP for each parameter, with the correct data length (simply copy and paste the parameter list and append "Int" to each name).
Declare a table variable with column names matching the variable names.
Insert into the table a row where each parameter goes into the column with the same name.
Select from the table into the internal variables.
This way your modifications to the existing code are going to be very minimal, as in the sample below.
This is the original code:
create procedure spTest
(
    @p1 varchar(2),
    @p2 varchar(3)
)
This is the new code:
create procedure spTest
(
    @p1 varchar(max),
    @p2 varchar(max)
)
as
declare @p1Int varchar(2), @p2Int varchar(3)
declare @test table (p1 varchar(2), p2 varchar(3))
insert into @test (p1, p2) values (@p1, @p2)
select @p1Int = p1, @p2Int = p2 from @test
Note that if the length of an incoming parameter is greater than the limit, SQL Server will throw an error instead of silently chopping off the string, because the INSERT into the table variable enforces the declared column lengths.
You could always add an IF statement to your SPs that checks the parameters' lengths and throws an error if they're greater than the specified length; see the sketch below. This is rather time consuming, though, and would be a pain to update if you change the data sizes.
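Here is roughly what that looks like for the spTestTableInsert example above, declaring the parameter one character longer than the column so the overflow is detectable (the proc name and message are mine):
CREATE PROCEDURE spTestTableInsertChecked
    @testStringField nvarchar(6)  -- one longer than the column's nvarchar(5)
AS
BEGIN
    IF LEN(@testStringField) > 5  -- note: LEN ignores trailing spaces
    BEGIN
        RAISERROR('testStringField must be 5 characters or fewer.', 16, 1);
        RETURN;
    END;
    INSERT INTO testTable (testStringField) VALUES (@testStringField);
END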
This isn't the answer that'll solve your problem today, but it includes a feature suggestion for MSSQL to consider adding, one that would resolve this issue. It is important to call this out as a shortcoming of MSSQL, so we may help them resolve it by raising awareness of it.
Here's the formal suggestion, if you'd like to vote on it:
https://feedback.azure.com/forums/908035-sql-server/suggestions/38394241-request-for-new-rule-string-truncation-error-for
I share your frustration. The whole point of setting a character size on parameters is so other developers will instantly know what the size limits are (via IntelliSense) when passing in data. It's like having your documentation baked right into the sproc's signature.
Look, I get it: implicit conversion during variable assignment is the culprit. Still, there is no good reason to expend this amount of energy battling scenarios where you are forced to work around this feature. If you ask me, sprocs and functions should have the same engine rules in place for assigning parameters that are used when populating tables. Is this really too much to ask?
All these suggestions to use larger character limits and then add validation for EACH parameter in EVERY sproc are ridiculous. I know it's the only way to ensure truncation is avoided, but really, MSSQL? I don't care if it's the ANSI/ISO standard or whatever, it's dumb! When values are too long, I want my code to break, every time. It should be: do not pass go, and fix your code. You could have multiple truncation bugs festering for years and never catch them. What happened to ensuring your data integrity?
It's dangerous to assume your SQL code will only ever be called after all parameters are validated. I try to add the same validation to both my website and the sproc it calls, and I still catch errors in my sproc that slipped past the website. It's a great sanity check! What if you want to re-use your sproc for a website/web service and also have it called from other sprocs/jobs/deployment/ad-hoc scripts (where there is no front end to validate parameters)?
MSSQL needs a "NO_TRUNC" option to enforce this on any non-max string variable (even those used as parameters for sprocs and functions). It could be connection/session-scoped (like how the TRANSACTION ISOLATION LEVEL READ UNCOMMITTED option affects all queries), or focused on a single variable (like how NOLOCK is a table hint for just one table), or a trace flag or database property you turn on to apply it to all sproc/function parameters in the database.
I'm not asking to upend decades of legacy code, just asking MS for the option to better manage our databases.

Updating records from an XML

I need to provide 4 MySQL stored procedures for each table in a database. They are for get, update, insert and delete.
"Get", "delete" and "insert" are straightforward. The problem is "update", because I don't know which parameters will be set and which ones not. Some parameters could be set to NULL, and other simply won't change so they won't be provided.
As I'm already working with XML, after several searches on Google I've found that it is possible to use a function called UpdateXML, but the examples are too complex and some articles are from 2007. So I don't know if there is a better technique at this moment, or something easier.
Any comment, documentation, link, article or whatever of something that you've used and you're happy with, will be well appreciated :D
Cheers.
Usually when you have data from a row in your database in the front-end, you should have all of the values that you might use to update that row in the database. You should pass all of those values into your update, regardless of whether or not they have actually changed. Otherwise, your database doesn't really know whether it's getting a NULL value for a column because that's what it's supposed to be or because you just didn't pass the real value along.
If you are going to have areas of the application where you don't need certain columns from a table, then it's possible to set up additional stored procedures that do not use those columns. It's often easier though to just retrieve all of the columns from the database when you fill your front-end object. The overhead of the extra columns is usually minimal and worth the saved maintenance of multiple update stored procedures.
Here's an example. It's MS SQL Server syntax, so you may have to alter it slightly, but hopefully it illustrates the idea:
CREATE PROCEDURE Update_My_Table
    @my_table_id INT,
    @name VARCHAR(40),
    @description VARCHAR(500),
    @some_other_col INT
AS
BEGIN
    UPDATE
        My_Table
    SET
        name = @name,
        description = @description,
        some_other_col = @some_other_col
    WHERE
        my_table_id = @my_table_id
END
CREATE PROCEDURE Update_My_Table_Limited
    @my_table_id INT,
    @name VARCHAR(40),
    @description VARCHAR(500)
AS
BEGIN
    UPDATE
        My_Table
    SET
        name = @name,
        description = @description
    WHERE
        my_table_id = @my_table_id
END
As you can see, just eliminate those columns that you're not updating from the UPDATE statement. Just don't go overboard and try to have a stored procedure for every possible combination of columns that you might want to update. It's much easier to just get the extra columns from the DB when you select from the table in the first place. You'll end up passing the same value back and your server will wind up updating the column with the same exact value, but that's not a big deal. You can code your front end to make sure that at least one column has changed before it will actually try to update anything in the database.