How can I get the stored procedure returned columns without knowing the input parameters? - sql

I want to execute a stored procedure virtually and get the returned columns. I use fmtonly like below:
set fmtonly on
exec spName null
set fmtonly off
but using fmtonly causes all the lines of the procedure to actually run, and the result is an error.
Is there any way to do this?

You need to use sp_describe_first_result_set which is new to SQL Server 2012. Note that this requires you to provide the input parameters (at least the types).
In T-SQL development one is expected to know what procedures one is calling and what result set to expect. Before SQL Server 2012 there was very little support for dynamic, runtime discovery of a procedure's output and required parameters. This new procedure, along with others like sp_describe_undeclared_parameters, can be used to create tools that need to explore the available programming API surface. The very fact that these were added in 2012 should indicate that the equivalent cannot be properly handled pre-2012. Solutions like loopback linked servers have many problems, primarily because they actually execute the code, with potentially disastrous effects.
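For example, a call might look like this (a sketch; the single int parameter for spName is an assumption for illustration, so substitute the real signature):

-- Describe the columns spName would return, without executing it
EXEC sp_describe_first_result_set
    @tsql   = N'EXEC dbo.spName @p1',
    @params = N'@p1 int';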

Related

check if stored procedure is valid

I have a stored procedure (sproc A) which is syntactically correct. So when I hit "run" on its create or alter statement, it is saved into the database.
However, sproc A has a call to another stored procedure (sproc B). It does not provide enough parameters for sproc B, so I don't see how it's a valid stored procedure.
I want to detect any stored procedures in my database which aren't passing enough parameters to their own stored procedures.
Thank you,
Fidel
Unfortunately, there is no mechanism in SQL Server to test dependencies, parameters, etc.
You have to search and check (for example, with the rough text search sketched below), or provide defaults for the parameters; otherwise you'll only pick it up by testing.
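A crude text search over module definitions can at least surface the call sites to check by hand (a sketch; 'sprocB' is a placeholder for the called procedure's name):

-- Find procedures whose definition mentions the callee at all,
-- then inspect each hit manually for missing arguments
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS proc_name
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'P'
  AND m.definition LIKE '%sprocB%';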
A good autocomplete tool like Red Gate SQL Prompt can list parameters and types for you.
Note:
It's a long-standing problem, and there is even a request to MS about it:
SP parameter checking is one of the OPTION STRICT suggestions

multiple select statements in single ODBCdataAdapter

I am trying to use an OdbcDataAdapter in C# to run a query which needs to select some data into a temporary table as a preliminary step. However, this initial select statement is causing the query to terminate, so the data gets put into the temp table but I can't run the second query to get it out. I have determined that the problem is the presence of two select statements in a single data adapter query. That is to say, the following code only runs the first select:
select 1
select whatever from wherever
When I run my query directly through SQL Server Management Studio it works fine. Has anyone encountered this sort of issue before? I have tried the exact same query previously on similar databases using the same C# code (only the connection string is different) and had no problems.
Before you ask, the temp table is helpful because otherwise I would be running a whole lot of inner select statements which would bog down the database.
Assuming you're executing a Command whose CommandType is CommandText, you need a ; to separate the statements.
select 1;
select whatever from wherever;
You might also want to consider using a stored procedure if possible. You should also use the SQL client objects instead of the ODBC client; that way you can take advantage of additional methods that aren't available otherwise, and you're supposed to get better performance as well.
If you need to support multiple databases you can just use the DataAdapter class and use a factory to create the concrete types. This gives you the benefits of using the native drivers without being tied to a specific backend. ORMs that support multiple back ends typically do this. The Enterprise Library Data Access Application Block, while not an ORM, does this as well.
Unfortunately I do not have write access to the DB, as my organization has been contracted just to extract information to a data warehouse. The program is a generalized one for use on multiple systems, which is why we went with ODBC. I suppose it would not be terrible to rewrite it using SQL Management Objects.
An ODBC connection expects a single select statement and a single result set back from SQL Server.
If any such multi-statement functionality is required, a small hack does the job:
use the statement
SET NOCOUNT ON
at the top of your batch.
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement) is not returned.
When SET NOCOUNT is OFF, the count is returned. It is used with any SELECT, INSERT, UPDATE, DELETE statement.
The setting of SET NOCOUNT is set at execute or run time and not at parse time.
SET NOCOUNT ON mainly improves stored procedure (SP) performance.
Syntax:
SET NOCOUNT { ON | OFF }
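Putting it together for the original problem, the batch might look like this (a sketch; whatever/wherever are the question's placeholders):

SET NOCOUNT ON;  -- suppress the 'rows affected' counts that get in the way of the result set

-- preliminary step: stage the data in a temp table
SELECT whatever
INTO #staging
FROM wherever;

-- second statement: the result set the data adapter actually fills from
SELECT whatever FROM #staging;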

Creating entities from stored procedures which have dynamic sql

I have a stored procedure which uses a couple of tables and creates a cross-tab result set. For creating the cross-tab result set I am using CASE statements which are dynamically generated on basis of records in a table.
Is it possible to generate an entity from this SP using the ADO.NET Entity Framework? Because each time I try Get Column Information for the particular SP, it says "The selected stored procedure returns no columns."
Any help would be appreciated.
A member of my team recently encountered something like this, where a stored procedure was generating all kinds of dynamic SQL and returning calculated columns so the data context didn't know what to make of it. I haven't tried it myself yet, but this was the solution he claimed worked:
The solution is simply to put the line "SET FMTONLY OFF;" into the proc. This allows the data context to actually generate the return class. This works in this case only because the proc is doing nothing but querying data.
Full details here:
http://tonesdotnetblog.wordpress.com/2010/06/02/solution-my-generated-linq-to-sql-stored-procedure-returns-an-int-when-it-should-return-a-table/
You only need the "SET FMTONLY OFF" in the proc long enough to generate the class. You can then comment it out.
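In other words, while you let the designer generate the class, the proc would temporarily look something like this (a sketch; the name is hypothetical and the trivial dynamic query stands in for the generated CASE statements):

CREATE PROCEDURE dbo.GetCrossTab  -- hypothetical name
AS
BEGIN
    SET FMTONLY OFF;  -- only while generating the entity class; comment out afterwards

    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT 1 AS Col1, 2 AS Col2';  -- stands in for the dynamically built cross-tab query

    EXEC sp_executesql @sql;
END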

Stored procedure SQL SELECT statement issue

I am using SQL Server 2008 Enterprise on Windows Server 2008 Enterprise. In a stored procedure, we can execute a SELECT statement directly. And it could also be executed in this new way, I am wondering which method is better, and why?
New method,
declare @teststatement varchar(500)
set @teststatement = 'SELECT * from sometable'
print @teststatement
exec (@teststatement)
Traditional method,
SELECT * from sometable
regards,
George
FYI: it’s not a new method, it is known as Dynamic SQL.
Dynamic SQL is preferred when we need to set or concatenate certain values into SQL statements.
Traditional (static) SQL statements are recommended because stored procedures are compiled on their first run ("Stored Procedures are Compiled on First Run"), and the execution plans for their statements are created at compile time.
Dynamic SQL is skipped when those execution plans are created, because at compile time it is just a string (VARCHAR or NVARCHAR, as declared).
Refer to the following articles for more details about dynamic queries and stored procs:
Introduction to Dynamic SQL Part 1
Introduction to Dynamic SQL Part 2
Everything you wanted to know about Stored Procedures
The traditional method is safer, because the query is parsed when you save it. The query in the 'exec' method is not parsed and can contain errors.
The "new" way, as mentioned, has nothing to do with SQL 2008. EXEC has been available for quite some time. It's also - in most cases - a Very Bad Idea.
You lose parameterization - meaning you are now vulnerable to SQL injection. It's ugly and error-prone. It's less efficient. And it creates a new execution scope - meaning it can't share variables, temp tables, etc. - from its calling stored proc.
sp_executesql is another (and preferred) method of executing dynamic SQL. It's what your client apps use, and it supports parameters - which fixes the most glaring problem of EXEC. However, it too has very limited use cases within a stored proc. About the only redeeming use is when you need a dynamic table or column name. T-SQL does not support a variable for that - so you need to use sp_executesql. The number of times you need or should be doing that is very low.
Bottom line - you'd be best off forgetting you ever heard of it.
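For completeness, the parameterized pattern described above might look like this (a sketch; sometable is the question's placeholder and the id column is an assumption):

-- the value travels as a real parameter; the object name is sanitized with QUOTENAME
DECLARE @tableName sysname;
SET @tableName = N'sometable';

DECLARE @sql nvarchar(max);
SET @sql = N'SELECT * FROM ' + QUOTENAME(@tableName) + N' WHERE id = @id';

EXEC sp_executesql @sql, N'@id int', @id = 42;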

Why does the SqlServer optimizer get so confused with parameters?

I know this has something to do with parameter sniffing, but I'm just perplexed at how something like the following example is even possible with a piece of technology that does so many complex things well.
Many of us have run into stored procedures that intermittently run several orders of magnitude slower than usual, and then if you copy out the SQL from the procedure and use the same parameter values in a separate query window, it runs as fast as usual.
I just fixed a procedure like that by converting this:
alter procedure p_MyProc
(
@param1 int
) as -- do a complex query with @param1
to this:
alter procedure p_MyProc
(
@param1 int
)
as
declare @param1Copy int;
set @param1Copy = @param1;
-- Do the query using @param1Copy
It went from running in over a minute back down to under one second, like it usually runs. This behavior seems totally random. For 9 out of 10 @param1 inputs, the query is fast, regardless of how much data it ends up needing to crunch, or how big the result set is. But for that 1 out of 10, it just gets lost. And the fix is to replace an int with the same int in the query?
It makes no sense.
[Edit]
@gbn linked to this question, which details a similar problem:
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
I hesitate to cry "Bug!" because that's so often a cop-out, but this really does seem like a bug to me. When I run the two versions of my stored procedure with the same input, I see identical query plans. The only difference is that the original takes more than a minute to run, and the version with the goofy parameter copying runs instantly.
The 1 in 10 gives the wrong plan, which is then cached.
RECOMPILE adds overhead; masking allows each parameter to be evaluated on its own merits (put very simply).
By "wrong plan": what if the 1 in 10 generates a scan on index 1 but the other 9 produce a seek on index 2? E.g., what if the 1 in 10 matches, say, 50% of the rows?
Edit: other questions
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
Stored Procedure failing on a specific user
Edit 2:
Recompile does not work because the parameters are sniffed at compile time.
From other links (pasted in):
This article explains...
...parameter values are sniffed during compilation or recompilation...
Finally (edit 3):
Parameter sniffing was probably a good idea at the time and probably works well most of the time. We use masking across the board for any parameter that will end up in a WHERE clause.
We don't strictly need to, because we know that only a few procedures (more complex ones, e.g. reports, or ones with many parameters) could cause issues, but we use it for consistency.
And there is the fact that it will come back and bite us when the users complain and we should have used masking...
It's probably caused by the fact that SQL Server compiles stored procedures and caches execution plans for them, and the cached execution plan is probably unsuitable for this new set of parameters. You can try the WITH RECOMPILE option to see if that's the cause.
EXECUTE MyProcedure [parameters] WITH RECOMPILE
WITH RECOMPILE option will force SQL Server to ignore the cached plan.
I have had this problem repeatedly on moving my code from a test server to production - on two different builds of SQL Server 2005. I think there are some big problems with the parameter sniffing in some builds of SQL Server 2005. I never had this problem on the dev server, or on two local Developer Edition boxes. I've never seen it be such a big problem on SQL Server 2000 or any version going back to 6.5 either.
In the cases where I found it, the only workaround was to use parameter masking, and I'm still hoping the DBAs will patch the production server up to SP3 so it will maybe go away. Things which did not work:
using the WITH RECOMPILE hint on EXEC or in the SP itself.
dropping and recreating the SP
using sp_recompile
Note that in the case I was working on, the data was not changing since an earlier invocation - I had simply scripted the code onto the production box which already had data loaded. All the invocations came with no changes to the data since before the SPs existed.
Oh, and if SQL Server can't handle this without masking, they need to add a parameter modifier NOSNIFF or something. What happens if you mask all your parameters, so you have @Something_parm and @Something_var, and someone changes the code to use the wrong one and all of a sudden you have a sniffing problem again? Plus you are polluting the namespace within the SP. All these SPs I am "fixing" drive me nuts because I know they are going to be a maintenance nightmare for the less experienced staff I will be handing this project off to one day.
Could you check in SQL Profiler how many reads there are and what the execution time is when it is quick and when it is slow? It could be related to the number of rows fetched depending on the parameter value. It doesn't sound like a plan cache issue.
I know this is a 2 year old thread, but it might help someone down the line.
Once you analyze the query execution plans and know what the difference is between the two plans (query by itself and query executing in the stored procedure with a flawed plan), you can modify the query within the stored procedure with a query hint to resolve the issue. This works in a scenario where the query is using the incorrect index when executed in the stored procedure. You would add the following after the table in the appropriate location of your procedure:
SELECT col1, col2, col3
FROM YourTableHere WITH (INDEX (PK_YourIndexHere))
This will force the query plan to use the correct index which should resolve the issue. This does not answer why it happens but it does provide a means to resolve the issue without worrying about copying the parameters to avoid parameter sniffing.
As indicated, it may be a compilation issue. Does this issue still occur if you revert the procedure? One thing you can try, if this occurs again, to force a recompilation is to use:
sp_recompile [ @objname = ] 'object'
Right from BOL, in regard to the @objname parameter:
Is the qualified or unqualified name of a stored procedure, trigger, table, or view in the current database. object is nvarchar(776), with no default. If object is the name of a stored procedure or trigger, the stored procedure or trigger will be recompiled the next time that it is run. If object is the name of a table or view, all the stored procedures that reference the table or view will be recompiled the next time they are run.
If you drop and recreate the procedure you could cause clients to fail if they try and execute the procedure. You will also need to reapply security settings.
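For the procedure from the question, that would be something like (the dbo schema is assumed):

EXEC sp_recompile N'dbo.p_MyProc';  -- recompiled the next time it runs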
Is there any chance that the parameter value being provided is sometimes not int?
Is every query reference to the parameter comparing it with int values, without functions and without casting?
Can you increase the specificity of any expressions using the parameter to make the use of multifield indexes more likely?
It is a problem with plan caching, and it isn't always related to parameters, as it was in your scenario.
(Parameter Sniffing problems occur when a proc is called with unusual parameters the FIRST time it runs, and so the cached plan works great for those odd values, but lousy for most other times the proc is called.)
We had a similar situation when the app team deleted all old records from a highly-used log table on a production server. Removing records improves performance, right? Nope, performance immediately tanked.
Turns out that a frequently-used stored proc was recompiled right when the table was nearly empty, and it cached an extremely poor execution plan ("hey, there's only 50 records here, might as well do a Table Scan!"). Would have happened no matter what the initial parameters.
Our fix was to force a recompile with sp_recompile.
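Per the BOL excerpt above, pointing sp_recompile at the table itself flushes the plans of every proc that references it (a sketch; the table name is hypothetical):

EXEC sp_recompile N'dbo.AppLog';  -- all procs referencing the table recompile on their next run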