I have a cursor which returns two values: one which I will use (and therefore will assign to an out variable) and another which I've only had returned to make the ROWNUM thing work.
If I run the cursor as a query, it works as expected. But if I execute the procedure, the out variable comes back empty. Is my approach somehow not supported? I mean, returning two values but only using one of them?
Here is my procedure code. (Don't dwell too much on the query itself. I know it's a bit ugly, but it works; it was the only way I found to return the second-to-last row.)
procedure retorna_infos_tabela_164(i_nip in varchar,
                                   o_CODSDPANTERIOR out number) is
  cursor c_tabela_164 is
    select *
      from (select CODSDP, ROWNUM rn
              from (select NRONIP, CODTIPOMOV, CODSDP
                      from TB164_HISTORICOMOVIMENTACOES
                     where NRONIP = i_nip
                       and CODTIPOMOV = 'S1'
                     order by DTHMOV desc))
     where rn = 2;
  v_temp_nr number;
begin
  open c_tabela_164;
  fetch c_tabela_164 into o_CODSDPANTERIOR, v_temp_nr;
  close c_tabela_164;
end retorna_infos_tabela_164;
EDIT The way I tried to run this procedure was with dbms_output.put_line(o_CODSDPANTERIOR), which didn't work. Then I googled a little and saw I should TO_CHAR() my variable first and then output it. That didn't work either.
There's no problem with passing a number to DBMS_OUTPUT.PUT_LINE. Oracle will silently convert other built-in types to VARCHAR2 using the default format. You only need to use TO_CHAR if you want to control the format used -- which is often a good idea, but not generally necessary.
One possibility, though, is that you are not seeing the output because you have not enabled it. If you are running your test in SQL*Plus, make sure you SET SERVEROUTPUT ON before running code that includes DBMS_OUTPUT calls. If you are using some other client, consult its documentation for the proper way to enable DBMS_OUTPUT. (You can of course test whether it's enabled by adding another call to output a string literal.)
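For example, a quick SQL*Plus test might look like this (the NIP value here is just a made-up placeholder):

SET SERVEROUTPUT ON

DECLARE
  v_result number;
BEGIN
  retorna_infos_tabela_164('1234567', v_result);            -- hypothetical NIP value
  DBMS_OUTPUT.PUT_LINE('output is enabled');                -- string literal, proves output works
  DBMS_OUTPUT.PUT_LINE('o_CODSDPANTERIOR = ' || v_result);  -- implicit NUMBER-to-VARCHAR2 conversion
END;
/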
There's nothing inherently wrong with the technique you're using to populate the out parameter. However, it's not necessary to return two columns from the cursor; your select * could simply be select CODSDP. You seem to be under the misconception that any column referenced in the predicates has to be in the select list, but that's not the case. In your innermost query, the select list does not need to include NRONIP or CODTIPOMOV, because they are not referenced in the outer blocks; the WHERE clause in that query can reference any column in the table, regardless of whether it is in the select list.
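In other words, the cursor could be trimmed down to something like this (an untested sketch based on your posted code), and the dummy variable in the FETCH goes away:

cursor c_tabela_164 is
  select CODSDP
    from (select CODSDP, ROWNUM rn
            from (select CODSDP
                    from TB164_HISTORICOMOVIMENTACOES
                   where NRONIP = i_nip
                     and CODTIPOMOV = 'S1'
                   order by DTHMOV desc))
   where rn = 2;
...
fetch c_tabela_164 into o_CODSDPANTERIOR;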
So, my first guess is that you simply don't have server output enabled. The only other possibility I can think of right now is that you're running your query and the procedure in two different sessions, and one of them has uncommitted transaction against the table, so they are actually seeing different data.
If those suggestions don't seem to be the problem, I'd suggest you run your tests of the standalone query and the procedure in a single SQL*Plus session, then copy and paste the entire session here, so we can see exactly what you're doing.
I'm sorry I had you guys take the time to answer when the problem turned out to be the tool I'm using. I hope you all learnt something anyway.
The query does work for me, at least; I've not come across any edge cases where it fails, but I haven't tested it exhaustively.
The problem was that TOAD, the tool I'm using to run the procedures, sometimes populates the procedure call with the parameters I give it and sometimes it doesn't. The issue here was that I was executing the procedure with no parameters, which yielded no results...
Lesson Learnt: double check the generated procedure code when you run a Procedure using Right Click > Run Procedure on TOAD version 9.
I have a procedure that looks like this:
create or replace procedure proc1 (prc out sys_refcursor, <filter variables>)
as
begin
  open prc for
    select * from blah blah blah .. <logic using filter variables, calculations, etc.>;
end proc1;
I was wondering if it is possible to use the output from this procedure in another procedure to further filter the data I am looking at and do more calculations. Is there a way to pass the sys_refcursor to another procedure and select into that (probably a bad idea)? Or would a temporary table help here?
I understand that I could make this into one procedure but I need the data from both separately as they are both relevant to what I am doing.
Once you have wrapped your result set in a cursor, your SQL options are limited. You can of course pass the cursor to another function and fetch from it there. But you'll have to do all the dirty filtering work yourself.
Still, passing cursors around is sometimes a valid design pattern. Typically you will fetch from the cursor and generate other selects from that. However, in your case you want to filter your data further, and for that a cursor is generally not a good choice, because you lose the power of SQL.
If you really want to do such a thing, you can use pipelined functions. In contrast to cursors, these allow you to create a (virtual) table that you can query with plain old SELECT again. And of course you can build such a pipelined function around a given cursor by fetching from it and invoking PIPE ROW repeatedly.
But all of this is tedious and requires quite some boilerplate code.
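A rough sketch of what that boilerplate might look like (all type, column and function names here are invented):

-- A SQL object type and collection type matching the cursor's projection.
create or replace type t_row as object (col1 number, col2 varchar2(100));
/
create or replace type t_row_tab as table of t_row;
/
-- Pipelined function: drains the ref cursor and exposes its rows to plain SQL again.
create or replace function pipe_proc1_rows(p_cur sys_refcursor)
  return t_row_tab pipelined
is
  v_col1 number;
  v_col2 varchar2(100);
begin
  loop
    fetch p_cur into v_col1, v_col2;
    exit when p_cur%notfound;
    pipe row (t_row(v_col1, v_col2));
  end loop;
  close p_cur;
  return;
end;
/
-- Further filtering is then ordinary SQL again:
-- select * from table(pipe_proc1_rows(:my_cursor)) where col1 > 42;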
In general there is not much penalty in just writing multiple selects with different where clauses. If you want to make it explicit that these selects restrict the result set more and more, use select from select, perhaps placing the inner selects into views, thus creating a hierarchy of views, as sketched below.
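A minimal sketch of such a hierarchy (table, column and filter names are made up):

-- Each view narrows the one below it; procedures can open cursors on any level.
create or replace view v_base_data as
  select * from some_table
   where region = 'EU';

create or replace view v_filtered_data as
  select * from v_base_data
   where amount > 1000;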
I'm using SQL Server 2012, and I'm debugging a stored procedure that does an INSERT INTO #temporal ... SELECT into a temporary table.
Is there any way to view the data selected by the command (the SELECT part of the INSERT INTO)?
Is there any way to view the data inserted and/or the temporary table where the insert made the changes?
It doesn't matter if it's all the rows at once rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy mean that any modification must go through the testing process, and that will probably be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their own desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be returned to the processing application for use in such things as confirmation messages, archiving, and other such application requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows that the INSERT statement inserted into the temporary table, i.e. the rows selected from the source table.
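A small self-contained illustration of the idea (table and column names are made up):

CREATE TABLE #temporal (id INT, name VARCHAR(50));

INSERT INTO #temporal (id, name)
OUTPUT inserted.id, inserted.name      -- echoes every inserted row back to the client
SELECT id, name
FROM dbo.SourceTable                   -- hypothetical source table
WHERE id > 100;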
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way rather than the other (cleaner/better) answer is that you get a bit more control here. And if you're in a situation where you have multiple inserts into your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way, though (now that I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that lets you put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change the current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Furthermore, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
Does anyone have some code to simply log some detailed information to a file within a SQL query (or stored procedure or trigger)? I'm not looking for anything fancy. All I want to do is to quickly put debug information into my SQL, much like folks do for JavaScript debugging using alerts. I looked at using Lumigent, but that seems like overkill for what I want to do. I don't care what the format of the logging is in. Below is a simple example of what I'd like to do.
Example:
DECLARE @x int;
SET @x = '123'
-- log the value of @x
============
9/6/2011 @ 4:01pm update
I tried the sqlcmd approach below, which works well. But it doesn't work well when there are 100 parameters on the stored procedure I want to debug. In that case I need to put a breakpoint in my client code, get the value of each argument, type out the exec command, and then look at the output file. All I want to do is put one simple line of code into my SQL (perhaps calling another stored procedure if it takes more than one line of code) that writes a variable's value to a file. That's it. I'm just using this for debugging purposes.
One pretty easy method is to use either OSQL or SQLCMD to run your procedure. These are command-line methods for executing SQL commands/procedures/scripts.
With those utilities you can pipe the output (what would normally appear in the "Messages" tab in SSMS) to a text file.
If you do this, in your example the code would be:
DECLARE @x int;
SET @x = '123'
PRINT @x
If you are running the same procedure multiple times, you can just save it as a one-line batch file to make it very easy to test.
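For example, a one-line batch file along these lines (server, database, procedure name and output path are all placeholders) would capture the PRINT output in a text file:

sqlcmd -S MyServer -d MyDatabase -E -Q "EXEC dbo.MyProcedure @x = 123" -o C:\temp\proc_debug.txt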
Now with more background I think I can promote my comment to an answer:
Why does it have to be a file? If this is just during debugging, can't you just as easily log to a table, and when you want to see the recent results:
SELECT TOP (n) module, parameters, etc.
FROM logTable
ORDER BY DateCreated DESC;
You can simplify the logging or at least make it easier to replicate from procedure to procedure by having a stored procedure that takes various arguments such as @@PROCID and others to centralize the logging. See this article I wrote for some ideas there - it's geared to just logging once per stored procedure call but you could certainly call it as often as you like inside any stored procedure.
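A minimal sketch of that kind of central logging (table and procedure names here are invented, not taken from the linked article):

-- Hypothetical logging table matching the query above.
CREATE TABLE dbo.logTable
(
    LogID       INT IDENTITY PRIMARY KEY,
    module      SYSNAME,
    parameters  NVARCHAR(4000),
    DateCreated DATETIME NOT NULL DEFAULT GETDATE()
);
GO
-- Hypothetical helper: callers pass @@PROCID so the proc name is resolved centrally.
CREATE PROCEDURE dbo.LogDebug
    @ProcId INT,
    @Params NVARCHAR(4000)
AS
BEGIN
    INSERT dbo.logTable (module, parameters)
    VALUES (OBJECT_NAME(@ProcId), @Params);
END
GO
-- Inside any stored procedure, as often as you like:
-- EXEC dbo.LogDebug @@PROCID, N'@x = 123';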
This seems like much less hassle than using an archaic file-based log approach. You're already using a database, take advantage!
If you're committed to using a file for whatever reason (it might help to understand or counter if you enumerate those reasons), then the next best choice would likely be CLR, as already mentioned. A complete solution in this case might be beyond the scope of this single question, but there are tons of examples online.
What is the best way to troubleshoot a stored procedure in SQL Server? I mean, where do you start, etc.?
Test each SELECT statement (if any) outside of your stored procedure to see whether it returns the expected results;
Make INSERT and UPDATE statements as simple as possible;
Try to test Inserts and Updates outside of your SP so that you can check that they give the expected results;
Use the debugger provided with SSMS Express 2008.
Visual Studio 2008 / 2010 has a debug facility. Simply connect to your SQL Server instance in 'Server Explorer' and browse to your stored procedure.
Visual Studio 'Test Edition' also can generate Unit Tests around your stored procedures.
Troubleshooting a complex stored proc is far more than just determining whether you can get it to run and finding the step which won't run. What is most critical is whether it actually returns the correct results or performs the correct actions.
There are two kinds of stored procs that need extensive abilities to troubleshoot. First is the proc which creates dynamic SQL. I never create one of these without an input parameter of @debug. When this parameter is set, I have the proc print the SQL statement as it would have run, and not run it. Almost every time, this leads you straight to the problem, as you can then see the syntax error in the generated SQL code. You can also run this SQL code to see if it returns the records you expect.
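A rough sketch of that @debug pattern (procedure, table and column names are all illustrative):

CREATE PROCEDURE dbo.SearchOrders
    @CustomerName VARCHAR(100),
    @debug BIT = 0
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);

    SET @sql = N'SELECT * FROM dbo.Orders WHERE CustomerName = '''
             + REPLACE(@CustomerName, '''', '''''') + N'''';

    IF @debug = 1
        PRINT @sql;               -- show the generated statement instead of running it
    ELSE
        EXEC sp_executesql @sql;  -- normal execution path
END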
Now with complex procs that have many steps that affect data, I always use an @test input parameter. There are two things I do with the @test parameter: first, I make it roll back the actions so that a mistake in development won't mess up the data; second, I have it display the data before it rolls back, to see what the results would have been. (These actually appear in the reverse order in the proc; I just think of them in this order.)
Now I can see what would have gone into the table or been deleted from the tables without affecting the data permanently. Sometimes I might start with a select of the data as it was before any actions and then compare it to a select run afterwards.
Finally, I often want to log the actions of a complex proc and see exactly what steps happened. I don't want those logs to get rolled back if the proc hits an error, so I set up a table variable for the logging information I want at the start of the proc. After each step (or after an error, depending on what I want to log), I insert into this table variable. After the rollback or commit statement, I select the results of the table variable or use those results to log to a permanent logging table. This can be especially nice if you are using dynamic SQL, because you can log the SQL that was run, and then when something strange fails on prod you have a record of which statement was run when it failed. You do this in a table variable because table variables are not affected by a rollback.
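An illustrative skeleton combining the @test flag with table-variable logging (all table names and steps are invented):

CREATE PROCEDURE dbo.ProcessImport
    @test BIT = 0
AS
BEGIN
    -- The table variable survives the rollback, so the log is kept either way.
    DECLARE @log TABLE (StepName VARCHAR(100), LoggedAt DATETIME DEFAULT GETDATE());

    BEGIN TRANSACTION;

    INSERT INTO dbo.TargetTable (Col1)
    SELECT Col1 FROM dbo.StagingTable;
    INSERT INTO @log (StepName) VALUES ('Copied staging rows');

    DELETE FROM dbo.StagingTable;
    INSERT INTO @log (StepName) VALUES ('Cleared staging');

    IF @test = 1
    BEGIN
        SELECT * FROM dbo.TargetTable;   -- show what would have been committed
        ROLLBACK TRANSACTION;            -- leave the data untouched
    END
    ELSE
        COMMIT TRANSACTION;

    SELECT * FROM @log;                  -- still populated even after a rollback
END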
In SSMS, you can simply start by opening the proc and clicking the check-mark button (Parse) next to the Execute button on the toolbar. It reports any errors it finds.
If there are no errors there and your stored procedure is harmless to run (you're not inserting into tables, just creating a temp table, for example), then comment out the CREATE PROCEDURE x (or ALTER PROCEDURE x) line, declare all the parameters by copying that part, and define them with valid values. Then run it to see what happens.
Maybe this is simple, but it's a place to start.
I know this has something to do with parameter sniffing, but I'm just perplexed at how something like the following example is even possible with a piece of technology that does so many complex things well.
Many of us have run into stored procedures that intermittently run several orders of magnitude slower than usual, and then if you copy the SQL out of the procedure and run it with the same parameter values in a separate query window, it runs as fast as usual.
I just fixed a procedure like that by converting this:
alter procedure p_MyProc
(
@param1 int
) as -- do a complex query with @param1
to this:
alter procedure p_MyProc
(
@param1 int
)
as
declare @param1Copy int;
set @param1Copy = @param1;
-- Do the query using @param1Copy
It went from running in over a minute back down to under one second, like it usually runs. This behavior seems totally random. For 9 out of 10 @param1 inputs, the query is fast, regardless of how much data it ends up needing to crunch or how big the result set is. But for that 1 out of 10, it just gets lost. And the fix is to replace an int with the same int in the query?
It makes no sense.
[Edit]
@gbn linked to this question, which details a similar problem:
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
I hesitate to cry "Bug!" because that's so often a cop-out, but this really does seem like a bug to me. When I run the two versions of my stored procedure with the same input, I see identical query plans. The only difference is that the original takes more than a minute to run, and the version with the goofy parameter copying runs instantly.
The 1 in 10 case generates the wrong plan, and that plan is then cached.
RECOMPILE adds an overhead; masking allows each parameter to be evaluated on its own merits (put very simply).
By wrong plan: what if the 1-in-10 value generates a scan on index 1 while the other 9 produce a seek on index 2? E.g. the 1-in-10 value matches, say, 50% of the rows.
Edit: other questions
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
Stored Procedure failing on a specific user
Edit 2:
Recompile does not work because the parameters are sniffed at compile time.
From other links (pasted in):
This article explains...
...parameter values are sniffed during compilation or recompilation...
Finally (edit 3):
Parameter sniffing was probably a good idea at the time and probably works well most of the time. We use masking across the board for any parameter that will end up in a WHERE clause.
We don't strictly need to, because we know that only a few procedures (more complex ones, e.g. reports, or ones with many parameters) could cause issues, but we use it for consistency.
And there's the fact that sniffing will come back and bite us when the users complain and we find we should have used masking...
It's probably caused by the fact that SQL Server compiles stored procedures and caches execution plans for them, and the cached execution plan is unsuitable for this new set of parameters. You can try the WITH RECOMPILE option to see if that's the cause.
EXECUTE MyProcedure [parameters] WITH RECOMPILE
The WITH RECOMPILE option will force SQL Server to ignore the cached plan.
I have had this problem repeatedly when moving my code from a test server to production - on two different builds of SQL Server 2005. I think there are some big problems with the parameter sniffing in some builds of SQL Server 2005. I never had this problem on the dev server or on two local Developer Edition boxes. I've never seen it be such a big problem on SQL Server 2000 or any version going back to 6.5 either.
The cases where I found it, the only workaround was to use parameter masking, and I'm still hoping the DBAs will patch up the production server to SP3 so it will maybe go away. Things which did not work:
using the WITH RECOMPILE hint on EXEC or in the SP itself.
dropping and recreating the SP
using sp_recompile
Note that in the case I was working on, the data had not changed since an earlier invocation - I had simply scripted the code onto the production box, which already had data loaded. All the invocations came with no changes to the data since before the SPs existed.
Oh, and if SQL Server can't handle this without masking, they need to add a parameter modifier like NOSNIFF or something. What happens if you mask all your parameters, so you have @Something_parm and @Something_var, and someone changes the code to use the wrong one, and all of a sudden you have a sniffing problem again? Plus you are polluting the namespace within the SP. All these SPs I am "fixing" drive me nuts because I know they are going to be a maintenance nightmare for the less experienced staff I will be handing this project off to one day.
Could you check in SQL Profiler how many reads there are and what the execution time is when it is quick and when it is slow? It could be related to the number of rows fetched depending on the parameter value. It doesn't sound like a plan cache issue.
I know this is a 2 year old thread, but it might help someone down the line.
Once you analyze the query execution plans and know what the difference is between the two plans (query by itself and query executing in the stored procedure with a flawed plan), you can modify the query within the stored procedure with a query hint to resolve the issue. This works in a scenario where the query is using the incorrect index when executed in the stored procedure. You would add the following after the table in the appropriate location of your procedure:
SELECT col1, col2, col3
FROM YourTableHere WITH (INDEX (PK_YourIndexHere))
This will force the query plan to use the correct index which should resolve the issue. This does not answer why it happens but it does provide a means to resolve the issue without worrying about copying the parameters to avoid parameter sniffing.
As indicated, it may be a compilation issue. Does this issue still occur if you revert the procedure? One thing you can try, if this occurs again, to force a recompilation is to use:
sp_recompile [ @objname = ] 'object'
Right from BOL in regard to the @objname parameter:
Is the qualified or unqualified name of a stored procedure, trigger, table, or view in the current database. object is nvarchar(776), with no default. If object is the name of a stored procedure or trigger, the stored procedure or trigger will be recompiled the next time that it is run. If object is the name of a table or view, all the stored procedures that reference the table or view will be recompiled the next time they are run.
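For example (the procedure name is just a placeholder):

EXEC sp_recompile N'dbo.p_MyProc';   -- existing plans are invalidated; the proc recompiles on its next execution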
If you drop and recreate the procedure, you could cause clients to fail if they try to execute it at that moment. You will also need to reapply security settings.
Is there any chance that the parameter value being provided is sometimes not an int?
Is every query reference to the parameter comparing it with int values, without functions and without casting?
Can you increase the specificity of any expressions using the parameter to make the use of multifield indexes more likely?
It is a problem with plan caching, and it isn't always related to parameters, as it was in your scenario.
(Parameter Sniffing problems occur when a proc is called with unusual parameters the FIRST time it runs, and so the cached plan works great for those odd values, but lousy for most other times the proc is called.)
We had a similar situation when the app team deleted all old records from a highly-used log table on a production server. Removing records improves performance, right? Nope, performance immediately tanked.
Turns out that a frequently-used stored proc was recompiled right when the table was nearly empty, and it cached an extremely poor execution plan ("hey, there's only 50 records here, might as well do a Table Scan!"). It would have happened no matter what the initial parameters were.
Our fix was to force a recompile with sp_recompile.