If I am not mistaken, it is not possible to get an Explain Plan for procedures in Toad with Oracle 10g. If this is true, is there any way that I can see the cost of my procedures?
When I make a small change in one of the functions which are called by ProcedureX, the execution time of ProcedureX increases dramatically.
I tried to run each query that exists inside ProcedureX individually, but that is almost impossible due to the huge number of calls and the parameters that are passed to them.
Do you have any idea?
Thank you
DBMS_PROFILER is probably what you are looking for.
The DBMS_PROFILER package provides an interface to profile existing PL/SQL applications and identify performance bottlenecks. You can then collect and persistently store the PL/SQL profiler data.
The final HTML report it generates is pretty useful in grouping the different function calls, so you can see where your procedure is spending most of its time (provided you run it with sufficient data).
Take a look at this link and see if it helps.
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_profil.htm
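For what it's worth, here is a minimal sketch of a profiled run. It assumes the profiler tables have already been created with Oracle's proftab.sql script, and it uses a parameterless procedurex purely as a stand-in for your real call:

DECLARE
  l_ret BINARY_INTEGER;
BEGIN
  l_ret := DBMS_PROFILER.START_PROFILER(run_comment => 'ProcedureX baseline');
  procedurex;  -- stand-in: call your procedure here with representative parameters
  l_ret := DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Then aggregate the collected timings per unit and line:
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM plsql_profiler_units u
JOIN plsql_profiler_data d
  ON d.runid = u.runid AND d.unit_number = u.unit_number
ORDER BY d.total_time DESC;

The per-line totals usually make it obvious which called function the extra time went into after your change.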
Related
I have a job that runs daily and executes dozens of stored procedures.
Most of them run just fine, but several of them recently started taking a while to run (4-5 minutes).
When I come in in the morning and try to troubleshoot them, they only take 10-20 seconds, just as they're supposed to.
This has been happening for the last 10 days or so. No changes have been made to the server (we are running SQL 2012).
How do I even troubleshoot it and what can I do to fix this??
Thanks!!
You can use some DMVs (Dynamic Management Views) that SQL provides to investigate the plan cache. However, the results can be a little intimidating and without some background in it, it may be hard to dig through the results. I would recommend looking into some DMVs like sys.dm_exec_query_stats and sys.dm_exec_cached_plans. Kimberly Tripp from SQLSkills.com does some GREAT courses on Pluralsight on how to use these and get some awesome results by building more advanced queries off of those DMVs.
As well, these DMVs will return a plan_handle column which you can pass to another DMV, sys.dm_exec_query_plan(plan_handle), to return the Execution Plan for a specific statement. The hard part is going to be digging through the results of dm_exec_cached_plans to find the specific job/stored procs that are causing issues. sys.dm_exec_sql_text(qs.[sql_handle]) can help by providing a snapshot of the SQL that was run for that job but you'll get the most benefit out of it (in my opinion) by CROSS APPLYing it with some of the other DMVs I mentioned. If you can identify the Job/Proc/Statement and look at the plan, it will likely show you some indication of the parameter sniffing problem that Sean Lange mentioned.
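To give a rough idea, a query along these lines pulls the statement text, runtime stats and plan together; the column list and ORDER BY here are just illustrative choices, not the only way to slice the cache:

SELECT TOP (20)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    qs.last_execution_time,
    st.[text] AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.[plan_handle]) AS qp
ORDER BY qs.total_elapsed_time DESC;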
Just in case: parameter sniffing is when, on the first execution of a query/stored proc, SQL looks at the parameter that you passed in and builds a plan based on it. The plan that gets generated from that initial compilation of the query/proc will be ideal for the specific parameter that you passed in, but might not be ideal for other parameters. Imagine a highly skewed table where all of the dates are '01-01-2000', except one which is '10-10-2015'.
Passing those two parameters in would generate vastly different plans due to data selectivity (read: how unique is the data?). If one of those plans gets saved to cache and called for each subsequent execution, it's possible (and in some cases, likely) that it's not going to be ideal for other parameters.
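A contrived sketch of that skew (the table, proc and column names are invented for illustration):

CREATE PROCEDURE dbo.GetOrdersByDate @OrderDate DATE
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate = @OrderDate;
END;
GO

EXEC dbo.GetOrdersByDate @OrderDate = '2000-01-01'; -- nearly every row: the sniffed plan favours a scan
EXEC dbo.GetOrdersByDate @OrderDate = '2015-10-10'; -- one row: a seek would be ideal, but the cached scan plan is reused

Whichever of the two runs first determines the plan that every subsequent execution gets.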
The likely reason why you're seeing a difference in speed between the Job and when you run the command yourself is that when you run it, you're running it ad hoc. The Job isn't; it's running them as stored procs, which means they're going to use different execution plans.
TL;DR:
The Execution Plan that you have saved for the Job is not optimized. However, when you run it manually, you're likely creating an Ad Hoc plan that is optimized for that SPECIFIC run. It's a bit of a hard road to dig into the plan cache and see what's going on but it's 100% worth it. I highly recommend looking up Kimberly Tripp's blog as she has some great posts about this and also some fantastic courses on Pluralsight regarding this.
I've just inherited an old PostgreSQL installation and need to do some diagnostics to find out why this database is running slow. On MS SQL you would use a tool such as Profiler to see what queries are running and then see how their execution plan looks like.
What tools, if any, exist for PostgreSQL that I can do this with? I would appreciate any help since I'm quite new with Postgres.
Use the pg_stat_statements extension to find long-running queries. Then use select * from pg_stat_statements order by total_time/calls desc limit 10 to get the ten with the longest mean time, and then use explain to see the plan...
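Roughly like this (assuming pg_stat_statements is already listed in shared_preload_libraries; on PostgreSQL 13 and later the column is total_exec_time rather than total_time):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT query, calls, total_time, total_time / calls AS mean_time
FROM pg_stat_statements
ORDER BY total_time / calls DESC
LIMIT 10;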
My general approach is usually a mixture of approaches. This requires no extensions.
set the log_min_duration_statement to catch long-running queries. https://dba.stackexchange.com/questions/62842/log-min-duration-statement-setting-is-ignored should get you started.
Use profiling of client applications to see which queries they are spending their time on. Sometimes one has queries which each take a small amount of time but are repeated so frequently that they cause performance problems.
Of course explain analyze can help too. If you are looking inside plpgsql functions, however, you often need to pull out the queries and run explain analyze on them directly.
Note: ALWAYS run explain analyze in a transaction that rolls back or a read-only transaction unless you know that it does not write to the database.
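For instance (a sketch only; the 500 ms threshold and the statement inside the transaction are illustrative):

-- in postgresql.conf (then reload): log any statement slower than 500 ms
log_min_duration_statement = 500

-- explain analyze actually executes the statement, so roll it back:
BEGIN;
EXPLAIN ANALYZE UPDATE some_table SET some_col = some_col WHERE id = 1;
ROLLBACK;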
I want to write a generic logging snippet into a collection of stored procedures. I'm writing this to have a quantitative measure of our front-end user experience, as I know which SPs are used by the front-end software and how they are used. I'd like to use this to gather a baseline before we commence performance tuning, and afterwards to show the outcome of the tuning.
I can dynamically pull the object name from @@PROCID, but I've been unable to determine all the parameters passed and their values. Anyone know if this is possible?
EDIT: marking my response as the answer to close this question. It appears Extended Events are the least intrusive option with respect to performance; however, I'm not sure if there is any substantial difference between minimal profiling and Extended Events. Perhaps something for a rainy day.
I can get the details of the parameters taken by the proc without parsing its text (at least in SQL Server 2005).
select * from INFORMATION_SCHEMA.PARAMETERS
where SPECIFIC_NAME = OBJECT_NAME(@@PROCID)
And I guess that this means that I could, with some appropriately madcap dynamic SQL, also pull out their values.
I don't know how to do this off the top of my head, but I would consider running a trace instead if I were you. You can use SQL Server Profiler to gather only information for the stored procedures that you specify (using filters). You can send the output to a table and then query the results to your heart's content. The output can include IO information, what parameters were passed, the client userid and machine, and much much more.
After running the trace you can aggregate the results into reports that would show how many times a procedure was called, what parameters were used, etc...
Here is a link that might help:
http://msdn.microsoft.com/en-us/library/ms187929.aspx
It appears the best solution for my situation is to run profiling gathering only the SP:Starting and SP:Completed events, and to write some T-SQL to iterate through the data and populate a tracking table.
I personally preferred code generation for this, but politically where I'm working they preferred this solution. We lost some granularity in logging, but it is a sufficient solution to my problem.
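For reference, a rough sketch of reading that trace data back with T-SQL before loading it into a tracking table (the file path is invented; event class 42 is SP:Starting and 43 is SP:Completed):

SELECT t.DatabaseName, t.ObjectName, t.StartTime, t.EndTime,
       DATEDIFF(ms, t.StartTime, t.EndTime) AS duration_ms
FROM sys.fn_trace_gettable(N'C:\Traces\sp_trace.trc', DEFAULT) AS t
WHERE t.EventClass = 43 -- SP:Completed
ORDER BY duration_ms DESC;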
EDIT: This ended up being an OK solution. Even profiling just these two events degrades performance to a noticeable degree. :( I wish we had an MSFT-provided way to profile a workload that didn't degrade production performance. Oracle has a nice solution to this, but it has its tradeoffs as well. I'd love to see MSFT implement something similar. The new DMVs and Extended Events help to correlate items. Thanks again for the link Martin.
Does the size of a stored procedure affect its execution performance?
Is it better, with regard to performance, to have one large SP that does all the processing, or to split it into multiple SPs?
Let me paraphrase: "Does the size of my function affect its execution performance?"
The obvious answer is: No. The function will run as fast as it possibly can on the hardware it happens to run on. (To be clear: a longer function will take longer to execute, of course. But it will not run slower. Therefore, the performance is unaffected.)
The right question is: "Does the way I write my function affect its execution performance?"
The answer is: Yes, by all means.
If you are in fact asking the second question, you should add a code sample and be more specific.
No, not really - or not much, anyway. The stored proc is precompiled on the server, and it's not being sent back and forth between server and client, so its size is really not all that relevant.
It's more important to have it set up in a maintainable and easy to read way.
Marc
Not sure what you mean by the SIZE of a stored procedure (lines of code? complexity? number of tables? number of joins?), but the execution performance depends entirely upon the execution plans of the SQL defined and compiled within the stored procedure. This can be monitored quite well through SQL Profiler if you are using SQL Server. Performance is most heavily taxed by things like joins and table scans, and a good tool can help you figure out where to place your indexes, or think of better ways to define the SQL. Hope this helps.
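As a quick way to see where a procedure's time and I/O go without a full trace, something like this works from SQL Server Management Studio (the procedure name is a placeholder):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.MyLargeProc; -- hypothetical procedure under test
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;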
You could possibly cause worse performance by coding multiple stored procedures, if the execution plans need to be flushed to reclaim local memory and a single procedure would not.
We have hit situations where a flushed stored procedure is needed again and must be recompiled. When querying a view accessing hundreds of partitioned tables, this can be costly and has caused timeouts in our production environment. Combining eight procedures into two solved this problem.
On the other hand, we had one stored procedure that was so complex that breaking it up into multiples allowed the query execution plan to be simpler for the chunks and performed better.
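If you are on SQL Server 2008 or later, a sketch like this shows whether a procedure's plan is still cached and when it was last compiled (current database only; the averaging is illustrative):

SELECT OBJECT_NAME(ps.object_id) AS proc_name,
       ps.cached_time,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY avg_elapsed_time DESC;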
The other answers that are basically "it depends" are dead on. No matter how fast a DB you have, a bad query can bring it to its knees. And each situation is unique. In most places, coding in a modular and easily understandable way performs better and is cheaper to maintain. SQL Server has to "understand" it too, as it builds the query plans.
One of my co-workers claims that even though the execution path is cached, there is no way parameterized SQL generated from an ORM is as quick as a stored procedure. Any help with this stubborn developer?
I would start by reading this article:
http://decipherinfosys.wordpress.com/2007/03/27/using-stored-procedures-vs-dynamic-sql-generated-by-orm/
Here is a speed test between the two:
http://www.blackwasp.co.uk/SpeedTestSqlSproc.aspx
Round 1 - You can start a profiler trace and compare the execution times.
For most people, the best way to convince them is to "show them the proof." In this case, I would create a couple basic test cases to retrieve the same set of data, and then time how long it takes using stored procedures versus NHibernate. Once you have the results, hand it over to them and most skeptical people should yield to the evidence.
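Since NHibernate's provider typically sends its parameterized SQL through sp_executesql, a rough server-side proxy for that comparison looks like this (table, proc and parameter names are invented); a full test should still be driven from the application so the ORM overhead is included:

SET STATISTICS TIME ON;

EXEC dbo.GetOrdersByDate @OrderDate = '2015-10-10'; -- stored procedure version

EXEC sp_executesql
     N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate = @OrderDate',
     N'@OrderDate DATE',
     @OrderDate = '2015-10-10'; -- roughly what an ORM-style parameterized call looks like

SET STATISTICS TIME OFF;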
I would only add a couple things to Rob's answer:
First, make sure the amount of data involved in the test cases is similar to production values. In other words, if your queries normally run against tables with hundreds of thousands of rows, then create such a test environment.
Second, make everything else equal except for the use of an NHibernate-generated query versus a sproc call. Hopefully you can execute the test by simply swapping out a provider.
Finally, realize that there is usually a lot more at stake than just stored procedures vs. ORM. With that in mind the test should look at all of the factors: execution time, memory consumption, scalability, debugging ability, etc.
The problem here is that you've accepted the burden of proof. You're unlikely to change someone's mind like that. Like it or not, people, even programmers, are just too emotional to be easily swayed by logic. You need to put the burden of proof back on him: get him to convince you otherwise, and that will force him to do the research and discover the answer for himself.
A better argument to use stored procedures is security. If you use only stored procedures, with no dynamic sql, you can disable SELECT, INSERT, UPDATE, DELETE, ALTER, and CREATE permissions for the application database user. This will protect you against most 2nd order SQL Injection, whereas parameterized queries are only effective against first order injection.
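On SQL Server that permission model boils down to something like this (the schema and user names are placeholders):

DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_user;
GRANT EXECUTE ON SCHEMA::dbo TO app_user;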
Measure it, but in a non-micro-benchmark, i.e. something that represents real operations in your system. Even if there would be a tiny performance benefit for a stored procedure it will be insignificant against the other costs your code is incurring: actually retrieving data, converting it, displaying it, etc. Not to mention that using stored procedures amounts to spreading your logic out over your app and your database with no significant version control, unit tests or refactoring support in the latter.
Benchmark it yourself. Write a testbed class that executes a sampled stored procedure a few hundred times, and run the NHibernate code the same amount of times. Compare the average and median execution time of each method.
It is just as fast if the query is the same each time. SQL Server 2005 caches query plans at the level of each statement in a batch, regardless of where the SQL comes from.
The long-term difference might be that stored procedures are many, many times easier for a DBA to manage and tune, whereas hundreds of different queries that have to be gleaned from profiler traces are a nightmare.
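You can see the caching claim for yourself: both procs and parameterized statements show up in the plan cache, just with different objtype values. A sketch (SQL Server 2005 and later):

SELECT cp.objtype, -- 'Proc', 'Prepared', 'Adhoc', ...
       cp.usecounts,
       st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;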
I've had this argument many times over.
Almost always I end up grabbing a really good DBA, running a proc and a piece of code with the profiler running, and getting the DBA to show that the results are so close that the difference is negligible.
Measure it.
Really, any discussion on this topic is probably futile until you've measured it.
He may be correct for the specific use case he is thinking of. A stored procedure will probably execute faster for some complex set of SQL that can be arbitrarily tuned. However, something you get from tools like Hibernate is caching, which may prove much faster over the lifetime of your actual application.
The additional layer of abstraction will cause it to be slower than a pure call to a sproc. Just by the fact that you have additional allocations on the managed heap and additional pushes and pops on the call stack, it is more efficient to call a sproc than to have an ORM build the query, regardless of how good the ORM is.
How much slower, if it's even measurable, is debatable. This is also helped by the fact that most ORMs have a caching mechanism to avoid issuing the query at all.
Even if the stored procedure is 10% faster (it probably isn't), you may want to ask yourself how much it really matters. What really matters in the end, is how easy it is to write and maintain code for your system. If you are coding a web app, and your pages all return in 0.25 seconds, then the extra time saved by using stored procedures is negligible. However, there can be many added advantages of using an ORM like NHibernate, which would be extremely hard to duplicate using only stored procedures.