Are there any 3rd party profiler apps for SQL Server 2005+

I know that's a crazy question. I know I can use the built-in SQL profiler, client statistics, and the other statistics built into SQL Server (oh, just view the query plan) -- but that's all ugly, takes too much time to set up, and the results are not intuitive.
What I was hoping for was something like JetBrains DotTrace where you could see hotspots of slow code - but applied to stored procedures.
Let me also add that I am working with existing stored procs that are lengthy - some are 10k plus lines. While this is not ideal, I only want to start by refactoring small parts of the worst-performing stored procs - and so that I don't have to spend all day looking at performance numbers/timings/etc., I just want a profiler that shows me where in those procs the time is being spent (which blocks or lines).
Crazy request, I know - hopefully someone knows something that would be helpful.

Have a look at DBSophic's tools. The free tool helps you analyze a trace you've already collected, and gives recommendations for T-SQL re-writes, schema changes etc., focusing on the most painful parts (regardless of number of lines in modules).
If you combine their Workload Analyzer with our tool, SQL Sentry Performance Advisor, you can point it at workload data we've already collected, so you don't have to go and manually collect your own trace. I wrote a blog post about this.

Related

Stored procedure runs slow on the first run

I have a job that runs daily and executes dozens of stored procedures.
Most of them run just fine, but several of them recently started taking a while to run (4-5 minutes).
When I come in in the morning and try to troubleshoot them, they only take 10-20 seconds, just as they're supposed to.
This has been happening for the last 10 days or so. No changes have been made to the server (we are running SQL 2012).
How do I even troubleshoot this, and what can I do to fix it?
Thanks!!
You can use some DMVs (Dynamic Management Views) that SQL provides to investigate the plan cache. However, the output can be a little intimidating, and without some background it may be hard to dig through the results. I would recommend looking into DMVs like sys.dm_exec_query_stats and sys.dm_exec_cached_plans. Kimberly Tripp from SQLSkills.com does some GREAT courses on Pluralsight on how to use these and get some awesome results by building more advanced queries off of those DMVs.
As well, these DMVs will return a plan_handle column which you can pass to another DMV, sys.dm_exec_query_plan(plan_handle), to return the Execution Plan for a specific statement. The hard part is going to be digging through the results of dm_exec_cached_plans to find the specific job/stored procs that are causing issues. sys.dm_exec_sql_text(qs.[sql_handle]) can help by providing a snapshot of the SQL that was run for that job but you'll get the most benefit out of it (in my opinion) by CROSS APPLYing it with some of the other DMVs I mentioned. If you can identify the Job/Proc/Statement and look at the plan, it will likely show you some indication of the parameter sniffing problem that Sean Lange mentioned.
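For a concrete starting point, here is a minimal sketch of that kind of plan-cache query (the DMVs and columns are standard; the TOP, ordering, and any filtering are just one reasonable choice and will need adapting to your jobs/procs):

-- Times from sys.dm_exec_query_stats are reported in microseconds.
SELECT TOP (20)
    OBJECT_NAME(st.objectid, st.dbid) AS object_name,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    qs.total_worker_time  / qs.execution_count AS avg_cpu_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_elapsed_time DESC;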
Just in case: parameter sniffing is when, on the first run of a query/stored proc, SQL looks at the parameter you passed in and builds a plan based on it. The plan that gets generated from that initial compilation of the query/proc will be ideal for the specific parameter that you passed in but might not be ideal for other parameters. Imagine a highly skewed table where all of the dates are '01-01-2000', except one which is '10-10-2015'.
Passing those two parameters in would generate vastly different plans due to data selectivity (read: how unique is the data?). If one of those plans gets saved to cache and called for each subsequent execution, it's possible (and in some cases, likely) that it's not going to be ideal for other parameters.
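To make that skew example concrete, here is a hedged sketch (the table, proc, and column names are hypothetical) of how the first call pins the plan for everyone else, plus the most common quick fix:

-- Hypothetical skewed table: almost every row has OrderDate = '01-01-2000'.
CREATE PROCEDURE dbo.usp_GetOrdersByDate
    @OrderDate datetime
AS
BEGIN
    SELECT OrderID, CustomerID
    FROM dbo.Orders
    WHERE OrderDate = @OrderDate;
    -- One common mitigation is to append OPTION (RECOMPILE) to the SELECT,
    -- trading compile cost for a plan tailored to each parameter value.
END;
GO

EXEC dbo.usp_GetOrdersByDate @OrderDate = '01-01-2000'; -- plan compiled and cached for the common value
EXEC dbo.usp_GetOrdersByDate @OrderDate = '10-10-2015'; -- reuses that cached plan, suitable or not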
The likely reason you're seeing a difference in speed between the Job and when you run the command yourself is that when you run it, you're running it ad hoc. The Job isn't - it's running them as stored procs, which means they're going to use different execution plans.
TL;DR:
The Execution Plan that you have saved for the Job is not optimized. However, when you run it manually, you're likely creating an Ad Hoc plan that is optimized for that SPECIFIC run. It's a bit of a hard road to dig into the plan cache and see what's going on but it's 100% worth it. I highly recommend looking up Kimberly Tripp's blog as she has some great posts about this and also some fantastic courses on Pluralsight regarding this.

TSQL Dynamically determine parameter list for SP/Function

I want to write a generic logging snippet into a collection of stored procedures. I'm writing this to have a quantitative measure of our front-end user experience, as I know which SPs are used by the front-end software and how they are used. I'd like to use this to gather a baseline before we commence performance tuning, and afterward to show the outcome of the tuning.
I can dynamically pull the object name from @@PROCID, but I've been unable to determine all parameters passed and their values. Anyone know if this is possible?
EDIT: marking my response as the answer to close this question. It appears extended events are the least intrusive option performance-wise; however, I'm not sure if there is any substantial difference between minimal profiling and extended events. Perhaps something for a rainy day.
I can get the details of the parameters taken by the proc without parsing its text (at least in SQL Server 2005).
select * from INFORMATION_SCHEMA.PARAMETERS
where SPECIFIC_NAME = OBJECT_NAME(@@PROCID)
And I guess that this means that I could, with some appropriately madcap dynamic SQL, also pull out their values.
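As a hedged sketch of that idea (the log table is hypothetical, and note that the catalog only exposes parameter names and types - the values themselves still have to be captured some other way, e.g. by concatenating them in explicitly or via a trace):

-- Placed at the top of a stored procedure; logs the proc name and its
-- declared parameter list. dbo.ProcCallLog is a hypothetical table.
DECLARE @params nvarchar(max);

SELECT @params = STUFF((
    SELECT ', ' + PARAMETER_NAME + ' ' + DATA_TYPE
    FROM INFORMATION_SCHEMA.PARAMETERS
    WHERE SPECIFIC_NAME = OBJECT_NAME(@@PROCID)
    ORDER BY ORDINAL_POSITION
    FOR XML PATH('')), 1, 2, '');

INSERT INTO dbo.ProcCallLog (ProcName, ParameterList, CalledAt)
VALUES (OBJECT_NAME(@@PROCID), @params, GETDATE());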
I don't know how to do this off the top of my head, but I would consider running a trace instead if I were you. You can use SQL Server Profiler to gather only information for the stored procedures that you specify (using filters). You can send the output to a table and then query the results to your heart's content. The output can include IO information, what parameters were passed, the client userid and machine, and much much more.
After running the trace you can aggregate the results into reports that would show how many times a procedure was called, what parameters were used, etc...
Here is a link that might help:
http://msdn.microsoft.com/en-us/library/ms187929.aspx
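If the trace is written to a file, it can be pulled back into a queryable rowset with sys.fn_trace_gettable; a minimal sketch (the path is hypothetical, and the event id should be verified against sys.trace_events):

SELECT
    ObjectName,
    COUNT(*) AS calls,
    AVG(Duration) / 1000 AS avg_duration_ms   -- Duration is reported in microseconds
FROM sys.fn_trace_gettable(N'C:\Traces\ProcWorkload.trc', DEFAULT)
WHERE EventClass = 43                          -- SP:Completed (check sys.trace_events)
GROUP BY ObjectName
ORDER BY SUM(Duration) DESC;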
It appears the best solution for my situation is to run a trace gathering only SP:Starting and SP:Completed and to write some T-SQL to iterate through the data and populate a tracking table.
I personally preferred code generation for this, but politically, where I'm working, they preferred this solution. We lost some granularity in logging, but this is a sufficient solution for my problem.
EDIT: This ended up being an OK solution. Even profiling just these two events degrades performance to a noticeable degree. :( I wish we had an MSFT-provided way to profile a workload that didn't degrade production performance. Oracle has a nice solution to this, but it has its tradeoffs as well. I'd love to see MSFT implement something similar. The new DMVs and extended events help to correlate items. Thanks again for the link Martin.
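For reference, a rough Extended Events equivalent of tracing only SP:Starting/SP:Completed looks something like this (SQL Server 2012 syntax - on 2008 the file target is named package0.asynchronous_file_target; the session name and path are hypothetical):

CREATE EVENT SESSION ProcTiming ON SERVER
ADD EVENT sqlserver.module_start(
    ACTION (sqlserver.database_name, sqlserver.sql_text)),
ADD EVENT sqlserver.module_end(
    ACTION (sqlserver.database_name, sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'C:\Traces\ProcTiming.xel');
GO

ALTER EVENT SESSION ProcTiming ON SERVER STATE = START;
-- Read the captured events later with sys.fn_xe_file_target_read_file.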

Accessing Advantage Management Utility values for feedback

In our report generation application, there are some pretty hefty queries that take a considerable amount of time to run. User feedback up until this point has been basically zip while the server chugs away at their request. I noticed that there's a tab on the ADA Management Utility that shows progress on the query, both as percent complete and estimated seconds remaining. I tried digging through the tables to see if I could find any of this information exposed, as well as picking through the limited documentation available for ADBS, and couldn't find anything useful.
Does anyone know if there's a way I can cull this information outside ADA to provide some needed user feedback?
ADA is getting that information from the sp_GetSQLStatements system procedure.
However, the traditional way of providing progress information for any operation is through a callback function.
This isn't an answer to the question but might be useful in helping reduce the time it takes to run the queries in the report. You may have already done this and made it as optimized as it gets. But if not, you might look at the query plan within Advantage Data Architect to check for optimization issues. In the query window where you run a query, you can choose Show Plan from the SQL menu (or click the button in the toolbar). This will display the execution plan with optimization information that might help identify missing indexes.
Another tool that might be helpful in identifying unoptimized queries is through query logging. It is also discussed here.

How to Get SQL Server Database performance Information?

Are there any standard queries that can be run that will show the performance of a SQL Server 2005 database?
Note: I need to know the performance of every aspect of the database.
EDIT:
I am looking for a way to measure the time it takes for typical queries to execute. I am then going to apply indexing to certain tables in the database and then time how long the same queries take to execute and see if there is a significant difference. Is there an easy way to do this?
Thanks!
For general background research/analysis of SQL Server performance, I prefer to watch how SQL is performing as it is performing. The best tools for that are SQL Profiler and sometimes Windows System Monitor (aka Performance Monitor, aka PerfMon). Alas, neither is particularly simple, let alone a simple query against the system -- though some PerfMon counters are exposed through a few DMVs I can't dig up just now.
BOL has reasonable information on these; a good top-level (online) page for this is here. Be wary: there is serious DBA stuff beyond that point.
There are some dynamic management views and functions built in:
http://msdn.microsoft.com/en-us/library/ms188754(SQL.90).aspx
select * from sys.dm_db_index_usage_stats
select * from sys.dm_os_memory_objects
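Since the goal here is to time the same queries before and after adding indexes, one simple, hedged approach is to wrap the test query in the standard STATISTICS settings (the query itself is a placeholder):

SET STATISTICS TIME ON;   -- prints parse/compile and execution times per statement
SET STATISTICS IO ON;     -- prints logical/physical reads per table

-- run the query under test here, e.g.
-- SELECT ... FROM dbo.SomeTable WHERE SomeColumn = @value;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;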

PostgreSQL performance monitoring tool

I'm setting up a web application with a FreeBSD PostgreSQL back-end. I'm looking for some database performance optimization tool/technique.
Database optimization is usually a combination of two things
Reduce the number of queries to the database
Reduce the amount of data that needs to be looked at to answer queries
Reducing the number of queries is usually done by caching non-volatile/less important data (e.g. "Which users are online?" or "What are the latest posts by this user?") inside the application (if possible) or in an external - more efficient - datastore (memcached, redis, etc.). If you've got information which is very write-heavy (e.g. hit counters) and doesn't need ACID semantics, you can also think about moving it out of the Postgres database into a more efficient data store.
Optimizing query runtime is trickier - it can mean creating special indexes (or indexes in the first place), changing (possibly denormalizing) the data model, or changing the fundamental approach the application takes when working with the database. See for example the "Pagination done the Postgres way" talk by Markus Winand on how to rethink the concept of pagination to make it more database-efficient.
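As a hedged illustration of the keyset ("seek") idea from that talk (the posts table and values are hypothetical):

-- OFFSET pagination reads and discards every skipped row:
SELECT id, title
FROM posts
ORDER BY created_at DESC, id DESC
OFFSET 10000 LIMIT 20;

-- Keyset pagination remembers the last row shown and seeks past it,
-- which an index on (created_at, id) can answer directly:
SELECT id, title
FROM posts
WHERE (created_at, id) < ('2012-01-31 12:00:00', 12345)
ORDER BY created_at DESC, id DESC
LIMIT 20;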
Measuring queries the slow way
But to understand which queries should be looked at first you need to know how often they are executed and how long they run on average.
One approach to this is logging all (or just "slow") queries, including their runtime, and then parsing the query log. A good tool for this is pgfouine, which has already been mentioned earlier in this discussion; it has since been superseded by pgbadger, which is written in a friendlier language, is much faster, and is more actively maintained.
Both pgfouine and pgbadger suffer from the fact that they need query logging enabled, which can cause a noticeable performance hit on the database or get you into disk space trouble - on top of the fact that parsing the log with the tool can take quite some time and won't give you up-to-date insights into what is going on in the database.
Speeding it up with extensions
To address these shortcomings there are now two extensions which track query performance directly in the database - pg_stat_statements (which is only helpful in version 9.2 or newer) and pg_stat_plans. Both extensions offer the same basic functionality - tracking how often a given "normalized query" (the query string minus all expression literals) has been run and how long it took in total. Because the tracking happens while the query actually runs, it is very efficient; the measurable overhead was less than 5% in synthetic benchmarks.
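A minimal pg_stat_statements sketch (this assumes shared_preload_libraries = 'pg_stat_statements' is set in postgresql.conf and the server has been restarted; the column was called total_time in the 9.x releases this answer targets and was later renamed total_exec_time):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top normalized queries by total runtime:
SELECT calls,
       round(total_time::numeric, 1)           AS total_ms,
       round((total_time / calls)::numeric, 1) AS avg_ms,
       query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;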
Making sense of the data
The list of queries itself is very "dry" from an information perspective. There has been work on a third extension, pg_statsinfo (along with pg_stats_reporter), that tries to address this and offer a nicer representation of the data, but it's a bit of an undertaking to get it up and running.
To offer a more convenient solution to this problem I started working on a commercial project which is focused around pg_stat_statements and pg_stat_plans and augments the information collected with lots of other data pulled out of the database. It's called pganalyze and you can find it at https://pganalyze.com/.
To offer a concise overview of interesting tools and projects in the Postgres monitoring area, I also started compiling a list at the Postgres Wiki which is updated regularly.
pgfouine works fairly well for me. And it looks like there's a FreeBSD port for it.
I've used pgtop a little. It is quite crude, but at least I can see which query is running for each process ID.
I tried pgfouine, but if I remember correctly, it's an offline tool.
I also tail the psql.log file and set the logging criteria down to a level where I can see the problem queries.
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this time.
I also use EMS Postgres Manager to do general admin work. It doesn't do anything for you, but it does make most tasks easier and makes reviewing and setting up your schema simpler. I find that when using a GUI, it is much easier for me to spot inconsistencies (like a missing index, field criteria, etc.). It's one of only two programs I'm willing to run VMware on my Mac for.
Munin is quite simple yet effective for getting trends of how the database is evolving and performing over time. In the standard Munin kit you can, among other things, monitor the size of the database, the number of locks, the number of connections, sequential scans, the size of the transaction log, and long-running queries.
It's easy to set up and get started with, and if needed you can write your own plugin quite easily.
Check out the latest postgresql plugins that are shipped with Munin here:
http://munin-monitoring.org/browser/branches/1.4-stable/plugins/node.d/
Well, the first thing to do is try all your queries from psql using "explain" and see if there are sequential scans that can be converted to index scans by adding indexes or rewriting the query.
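For example (the table and predicate are hypothetical):

-- EXPLAIN shows the estimated plan; EXPLAIN ANALYZE actually runs the query
-- and adds real timings and row counts:
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

-- A "Seq Scan on orders" with a selective predicate usually suggests a
-- missing index, e.g.:
-- CREATE INDEX idx_orders_customer_id ON orders (customer_id);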
Other than that, I'm as interested in the answers to this question as you are.
Check out Lightning Admin. It has a GUI for capturing log statements; not perfect, but it works great for most needs. http://www.amsoftwaredesign.com
DBTuna http://www.dbtuna.com/postgresql_monitor.php has recently started supporting PostgreSQL monitoring. We use it extensively for MySQL monitoring, so if it provides the same for Postgres then it should be a good fit for you too.