I've just inherited an old PostgreSQL installation and need to do some diagnostics to find out why this database is running slow. On MS SQL you would use a tool such as Profiler to see what queries are running and then see what their execution plans look like.
What tools, if any, exist for PostgreSQL that I can do this with? I would appreciate any help since I'm quite new to Postgres.
Use the pg_stat_statements extension to find long-running queries. Then run select * from pg_stat_statements order by total_time/calls desc limit 10 to get the ten slowest on average. Then use EXPLAIN to see the plan...
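A minimal sketch of that query, assuming pg_stat_statements is already in shared_preload_libraries and the extension has been created in the database (on PostgreSQL 13+ the column is total_exec_time rather than total_time):

-- Top ten statements by average time per call
SELECT query,
       calls,
       total_time,                   -- total milliseconds spent in this statement
       total_time / calls AS avg_ms  -- average per call
FROM   pg_stat_statements
ORDER  BY total_time / calls DESC
LIMIT  10;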
My general approach is usually a mixture of techniques, none of which require extensions.
Set log_min_duration_statement to catch long-running queries. https://dba.stackexchange.com/questions/62842/log-min-duration-statement-setting-is-ignored should get you started.
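As a sketch, assuming a PostgreSQL version with ALTER SYSTEM (9.4+); on older versions, set the same parameter in postgresql.conf and reload:

-- Log every statement that runs longer than 500 ms (value is in milliseconds)
ALTER SYSTEM SET log_min_duration_statement = 500;
SELECT pg_reload_conf();  -- apply the change without restarting the server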
Use profiling of client applications to see which queries they are spending their time on. Sometimes a query takes very little time per call but is repeated so frequently that it causes performance problems.
Of course, explain analyze can help from there. If you are looking inside plpgsql functions, however, you often need to pull the queries out and run explain analyze on them directly.
Note: ALWAYS run explain analyze in a transaction that rolls back, or in a read-only transaction, unless you know the statement does not write to the database.
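For example, with a hypothetical table, so the write never sticks:

BEGIN;
-- EXPLAIN ANALYZE actually executes the statement, so wrap anything that
-- modifies data in a transaction you can throw away.
EXPLAIN ANALYZE
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
ROLLBACK;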
Related
I have a complex query that joins tables that have large amounts of data. The query times out after the application runs it a few times. The only ways I can get it working again are by restarting SQL Server or running:
DBCC DROPCLEANBUFFERS;
Can someone give me an idea of what things I should be looking into? I am trying to narrow down what needs to be done to fix this. Is there a way to completely disable caching for the query? It seems like the caching is what is making it time out eventually.
TheGameiswar's suggestion of updating statistics is a very good idea. Though, you may want to investigate further even if updating statistics alleviates the timeout issue.
It sounds like you are getting a query plan that is only good for some of the parameters being sent to it, which could be caused by parameter sniffing; particularly with heavily skewed data.
Have you tried adding OPTION (RECOMPILE) to the query, or WITH RECOMPILE if it is a procedure?
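A hedged sketch with hypothetical table and procedure names, just to show where the hints go:

-- Statement-level: compile a fresh plan for this query on every execution.
DECLARE @CustomerID int;
SET @CustomerID = 42;

SELECT OrderID, OrderDate
FROM   dbo.Orders
WHERE  CustomerID = @CustomerID
OPTION (RECOMPILE);
GO

-- Procedure-level: recompile the whole procedure on every call.
ALTER PROCEDURE dbo.GetOrders
    @CustomerID int
WITH RECOMPILE
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM   dbo.Orders
    WHERE  CustomerID = @CustomerID;
END;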
Have you checked the execution plan?
Reference:
Parameter Sniffing, Embedding, and the RECOMPILE Options - Paul White
It seems your query might have out-of-date statistics. Try updating the statistics for all tables involved in the query; this gives SQL Server a good chance of getting the right estimates, which also keeps memory grants down.
If this happens even after updating statistics, try fine-tuning the query.
To update statistics, use the query below. Also try running it with FULLSCAN, though even this might not be needed in all cases:
UPDATE STATISTICS tablename WITH FULLSCAN;
I have a job that runs daily and executes dozens of stored procedures.
Most of them run just fine, but several of them recently started taking a while to run (4-5 minutes).
When I come in in the morning and try to troubleshoot them, they only take 10-20 seconds, just as they are supposed to.
This has been happening for the last 10 days or so. No changes have been made to the server (we are running SQL 2012).
How do I even troubleshoot this, and what can I do to fix it?
Thanks!!
You can use some DMVs (Dynamic Management Views) that SQL provides to investigate the plan cache. However, the results can be a little intimidating and without some background in it, it may be hard to dig through the results. I would recommend looking into some DMVs like sys.dm_exec_query_stats and sys.dm_exec_cached_plans. Kimberly Tripp from SQLSkills.com does some GREAT courses on Pluralsight on how to use these and get some awesome results by building more advanced queries off of those DMVs.
As well, these DMVs will return a plan_handle column which you can pass to another DMV, sys.dm_exec_query_plan(plan_handle), to return the Execution Plan for a specific statement. The hard part is going to be digging through the results of dm_exec_cached_plans to find the specific job/stored procs that are causing issues. sys.dm_exec_sql_text(qs.[sql_handle]) can help by providing a snapshot of the SQL that was run for that job but you'll get the most benefit out of it (in my opinion) by CROSS APPLYing it with some of the other DMVs I mentioned. If you can identify the Job/Proc/Statement and look at the plan, it will likely show you some indication of the parameter sniffing problem that Sean Lange mentioned.
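A hedged starting point that ties those DMVs together (the TOP and the ORDER BY are just one way to slice the cache):

-- Longest-running cached statements on average, with their text and plans
SELECT TOP (20)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       st.[text]     AS statement_text,
       qp.query_plan AS execution_plan
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle])  AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER  BY avg_elapsed_microseconds DESC;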
Just in case: parameter sniffing is when, on the first execution of a query/stored proc, SQL looks at the parameter that you passed in and builds a plan based off of it. The plan that gets generated from that initial compilation of the query/proc will be ideal for the specific parameter that you passed in but might not be ideal for other parameters. Imagine a highly skewed table where all of the dates are '01-01-2000', except one which is '10-10-2015'.
Passing those two parameters in would generate vastly different plans due to data selectivity (read: how unique is the data?). If one of those plans gets saved to cache and called for each subsequent execution, it's possible (and in some cases, likely) that it's not going to be ideal for other parameters.
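A hypothetical sketch of that skew, just to make the mechanism concrete:

-- Hypothetical skewed table: almost every row is '2000-01-01'.
CREATE TABLE dbo.Events (EventDate date NOT NULL, Payload varchar(100));
CREATE INDEX IX_Events_EventDate ON dbo.Events (EventDate);
GO

CREATE PROCEDURE dbo.GetEvents @EventDate date
AS
    SELECT EventDate, Payload
    FROM   dbo.Events
    WHERE  EventDate = @EventDate;
GO

-- First call compiles a plan for the rare, very selective value (likely a seek)...
EXEC dbo.GetEvents @EventDate = '2015-10-10';
-- ...and that cached plan is reused here, where a scan would have been better.
EXEC dbo.GetEvents @EventDate = '2000-01-01';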
The likely reason why you're seeing a difference in speed between the Job and when you run the command yourself is that when you run it, you're running it ad hoc. The Job isn't; it's running them as stored procs, which means they're going to use different execution plans.
TL;DR:
The Execution Plan that you have saved for the Job is not optimized. However, when you run it manually, you're likely creating an Ad Hoc plan that is optimized for that SPECIFIC run. It's a bit of a hard road to dig into the plan cache and see what's going on but it's 100% worth it. I highly recommend looking up Kimberly Tripp's blog as she has some great posts about this and also some fantastic courses on Pluralsight regarding this.
If I am not mistaken, it is not possible to get the Explain Plan for procedures in Toad and Oracle 10g. If this is true, is there any way that I can see the cost of my procedures?
When I make a small change in one of the functions which are called by ProcedureX, the execution time of ProcedureX increases dramatically.
I tried to run each query that exists inside ProcedureX, but it is almost impossible due to the huge number of calls and the parameters that are passed through them.
Do you have any idea?
Thank you
DBMS_PROFILER is probably what you are looking for.
The DBMS_PROFILER package provides an interface to profile existing PL/SQL applications and identify performance bottlenecks. You can then collect and persistently store the PL/SQL profiler data.
The final HTML report it generates is pretty useful in grouping different function calls, so you can see where your procedure is spending most of the time (provided you run it with sufficient data).
Take a look at this link and see if it helps.
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_profil.htm
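A rough sketch of a profiling run, assuming the profiler tables (plsql_profiler_runs, plsql_profiler_units, plsql_profiler_data) have already been created with proftab.sql; ProcedureX stands in for your actual procedure and its parameters:

BEGIN
  DBMS_PROFILER.START_PROFILER('ProcedureX investigation');
  ProcedureX;                   -- call the slow procedure here, with its usual parameters
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Where did the time go, per program unit and line?
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data  d
       ON d.runid = u.runid AND d.unit_number = u.unit_number
ORDER  BY d.total_time DESC;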
A SQL Server query takes 1 second when run in Query Analyzer with a single user. I started the stress tool written by Adam Machanic with the same query and ran it for 200 users; in parallel I ran the same query in Query Analyzer, and this time it took more than 20 seconds.
How do I find which join or WHERE clause is creating the problem in a stress-test situation? What is taking so long?
Thanks,
Ron
It's likely going to be locking and blocking. A starting point is reading this article on the MSDN that gives a sproc you can run (and the output of which is very verbose). Indexes may be one way to sort it out, but without any more information (schema, query, volumes of data, etc) it's unlikely we can provide more.
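As a hedged sketch (this is not the MSDN procedure referenced above, and it assumes SQL Server 2005 or later), you can get a quick view of blocking while the stress test is running:

-- Sessions that are currently blocked, who is blocking them, and what they are running
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.[text] AS running_sql
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) AS t
WHERE  r.blocking_session_id <> 0;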
See this article: Slow in the Application, Fast in SSMS? Understanding Performance Mysteries by Erland Sommarskog; it is the most comprehensive article that I've ever seen on this issue.
I'd bet that it is one of The Default Settings, like QUOTED_IDENTIFIER.
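As a hedged sketch of the idea (the procedure name is hypothetical): reproduce the application's SET options in SSMS before comparing timings, since ARITHABORT is the difference that most often bites:

-- SSMS defaults to ARITHABORT ON; most client libraries run with it OFF,
-- which can land you in a different cached plan. Match the app's settings first.
SET ARITHABORT OFF;
SET QUOTED_IDENTIFIER ON;

EXEC dbo.YourSlowProcedure @SomeParameter = 42;  -- hypothetical procedure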
Are there any standard queries that can be run that will show the performance of a SQL Server 2005 database?
Note: I need to know the performance of every aspect of the database.
EDIT:
I am looking for a way to measure the time it takes for typical queries to execute. I am then going to apply indexing to certain tables in the database and then time how long the same queries take to execute and see if there is a significant difference. Is there an easy way to do this?
Thanks!
(Edited, link hopefully fixed)
For general background research/analysis of SQL Server performance, I prefer to watch how SQL is performing as it is performing. The best tools for that are SQL Profiler and sometimes Windows System Monitor (aka Performance Monitor aka PerfMon). Alas, neither are particularly simple, let alone simple queries against the system -- though some PerfMon counters are exposed through a few DMViews I can't dig up just now.
BOL has reasonable information on these; a good top-level (online) page for this is here. Be wary, there is serious DBA stuff beyond that point.
There are some dynamic management views and functions built in:
http://msdn.microsoft.com/en-us/library/ms188754(SQL.90).aspx
select * from sys.dm_db_index_usage_stats
select * from sys.dm_os_memory_objects
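Complementing the DMVs above, and for the before/after indexing comparison asked about in the edit, a simple hedged approach (the table name is hypothetical) is to turn on per-query statistics in Management Studio:

SET STATISTICS TIME ON;  -- prints parse/compile and execution CPU/elapsed times
SET STATISTICS IO ON;    -- prints logical/physical reads per table

SELECT *                 -- a typical query you want to benchmark
FROM   dbo.Orders
WHERE  CustomerID = 42;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;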