MySQL benchmarking GUI tool - optimization

Hi,
I need to optimize an application that has been around for a long time. The optimization will include moving inline queries from PHP pages into stored procedures, getting rid of subqueries and converting them to joins, etc.
I guess the best way is to use benchmarking tools for this purpose, but is there any GUI-based tool available that I could use on Windows 7? Please help!
Also, will moving the inline queries to stored procedures and getting rid of subqueries give a major performance boost? Please feel free to express your opinion.
The major focus, however, is on finding a suitable tool for benchmarking. Just a quick question: will MySQL Workbench help in this scenario? Please advise.
Many thanks for your time in advance. Any kind of help is much appreciated.

I do not know about moving the inline queries to stored procedures; it really depends on whether you are going to use that query a lot or not. Switching from running many separate queries to a JOIN could be a major improvement in most cases, depending on how many extra queries you were running initially.
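As a rough sketch of that kind of rewrite (the table and column names here are made up for illustration), a correlated subquery can usually be replaced with a join so the data comes back in one pass:

-- correlated subquery: the inner SELECT runs once per row of orders
SELECT o.id,
       (SELECT c.name FROM customers c WHERE c.id = o.customer_id) AS customer_name
FROM orders o;

-- equivalent LEFT JOIN: one pass, and the optimizer can choose a better plan
SELECT o.id, c.name AS customer_name
FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id;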
As for benchmarking, you can always use even phpMyAdmin to see how much time certain queries are taking, and/or use a built-in application benchmarking/profiling tool (even one created from your PHP code) to measure the performance. These things usually do not have a straight and simple answer :(
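If you only need rough numbers, MySQL itself can provide them from phpMyAdmin's SQL tab or the mysql client. A minimal sketch, with a placeholder query of my own (note that SHOW PROFILES is deprecated in newer MySQL versions in favour of the Performance Schema):

SET profiling = 1;                        -- per-statement profiling for this session
SELECT o.id, c.name
FROM orders o JOIN customers c ON c.id = o.customer_id;
SHOW PROFILES;                            -- durations of the statements run so far
SHOW PROFILE FOR QUERY 1;                 -- stage-by-stage breakdown of statement 1
EXPLAIN SELECT o.id, c.name
FROM orders o JOIN customers c ON c.id = o.customer_id;   -- check index usage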
Here is a link with some suggestions: http://beerpla.net/2008/04/16/mysql-conference-liveblogging-benchmarking-tools-wednesday-425pm/ - or maybe try WAST: http://west-wind.com/presentations/webstress/webstress.htm
If you have a DB abstraction layer, use it to log the performance of the queries and see if there are any that repeat too often or take too much time, and also in which script they were called.

Related

Best way to determine bottlenecks in NHibernate?

This is in reference to a question I asked earlier. Aside from viewing the SQL generated by NHibernate on calls to the database, what is the best way to find bottlenecks between NHibernate and the DB? In particular, I have queries that return very quickly when run directly against the database, but have very slow return times (3-4x or more) when running the code in unit tests and on the web page. I am relatively sure this has something to do with the way I have mapped my tables and the primary keys. How can I dig in further to see where my slow areas are? Are there other tools available? I know this is an extremely broad question, but I have not had the need to explore these problems before. Any help would be greatly appreciated.
AFAIK there is no single tool to profile NHibernate yet. This is about to change with Ayende's NHibernate Profiler. In the meantime, you can use a combination of code profilers (e.g. dotTrace), SQL Server Profiler, the NHibernate logger, and static analysis - i.e. if you know about the SELECT N+1 problem, most of the time you can spot it just by looking at the code.
EDIT: NHProf is now available!
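For reference, the SELECT N+1 problem mentioned above is usually easy to spot in the SQL log or in SQL Server Profiler; the pattern (table names invented here) looks like this:

-- one query for the parent rows...
SELECT Id, CustomerId, OrderDate FROM Orders WHERE CustomerId = 42;
-- ...followed by one query per parent row for the children, repeated N times
SELECT Id, Product, Price FROM OrderLines WHERE OrderId = 1;
SELECT Id, Product, Price FROM OrderLines WHERE OrderId = 2;
SELECT Id, Product, Price FROM OrderLines WHERE OrderId = 3;
-- the usual fix is eager fetching / a join so the data comes back in one round trip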
I use JProbe for analyzing my code. It can show the amount of time spent in any particular location in my code, and where the bottlenecks are. I'm sure there are other tools available though, some of which may be cheaper.
For now, use SQL Profiler. Later on you can buy Ayende's NHibernate Profiler. You can also log the SQL and check it.

How to use SQLab Xpert Tuning to tune SQL for better performance?

Does anyone have any idea? And is there any open source software that seems to perform this kind of functionality?
I am not sure what you are asking; it is pretty straightforward. You type in your SQL and SQLab Xpert tries many combinations of rewriting your query and runs them all, selecting the fastest. I find the approach a little dubious: you will probably get something that runs faster than what you originally had, but probably not the fastest possible (unless it is very simple SQL).
I prefer to hand-tune. The Oracle performance manual (http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/toc.htm, Chapters 11-20) has all the information you need, and is in my opinion better than the shotgun approach SQLab Xpert takes.
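To give a feel for the kind of rewrite involved, whether done by the tool or by hand, here is one classic example with made-up HR-style tables; note the two forms are only interchangeable when neither department_id column can be NULL:

-- original: NOT IN with a subquery
SELECT e.employee_id
FROM employees e
WHERE e.department_id NOT IN (SELECT d.department_id
                              FROM departments d
                              WHERE d.location_id = 1700);

-- candidate rewrite: anti-join via NOT EXISTS, often easier for the optimizer
SELECT e.employee_id
FROM employees e
WHERE NOT EXISTS (SELECT 1
                  FROM departments d
                  WHERE d.department_id = e.department_id
                    AND d.location_id = 1700);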

How do I 'refactor' SQL Queries?

I have several MS Access queries (in views and stored procedures) that I am converting to SQL Server 2000 (T-SQL). Due to Access's limitations regarding sub-queries, and/or the limitations of the original developer, many views have been created that function only as sub-queries for other views.
I don't have a clear business requirements spec, except to 'do what the Access application does', and half a page of notes on reports/CSV extracts, but the Access application doesn't even do what I suspect is required properly.
I therefore have to take a bottom-up approach and 'copy' the Access DB to T-SQL, whereas I would normally have a better understanding of requirements and take a top-down approach, creating new queries to satisfy well-defined requirements.
Is there a method I can follow in doing this? Do I spread it all out and spend a few days 'grokking' it, or do I continue just copying the Access views and adopt an evolutionary approach to optimising the querying?
Work out what Access does with the queries, and then use this knowledge to check that you've transferred them properly. Only once you've done this can you think about refactoring. I'd start with the slow queries and go from there: work out what indexes you need and then progressively rewrite. This way you can deliver as soon as you've proved that you moved everything successfully (even if it is potentially a bit slower). That's much better than not being able to deliver at all because problem X came along.
I'd probably start with the Access database, exercise the queries in situ and see what the resultset is. Often you can understand what the query accomplishes and then work back to your own design to accomplish it. (To be thorough, you'll need to understand the intent pretty completely anyway.) And that sounds like the best statement of requirements you're going to get - "Just like it's implemented now."
Other than that, your approach is the best I can think of. Once they are in SQL Server, just start testing and grokking.
When you are dealing with a problem like this it's often helpful to keep things working as they are while you make incremental changes. This is better from a risk management perspective.
I'd concentrate on getting it working, then checking the database performance and optimizing performance problems. Then, as you add features and fix bugs, clean up the code that's hard to maintain. As you said, a sub-query is really very similar to a view. So if it's not broken you may not need to change it.
This depends on your timeline. If you have to get the project running absolutely as soon as possible (I know this is true for EVERY project, but if it's REALLY true for you), then yes, duplicate the functionality and infrastructure from Access then do your refactoring either later or as you go.
If you have SOME time you can dedicate to it, then refactoring it now will give you two things:
You'll be happier with the code, and it will (likely) perform better, since actual analysis was done rather than the transcoding equivalent of a copy-paste
You'll likely gain a greater understanding of what the true business rules are, since you'll almost certainly come across things that aren't in the spec (especially considering how you describe them)
I would recommend copying the views to SQL Server immediately, and then use its sophisticated tools to help you grok them.
For example, SQL Server can tell you which views, stored procedures, etc. rely on a particular view, so you can see whether a view is a one-off or whether it's actually used in more than one place. That will help you determine which views are more important than others.
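On SQL Server 2000 that dependency information is exposed through sp_depends (the view name below is hypothetical); SQL Server 2008 and later offer sys.dm_sql_referencing_entities, which is more reliable:

-- objects this view depends on, and objects that reference it (from sysdepends)
EXEC sp_depends 'dbo.SomeImportedAccessView';

-- SQL Server 2008+: who references this view?
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.SomeImportedAccessView', 'OBJECT');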

PostgreSQL performance monitoring tool

I'm setting up a web application with a FreeBSD PostgreSQL back-end. I'm looking for some database performance optimization tool/technique.
Database optimization is usually a combination of two things
Reduce the number of queries to the database
Reduce the amount of data that needs to be looked at to answer queries
Reducing the amount of queries is usually done by caching non-volatile/less important data (e.g. "Which users are online" or "What are the latest posts by this user?") inside the application (if possible) or in an external - more efficient - datastore (memcached, redis, etc.). If you've got information which is very write-heavy (e.g. hit-counters) and doesn't need ACID-semantics you can also think about moving it out of the Postgres database to more efficient data stores.
Optimizing query runtime is trickier: it can involve creating special indexes (or indexes in the first place), changing (possibly denormalizing) the data model, or changing the fundamental approach the application takes when working with the database. See, for example, the "Pagination done the Postgres way" talk by Markus Winand on how to rethink the concept of pagination to make it more database-efficient.
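A rough sketch of the pagination idea (table and column names invented): the keyset approach replaces ever-growing OFFSETs with a WHERE clause on the last row already seen, which an index on (created_at, id) can serve directly:

-- offset pagination: the database still reads and discards the first 10000 rows
SELECT id, title, created_at
FROM posts
ORDER BY created_at DESC, id DESC
LIMIT 20 OFFSET 10000;

-- keyset ("seek") pagination: continue after the last row of the previous page
SELECT id, title, created_at
FROM posts
WHERE (created_at, id) < ('2013-01-01 12:00:00', 12345)
ORDER BY created_at DESC, id DESC
LIMIT 20;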
Measuring queries the slow way
But to understand which queries should be looked at first you need to know how often they are executed and how long they run on average.
One approach to this is to log all (or "slow") queries, including their runtime, and then parse the query log. A good tool for this is pgfouine, which has already been mentioned earlier in this discussion; it has since been replaced by pgbadger, which is written in a friendlier language, is much faster, and is more actively maintained.
Both pgfouine and pgbadger suffer from the fact that they need query logging enabled, which can cause a noticeable performance hit on the database or get you into disk-space trouble, on top of the fact that parsing the log with the tool can take quite some time and won't give you up-to-date insight into what is going on in the database.
Speeding it up with extensions
To address these shortcomings there are now two extensions that track query performance directly in the database: pg_stat_statements (which is only helpful in version 9.2 or newer) and pg_stat_plans. Both extensions offer the same basic functionality: tracking how often a given "normalized query" (the query string minus all expression literals) has been run and how long it took in total. Because the tracking happens while the query is actually run, it is very efficient; the measurable overhead was less than 5% in synthetic benchmarks.
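A minimal pg_stat_statements setup looks roughly like this (the module also has to be added to shared_preload_libraries in postgresql.conf before it can collect data, and in PostgreSQL 13+ the timing column is named total_exec_time instead of total_time):

-- once per database, after restarting with shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION pg_stat_statements;

-- the ten most expensive normalized queries by total runtime
SELECT calls, total_time, rows, query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;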
Making sense of the data
The list of queries itself is very "dry" from an information perspective. There's been work on a third extension, pg_statsinfo (along with pg_stats_reporter), that tries to address this and offer a nicer representation of the data, but it's a bit of an undertaking to get it up and running.
To offer a more convenient solution to this problem I started working on a commercial project which is focused on pg_stat_statements and pg_stat_plans and augments the collected information with lots of other data pulled out of the database. It's called pganalyze and you can find it at https://pganalyze.com/.
To offer a concise overview of interesting tools and projects in the Postgres monitoring area, I also started compiling a list on the Postgres Wiki which is updated regularly.
pgfouine works fairly well for me. And it looks like there's a FreeBSD port for it.
I've used pgtop a little. It is quite crude, but at least I can see which query is running for each process ID.
I tried pgfouine, but if I remember correctly, it's an offline tool.
I also tail the psql.log file and set the logging criteria down to a level where I can see the problem queries.
#log_min_duration_statement = -1        # -1 is disabled, 0 logs all statements
                                        # and their durations, > 0 logs only
                                        # statements running at least this time.
I also use EMS Postgres Manager to do general admin work. It doesn't do anything for you, but it does make most tasks easier and makes reviewing and setting up your schema simpler. I find that when using a GUI, it is much easier for me to spot inconsistencies (like a missing index, field criteria, etc.). It's one of only two programs I'm willing to run under VMware on my Mac.
Munin is quite simple yet effective for getting trends of how the database is evolving and performing over time. In the standard Munin kit you can, among other things, monitor the size of the database, the number of locks, the number of connections, sequential scans, the size of the transaction log, and long-running queries.
It's easy to set up and get started with, and if needed you can write your own plugin quite easily.
Check out the latest postgresql plugins that are shipped with Munin here:
http://munin-monitoring.org/browser/branches/1.4-stable/plugins/node.d/
Well, the first thing to do is to try all your queries from psql using EXPLAIN and see if there are sequential scans that can be converted to index scans by adding indexes or by rewriting the query.
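A minimal example of that workflow, with made-up table and column names:

EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
-- if the plan shows "Seq Scan on orders", an index may turn it into an index scan
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
-- re-run EXPLAIN ANALYZE to confirm the new plan and the change in runtime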
Other than that, I'm as interested in the answers to this question as you are.
Check out Lightning Admin, it has a GUI for capturing log statements, not perfect but works great for most needs. http://www.amsoftwaredesign.com
DBTuna http://www.dbtuna.com/postgresql_monitor.php has recently started supporting PostgreSQL monitoring. We use it extensively for MySQL monitoring, so if it provides the same for Postgres then it should be a good fit for you too.

SQL With A Safety Net

My firm have a talented and smart operations staff who are working very hard. I'd like to give them a SQL-execution tool that helps them avoid common, easily-detected SQL mistakes that are easy to make when they are in a hurry. Can anyone suggest such a tool? Details follow.
Part of the operations team remit is writing very complex ad-hoc SQL queries. Not surprisingly, operators sometimes make mistakes in the queries they write because they are so busy.
Luckily, their queries are all SELECTs not data-changing SQL, and they are running on a copy of the database anyway. Still, we'd like to prevent errors in the SQL they run. For instance, sometimes the mistakes lead to long-running queries that slow down the duplicate system they're using and inconvenience others until we find the culprit query and kill it. Worse, occasionally the mistakes lead to apparently-correct answers that we don't catch until much later, with consequent embarrassment.
Our developers also make mistakes in complex code that they write, but they have Eclipse and various plugins (such as FindBugs) that catch errors as they type. I'd like to give operators something similar - ideally it would see
SELECT U.NAME, C.NAME FROM USER U, COMPANY C WHERE U.NAME = 'ibell';
and before you executed it, it would say "Hey, did you realise that's a Cartesian product? Are you sure you want to do that?" It doesn't have to be very smart - finding obviously missing join conditions and similar evident errors would be fine.
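For illustration, the fix such a tool would be nudging the operator toward is simply the missing join condition (the join column below is a guess):

SELECT U.NAME, C.NAME
FROM USER U
INNER JOIN COMPANY C ON C.ID = U.COMPANY_ID   -- explicit join condition, no Cartesian product
WHERE U.NAME = 'ibell';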
It looks like TOAD should do this but I can't seem to find anything about such a feature. Are there other tools like TOAD that can provide this kind of semi-intelligent error correction?
Update: I forgot to mention that we're using MySQL.
If your people are using the mysql(1) program to run queries, you can use the safe-updates option (aka i-am-a-dummy) to get you part of what you need. Its name is somewhat misleading; it not only prevents UPDATE and DELETE without a WHERE (which you're not worried about), but also adds an implicit LIMIT 1000 to SELECT statements, and aborts SELECTs that have joins and are estimated to consider over 1,000,000 tuples --- perfect for discouraging Cartesian joins.
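If you would rather not rely on the command-line option, the same limits that --safe-updates applies can be set per session; the numeric values below are just the defaults the option uses:

SET SESSION sql_safe_updates = 1;       -- reject UPDATE/DELETE without a key-based WHERE or LIMIT
SET SESSION sql_select_limit = 1000;    -- implicit LIMIT on SELECT statements
SET SESSION max_join_size = 1000000;    -- abort SELECTs estimated to examine more rows than this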
..."writing very complex ad-hoc SQL queries.... they are so busy"
Danger Will Robinson!
Automate Automate Automate.
Ideally, the ops team should not be put into a position where they have to write queries on the fly in a high-stress situation; it's a recipe for disaster! Better for them to build up a library of pre-written scripts that have undergone appropriate testing to make sure they a) do what you want, b) provide an audit trail, and c) have a possible 'undo' type function.
Failing that, giving them a user ID that only has SELECT permissions might help :-)
You might find SQL Prompt from redgate useful. I'm not sure what database engine you're using, though - it's only for MS SQL Server.
I'm not expecting anything like this to exist. The tool would have to first implement everything that the SQL parser in your database implements, and then it would have to do a data model analysis to predict "bad" queries.
Your best bet might be to write a plugin for a text editor that did some basic checking for suspicious patterns and highlighted them differently than the standard .sql mode. But even that would be quite difficult.
I would be happy with a tool that set off alarm bells whenever I typed in an update statement without a where clause. And perhaps administered a mild electric shock, since it's usually about 1 in the morning after a long day when mistakes like that happen.
It would be pretty easy to build this by setting up a sample database with an extremely small amount of dummy data, which would receive the query first. A couple of things will happen:
You might get a SQL syntax error, which would not load the database much since it's a small database.
You might get back a response which could clearly be shown to contain every row in one or more tables, which is probably not what they want.
Things which pass the above conditions are likely to be okay, so you can run them against the copy of the production database.
Assuming your schema doesn't change much and is not particularly weird, writing the above is likely the quickest solution to your problem.
I'd start with some coding standards - for instance, never use the type of join in your example; it often produces bad results (especially in SQL Server, where trying to do an outer join that way will give wrong results). Require them to write explicit joins.
If you have complex relationships, you might consider putting them in views and then writing the ad-hoc queries against the views. Then at least they will never make the mistake of getting the joins wrong.
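A rough sketch of that idea (names invented): put the join logic into a view once, and let the ad-hoc queries go against the view:

CREATE VIEW customer_orders AS
SELECT c.customer_id, c.name AS customer_name, o.order_id, o.order_date
FROM customers c
INNER JOIN orders o ON o.customer_id = c.customer_id;

-- operators then query the view, so the join condition can no longer be forgotten
SELECT customer_name, COUNT(*) AS orders_placed
FROM customer_orders
WHERE order_date >= '2013-01-01'
GROUP BY customer_name;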
Can't you just limit the amount of time a query can run for? I'm not sure about MySQL, but in SQL Server even the default Query Analyzer can restrict how long queries will run before they time out. Couple that with limited rights so they can only run SELECT queries, and you should be pretty much covered.
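For what it's worth, newer MySQL versions (5.7.8 and later) do let you cap SELECT runtime per session or per statement; a sketch, with big_table as a placeholder:

SET SESSION max_execution_time = 5000;    -- milliseconds, applies to SELECT statements only
SELECT /*+ MAX_EXECUTION_TIME(5000) */ *  -- or as an optimizer hint on a single query
FROM big_table;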