MiniProfiler Ruby: Getting a better breakdown for non-SQL calls - ruby-on-rails-3

I'm trying to profile some of our Rails controllers using Mini Profiler, but I think I'm trying to use it for something it isn't built for. I've got it profiling SQL queries just fine, but I need to break down the non-SQL code, because we're seeing a lot of chug in some pages, but the SQL doesn't seem to be the problem.
Here's a screenshot of what I'm talking about: http://cl.ly/image/2J3i1C1c072O
You can see that the top level (Executing action: show) takes 9136ms to complete, but the queries executed account for only a fraction of that total time. I suppose what I'm asking is whether there's a way to display more "detailed" information about the code being executed, or if I need to find a different tool. New Relic isn't an option, unfortunately.
Thanks for any help.

You can insert custom steps in the areas you think are responsible.
# in your initializer
Rack::MiniProfiler.profile_method SomeClass, "method"

# or, around a suspect region
Rack::MiniProfiler.step "some step" do
  # your code
end
Additionally, you can run ruby-prof to figure out what is going on everywhere, and then strategically instrument.
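To see where custom steps would pay off, it helps to understand what a step timer boils down to. Here is a minimal, hand-rolled sketch of the idea using only the Ruby standard library (TinyProfiler and its names are my own invention; the real gem does much more, including rendering results in its toolbar):

```ruby
# Minimal stand-in for a "step" helper: time a named block with a
# monotonic clock and collect the elapsed milliseconds per step.
# Illustrative only -- use Rack::MiniProfiler.step in a real app.
module TinyProfiler
  def self.timings
    @timings ||= {}
  end

  def self.step(name)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = yield
    elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000.0
    timings[name] = elapsed_ms
    result # pass the block's value through, like the real helper
  end
end

TinyProfiler.step("build view model") do
  sleep 0.05 # stand-in for the slow code you suspect
end
```

With rack-mini-profiler itself you would use `Rack::MiniProfiler.step` as shown above; this sketch only illustrates the measurement.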

Related

TSQL Dynamically determine parameter list for SP/Function

I want to write a generic logging snippet into a collection of stored procedures. I'm writing this to get a quantitative measure of our front-end user experience, since I know which SPs are used by the front-end software and how they are used. I'd like to use this to gather a baseline before we commence performance tuning, and afterward to show the outcome of the tuning.
I can dynamically pull the object name from @@PROCID, but I've been unable to determine all the parameters passed and their values. Does anyone know if this is possible?
EDIT: marking my response as the answer to close this question. It appears extended events are the least intrusive option performance-wise; however, I'm not sure if there is any substantial difference between minimal profiling and extended events. Perhaps something for a rainy day.
I can get the details of the parameters taken by the proc without parsing its text (at least in SQL Server 2005):
select * from INFORMATION_SCHEMA.PARAMETERS
where SPECIFIC_NAME = OBJECT_NAME(@@PROCID)
And I guess that this means that I could, with some appropriately madcap dynamic SQL, also pull out their values.
I don't know how to do this off the top of my head, but I would consider running a trace instead if I were you. You can use SQL Server Profiler to gather only information for the stored procedures that you specify (using filters). You can send the output to a table and then query the results to your heart's content. The output can include IO information, what parameters were passed, the client userid and machine, and much much more.
After running the trace you can aggregate the results into reports that would show how many times a procedure was called, what parameters were used, etc...
Here is a link that might help:
http://msdn.microsoft.com/en-us/library/ms187929.aspx
It appears the best solution for my situation is to profile, gathering only SP:Starting and SP:Completed, and to write some T-SQL to iterate through the data and populate a tracking table.
I personally preferred code generation for this, but politically, where I'm working, they preferred this solution. We lost some granularity in logging, but it is a sufficient solution to my problem.
EDIT: This ended up being an OK solution. Even profiling just these two items degrades performance to a noticeable degree. :( I wish we had an MSFT-provided way to profile a workload that didn't degrade production performance. Oracle has a nice solution to this, but it has its tradeoffs as well. I'd love to see MSFT implement something similar. The new DMVs and extended events help to correlate items. Thanks again for the link Martin.

Accessing Advantage Management Utility values for feedback

In our report generation application, there are some pretty hefty queries that take a considerable amount of time to run. User feedback up until this point has been basically zip while the server chugs away at the request. I noticed that there's a tab in the ADA Management Utility that shows progress on the query, both as percent complete and estimated seconds remaining. I tried digging through the tables to see if I could find any of this information exposed, as well as picking through the limited documentation available for ADBS, and couldn't find anything useful.
Does anyone know if there's a way I can cull this information outside ADA to provide some needed user feedback?
ADA is getting that information from the sp_GetSQLStatements system procedure.
However, the traditional way of providing progress information for any operation is through a callback function.
This isn't an answer to the question but might be useful in helping reduce the time it takes to run the queries in the report. You may have already done this and made it as optimized as it gets. But if not, you might look at the query plan within Advantage Data Architect to check for optimization issues. In the query window where you run a query, you can choose Show Plan from the SQL menu (or click the button in the toolbar). This will display the execution plan with optimization information that might help identify missing indexes.
Another tool that might be helpful in identifying unoptimized queries is through query logging. It is also discussed here.

Is there a better way to debug SQL?

I have worked with SQL for several years now, primarily MySQL/PhpMyAdmin, but also Oracle/iSqlPlus and PL/SQL lately. I have programmed in PHP, Java, ActionScript and more. I realise SQL isn't an imperative programming language like the others - but why do the error messages seem so much less specific in SQL? In other environments I'm pointed straight to the root of the problem. More often than not, MySQL gives me errors like "error AROUND where u.id = ..." and prints the whole query. This is even more difficult with stored procedures, where debugging can be a complete nightmare.
Am I missing a magic tool/language/plugin/setting that gives better error reporting, or are we stuck with this? I want a debugger or language which gives me the same amount of control that Eclipse gives me when setting breakpoints and stepping through the code. Is this possible?
I think the answer lies in the fact that SQL is a set-based language with a few procedural things attached. Since the designers were thinking in set-based terms, they didn't think the ordinary type of debugging other languages have was important. However, I think some of this is changing. You can set breakpoints in SQL Server 2008. I haven't really used it, as you must have SQL Server 2008 databases before it will work and most of ours are still SQL Server 2000. But it is available, and it does allow you to step through things. You are still going to have problems when your select statement is 150 lines long and it knows the syntax isn't right but can't point out exactly where, as it is all one command.
Personally, when I am writing a long procedural SP, I build in a test mode that shows me the results of what I do, the values of key variables at specific points I'm interested in, and print statements that let me know which steps have completed, and then rolls the whole thing back when done. That way I can see what would have happened if it had run for real, without having hurt any of the data in the database if I got it wrong. I find this very useful. It can vastly increase the size of your proc, though. I have a template with most of the structure I need already set up, so it doesn't really take me too long to do. Especially since I never add an insert, update, or delete to a proc without first testing the associated select to ensure I have the records I want.
I think the explanation is that "regular" languages have much smaller individual statements than SQL, so that single-statement granularity points to a much smaller part of the code in them than in SQL. A single SQL statement can be a page or more in length; in other languages it's usually a single line.
I don't think that makes it impossible for debuggers / IDEs to more precisely identify errors, but I suspect it makes it harder.
I agree with your complaint.
Building a good logging framework and overusing it in your sprocs is what works best for me.
Before and after every transaction or important piece of logic, I write out the sproc name, step timestamp and a rowcount (if relevant) to my log table. I find that when I have done this, I can usually narrow down the problem spot within a few minutes.
Add a debug parameter to the sproc (default to "N") and pass it through to any other sprocs that it calls so that you can easily turn logging on or off.
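For illustration, the same toggleable step-logging pattern can be sketched outside T-SQL. A minimal Ruby analogue (StepLogger and every name here are my own invention, not part of any library):

```ruby
# Each step writes procedure name, step label, timestamp, and an
# optional row count to an in-memory log; a debug flag, passed down
# the call chain, turns logging on or off -- mirroring the sproc
# pattern described above. Sketch only; a real version would write
# to a log table or file.
class StepLogger
  attr_reader :rows

  def initialize(debug: false)
    @debug = debug
    @rows  = []
  end

  def log(proc_name, step, rowcount = nil)
    return unless @debug # logging disabled: do nothing
    @rows << [proc_name, step, Time.now.utc, rowcount]
  end
end

logger = StepLogger.new(debug: true)
logger.log("usp_LoadStudents", "before select")
logger.log("usp_LoadStudents", "after select", 42)
```

When the debug flag is off, each call is a cheap early return, so the instrumentation can stay in place permanently.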
As for breakpoints and stepping through code, you can do this with MS SQL Server (in my opinion, it's easier on 2005+ than with 2000).
For the simple cases, early development debugging, the sometimes cryptic messages are usually good enough to get the error resolved -- syntax error, can't do X with Y. If I'm in a tough sproc, I'll revert to "printf debugging" on the sproc text because it's quick and easy. After a while with your database of choice, the simple issues become old hat and you just take them in stride.
However, once the code is released, the complexity of the issues is way too high. I consider myself lucky if I can reproduce them. Also, the places where the developer in me would want a debugger the DBA in me says "no way you're putting a debugger there."
I use the following tactic: while writing the stored procedure, keep a @procStep variable and update it each time a new logical step is executed:
set @procStep = 'What the ... is happening here';
Then, when something fails, the last value of @procStep tells you which step the procedure was in.

Performance metrics on specific routines: any best practices?

I'd like to gather metrics on specific routines of my code to see where I can best optimize. Let's take a simple example and say that I have a "Class" database with multiple "Students." Let's say the current code calls the database for every student instead of grabbing them all at once in a batch. I'd like to see how long each trip to the database takes for every student row.
This is in C#, but I think it applies everywhere. Usually when I get curious as to a specific routine's performance, I'll create a DateTime object before it runs, run the routine, and then create another DateTime object after the call and take the milliseconds difference between the two to see how long it runs. Usually I just output this in the page's trace...so it's a bit lo-fi. Any best practices for this? I thought about being able to put the web app into some "diagnostic" mode and doing verbose logging/event log writing with whatever I'm after, but I wanted to see if the stackoverflow hive mind has a better idea.
For database queries, you have two small problems, both of them caches: the data cache and the statement cache.
If you run the query once, the statement is parsed, prepared, bound and executed. Data is fetched from files into cache.
When you execute the query a second time, the cache is used, and performance is often much, much better.
Which is the "real" performance number? The first one or the second one? Some folks say "worst case" is the real number, and we have to optimize that. Others say "typical case": run the query twice and ignore the first. Others say "average": run it 30 times and average them all. Others say "typical average": run it 31 times and average the last 30.
I suggest that the "last 30 of 31" is the most meaningful DB performance number. Don't sweat the things you can't control (parse, prepare and bind times). Sweat the stuff you can control: data structures, I/O loading, indexes, etc.
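The "last 30 of 31" protocol is easy to script. A sketch in Ruby (`typical_average_ms` is my own name; substitute your real query for the block):

```ruby
# Run a block `runs` times and average all but the first sample,
# discarding the cold-cache run -- the "typical average" described
# above. Returns milliseconds.
def typical_average_ms(runs: 31, &block)
  samples = Array.new(runs) do
    t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    block.call
    (Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0) * 1000.0
  end
  warm = samples.drop(1) # ignore the first, "cold" sample
  warm.sum / warm.size
end

# Stand-in workload; replace with the query you are measuring.
avg = typical_average_ms { 1_000.times { Math.sqrt(rand) } }
```

Using a monotonic clock here matters for the same reason discussed in the answers below: wall-clock time can jump while a sample is in flight.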
I use this method on occasion and find it to be fairly accurate. The problem is that in large applications with a fairly hefty amount of debugging logs, it can be a pain to search through the logs for this information. So I use external tools (I program in Java primarily, and use JProbe) which allow me to see average and total times for my methods, how much time is spent exclusively by a particular method (as opposed to the cumulative time spent by the method and any method it calls), as well as memory and resource allocations.
These tools can assist you in measuring the performance of entire applications, and if you are doing a significant amount of development in an area where performance is important, you may want to research the tools available and learn how to use one.
Sometimes the simple approach you're taking will give you the best look at your application's performance.
One thing I can recommend is to use System.Diagnostics.Stopwatch instead of DateTime:
DateTime is accurate only to about 16 ms, whereas Stopwatch is accurate to the CPU tick.
But I recommend complementing it with custom performance counters for production, and running the app under a profiler during development.
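The Stopwatch-versus-DateTime distinction exists outside .NET as well. As a runnable analogue in Ruby (not .NET code): Time.now reads the adjustable, coarser wall clock, while Process.clock_gettime(Process::CLOCK_MONOTONIC) is the Stopwatch-style monotonic source you want for interval timing:

```ruby
# Time the same workload with two different clock sources.
# A wall clock can be adjusted (NTP, DST) mid-measurement; the
# monotonic clock never goes backwards, so intervals are reliable.
def time_with(clock)
  t0 = clock.call
  50.times { Math.sqrt(rand) } # stand-in workload
  clock.call - t0
end

wall      = time_with(-> { Time.now.to_f })                                  # DateTime-style
monotonic = time_with(-> { Process.clock_gettime(Process::CLOCK_MONOTONIC) }) # Stopwatch-style
```

The two numbers usually agree for short runs; the difference shows up when the system clock is adjusted during a measurement, which is exactly when the wall-clock version silently lies.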
There are some profilers available but, frankly, I think your approach is better. The profiler approach is overkill. Maybe profilers are worth the trouble if you absolutely have no clue where the bottleneck is. I would rather spend a little time analyzing the problem up front and put in a few strategic print statements than figure out how to instrument the app for profiling and then pore over gargantuan reports where every executable line of code is timed.
If you're working with .NET, then I'd recommend checking out the Stopwatch class. The times you get back from that are going to be much more accurate than an equivalent sample using DateTime.
I'd also recommend checking out ANTS Profiler for scenarios in which performance is exceptionally important.
It is worth considering investing in a good commercial profiler, particularly if you ever expect to have to do this a second time.
The one I use, JProfiler, works in the Java world and can attach to an already-running application, so no special instrumentation is required (at least with the more recent JVMs).
It very rapidly builds a sorted list of hotspots in your code, showing which methods your code is spending most of its time inside. It filters pretty intelligently by default, and allows you to tune the filtering further if required, meaning that you can ignore the detail of third party libraries, while picking out those of your methods which are taking all the time.
In addition, you get lots of other useful reports on what your code is doing. It paid for the cost of the licence in the time I saved the first time I used it; I didn't have to add lots of logging statements and construct a mechanism to analyse the output: the developers of the profiler had already done all of that for me.
I'm not associated with ej-technologies in any way other than being a very happy customer.
I use this method and I think it's very accurate.
I think you have a good approach. I recommend that you produce "machine friendly" records in the log file(s) so that you can parse them more easily. Something like CSV or other-delimited records that are consistently structured.
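For example, a machine-friendly CSV timing record takes only a few lines to produce (a sketch; `log_timing` and the column layout are my own choices, illustrated in Ruby):

```ruby
require "csv"
require "stringio"
require "time"

# Write one CSV row per measured routine: UTC timestamp, routine
# name, elapsed milliseconds. Consistent columns make the log easy
# to parse and aggregate later.
def log_timing(io, routine)
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0) * 1000.0
  io << CSV.generate_line([Time.now.utc.iso8601, routine, ms.round(3)])
  result
end

log = StringIO.new # stand-in for a real log file
log_timing(log, "load_students") { sleep 0.01 } # stand-in for the DB trip
```

Because every row has the same shape, totals and averages per routine are a one-liner in any CSV-aware tool.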

What's your favored method for debugging MS SQL stored procedures?

Most of my SPs can simply be executed (and tested) with data entered manually. This works well and using simple PRINT statements allows me to "debug".
There are however cases where more than one stored procedure is involved and finding valid data to input is tedious. It's easier to just trigger things from within my web app.
I have a little experience with the profiler but I haven't found a way to explore what's going on line by line in my stored procedures.
What are your methods?
Thank you, as always.
Note: I'm assuming use of SQL Server 2005+
Profiler is very handy, just add SP:StmtStarting events, and filter the activity down to just your process by setting SPID=xxx. Once you have it set up, it's a breeze to see what's going on.
You can actually attach a debugger to your SQL Server :) - from VS, given you have configured that on your SQL Server.
Check this link for more info, notice you can set break points :) https://web.archive.org/web/20090303135325/http://dbazine.com/sql/sql-articles/cook1.
Check this link for a more general set of info: http://msdn.microsoft.com/en-us/library/zefbf0t6.aspx
Update: Regarding "There are however cases where more than one stored procedure is involved and finding valid data to input is tedious. It's easier to just trigger things from within my web app."
I suggest you set up integration tests, focused on the specific methods that interact with those procedures. If the procedures are being driven by the web app, it is a great place to have valid tests + inputs you can run at any time. If there are multiple apps that interact with the procedures, I would look at unit testing the procedures instead.
I prefer to just use stored procs for dataset retrieval, and do any complex "work" on the application side. Because you are correct, trying to "debug" what's happening inside the guts of a many layered, cursor-looping, temp-table using, nested stored proc is very difficult.
That said, MS KB 316549 describes how to use visual studio to debug stored procs.
According to this article, there are a number of limitations to debugging in this fashion:
You cannot "break" execution.
You cannot "edit and continue."
You cannot change the order of statement execution.
Although you can change the value of variables, your changes may not take effect because the variable values are cached.
Output from the SQL PRINT statement is not displayed.
Edit: Obviously, if you are the person making this stored proc, then don't make it "many layered, cursor-looping, temp-table using, and nested". In my role as a DBA, though, that's pretty much what I encounter daily from the app developers.
This trick is pretty handy:
Custom user configurable Profiler Events
As far as not knowing what the valid inputs would be: you need to test a wide range of inputs, including, especially, invalid inputs. You should define your test cases before you write your procs. Then you have a reproducible set of tests to run every time someone changes the complex process.
My team uses SPs by rule as our interface to the database; we do it in a way that the application user can ONLY execute SPs (with our naming convention).
One best practice that we use, that works well, is that certain test scripts are contained within the SP comments, and must be executed on each rev of an SP, or development of a new SP.
You should always, ALWAYS test the SP as thoroughly as possible without any application layer involved (through Management Studio, for example).
Make sure you step into the main stored proc in VS2005/2008; when it encounters a nested function, hit F11 (Step Into) to enter it and continue debugging. This was not very obvious from the debug menu.
I prefer not to debug, I do test driven development instead, which almost eliminates the need to debug.