I'd like to gather metrics on specific routines of my code to see where I can best optimize. Let's take a simple example and say that I have a "Class" database with multiple "Students." Let's say the current code calls the database for every student instead of grabbing them all at once in a batch. I'd like to see how long each trip to the database takes for every student row.
This is in C#, but I think it applies everywhere. Usually when I get curious as to a specific routine's performance, I'll create a DateTime object before it runs, run the routine, and then create another DateTime object after the call and take the milliseconds difference between the two to see how long it runs. Usually I just output this in the page's trace...so it's a bit lo-fi. Any best practices for this? I thought about being able to put the web app into some "diagnostic" mode and doing verbose logging/event log writing with whatever I'm after, but I wanted to see if the stackoverflow hive mind has a better idea.
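For reference, a minimal sketch of that lo-fi approach; LoadStudent and the student id are hypothetical placeholders, not taken from any real code:

```csharp
using System;
using System.Diagnostics;

class DateTimeTimingSketch
{
    static void Main()
    {
        DateTime start = DateTime.Now;
        var student = LoadStudent(42);      // the per-student database trip being measured
        DateTime end = DateTime.Now;

        // Lo-fi output, e.g. written to the page trace or console
        Trace.WriteLine(string.Format("LoadStudent took {0} ms", (end - start).TotalMilliseconds));
    }

    static object LoadStudent(int studentId)
    {
        // Placeholder for the real per-row database call
        return new { Id = studentId };
    }
}
```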
For database queries, you have two small problems, both related to caching: the data cache and the statement cache.
If you run the query once, the statement is parsed, prepared, bound and executed. Data is fetched from files into cache.
When you execute the query a second time, the cache is used, and performance is often much, much better.
Which is the "real" performance number? First one or second one? Some folks say "worst case" is the real number, and we have to optimize that. Others say "typical case" and run the query twice, ignoring the first one. Others say "average" and run it 30 times, averaging them all. Others say "typical average": run it 31 times and average the last 30.
I suggest that the "last 30 of 31" is the most meaningful DB performance number. Don't sweat the things you can't control (parse, prepare and bind times). Sweat the stuff you can control -- data structures, I/O loading, indexes, etc.
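As a rough illustration of the "last 30 of 31" idea, here is a hedged ADO.NET sketch; the connection string, SQL text and parameter are placeholders:

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Linq;

class WarmCacheBenchmark
{
    static void Main()
    {
        const string connectionString = "...";  // placeholder
        const string sql = "SELECT * FROM Students WHERE ClassId = @classId";  // placeholder

        var timings = new double[31];
        for (int i = 0; i < timings.Length; i++)
        {
            var sw = Stopwatch.StartNew();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@classId", 1);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* consume the rows */ }
                }
            }
            sw.Stop();
            timings[i] = sw.Elapsed.TotalMilliseconds;
        }

        // Ignore the first (cold) run; average the remaining 30 warm runs.
        Console.WriteLine($"Cold run: {timings[0]:F1} ms");
        Console.WriteLine($"Warm average: {timings.Skip(1).Average():F1} ms");
    }
}
```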
I use this method on occasion and find it to be fairly accurate. The problem is that in large applications with a fairly hefty amount of debugging logs, it can be a pain to search through the logs for this information. So I use external tools (I program in Java primarily, and use JProbe) which allow me to see average and total times for my methods, how much time is spent exclusively by a particular method (as opposed to the cumulative time spent by the method and any method it calls), as well as memory and resource allocations.
These tools can assist you in measuring the performance of entire applications, and if you are doing a significant amount of development in an area where performance is important, you may want to research the tools available and learn how to use one.
Sometimes the approach you take will give you the best look at your application's performance.
One thing I can recommend is to use System.Diagnostics.Stopwatch instead of DateTime:
DateTime is accurate only to about 16 ms, whereas Stopwatch is accurate down to the CPU tick.
But I recommend complementing it with custom performance counters in production and running the app under a profiler during development.
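A minimal sketch of swapping DateTime out for Stopwatch; the routine being timed is a stand-in:

```csharp
using System;
using System.Diagnostics;

class StopwatchSketch
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();      // high-resolution timer
        ExpensiveRoutine();                 // placeholder for the code being measured
        sw.Stop();

        Console.WriteLine($"Elapsed: {sw.Elapsed.TotalMilliseconds} ms ({sw.ElapsedTicks} ticks)");
    }

    static void ExpensiveRoutine()
    {
        System.Threading.Thread.Sleep(50);  // stand-in for real work
    }
}
```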
There are some profilers available but, frankly, I think your approach is better. The profiler approach is overkill. Maybe the use of profilers is worth the trouble if you absolutely have no clue where the bottleneck is. I would rather spend a little time analyzing the problem up front and putting in a few strategic print statements than figure out how to instrument your app for profiling and then pore over gargantuan reports where every executable line of code is timed.
If you're working with .NET, then I'd recommend checking out the Stopwatch class. The times you get back from that are going to be much more accurate than an equivalent sample using DateTime.
I'd also recommend checking out ANTS Profiler for scenarios in which performance is exceptionally important.
It is worth considering investing in a good commercial profiler, particularly if you ever expect to have to do this a second time.
The one I use, JProfiler, works in the Java world and can attach to an already-running application, so no special instrumentation is required (at least with the more recent JVMs).
It very rapidly builds a sorted list of hotspots in your code, showing which methods your code is spending most of its time inside. It filters pretty intelligently by default, and allows you to tune the filtering further if required, meaning that you can ignore the detail of third party libraries, while picking out those of your methods which are taking all the time.
In addition, you get lots of other useful reports on what your code is doing. It paid for the cost of the licence in the time I saved the first time I used it; I didn't have to add in lots of logging statements and construct a mechanism to analyse the output: the developers of the profiler had already done all of that for me.
I'm not associated with ej-technologies in any way other than being a very happy customer.
I use this method and I think it's very accurate.
I think you have a good approach. I recommend that you produce "machine friendly" records in the log file(s) so that you can parse them more easily. Something like CSV or other-delimited records that are consistently structured.
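As an illustration only, one way such a machine-friendly CSV record could be written (the field order, file name and routine name are just suggestions):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class CsvTimingLog
{
    // Appends one consistently structured, comma-delimited record per measurement,
    // e.g. "2009-05-04T10:15:30.1234567Z,LoadStudent,12.7"
    static void LogTiming(string routine, TimeSpan elapsed)
    {
        string record = string.Format("{0:o},{1},{2:F1}",
            DateTime.UtcNow, routine, elapsed.TotalMilliseconds);
        File.AppendAllText("timings.csv", record + Environment.NewLine);
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        System.Threading.Thread.Sleep(10);  // stand-in for the routine being measured
        sw.Stop();
        LogTiming("LoadStudent", sw.Elapsed);
    }
}
```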
I am using VBA and I have downloaded a tool called MZ-Tools; it helps me find all the unused variables in all the code. I now have almost 300 objects with roughly 500 lines in each.
Overall it has found almost 500 unused variables/procedures
Would removing these variables speed up the program a lot or would it just be a waste of time to clean up code which doesn't have much effect on the program?
Short answer: It is never a waste of time to clean up code. You or someone else will be so happy when you have to revise it a year later or so.
Longer answer: The application probably won't speed up a lot. At least you probably will not feel a change. This depends on how heavy it already is. It also depends on the kind of objects that are created, and how 'big' and complex they are. If some of those objects run methods every couple of seconds, for example in a loop, they will affect the performance of the application considerably.
More: As a result of cleaning up your application you will get better performance. Whether it is perceptible or not depends on a variety of things. The bigger problem is that you cannot know whether the unused objects will cause errors in the future. Maybe some of them will be discontinued at some point, or they could cause other kinds of unexpected exceptions. This, I think, is the biggest threat.
Have fun going through the code sooner or later!
Based on your question and comments, my impression is your focus is exclusively on execution speed. If that's all you and the team care about for that project, don't invest any time cleaning up those items because I doubt you will notice any runtime performance improvement.
However, I suggest you look beyond only execution speed. How challenging is this project to debug/troubleshoot for the current maintainer(s)? How difficult to add new features, if needed? How about if someone new has to take over responsibility? How much easier would those tasks be without the distractions of unused variables and procedures?
A related consideration is just how much time are we talking about for that cleanup effort? I wonder whether someone has over-estimated the workload.
Make a copy of the db file. From the MZ-Tools code review panel, choose "export" and save the analysis report as a text file. Print the text file. Then move through that printed list, fix each item, and cross it off the list. If you're really slow, you may only average 2 per minute. And for 500 items, that means 250 minutes. But realistically, the task should take less than 4 hours. Running the MZ-Tools code review again will show you if you missed anything. And compiling will tell you whether you removed something by mistake.
For most SP-taught developers, there is no choice between Linq and stored procedures/functions. That may be true.
However, there is a road junction nowadays. Before I spend too much time on the syntax of F#, I would like more input about where the power (and the opposite) of F# lies.
How will F# perform on this topic (against SP)?
F# has to communicate with a database in some way, through a Linq2Sql/Entity application layer or directly through AnyDbConnection. Nothing new there. But F# has the power of parallelism and less overhead in its work (Functional Programming with C#/F#). F# also has its efficiency as a layer between data and machine, much like C#'s power as a layer between human and machine.
Would I really still let the DB server handle a request for recurring nodes, or just fetch plain data into F# and handle it there? Encapsulated nicely and smoothly as an object method call from C#?
Would a stored procedure still be the best option for scanning 50 million records to find orphans, or rows matching a criterion that selects 0.5% of the result?
Would an SP or function still be best for a simple task such as finding the next parent node?
Would an SP still be best for collecting a million records of data and returning calculated sums and/or periods?
Wouldn't a single F# DLL library fully built on the single responsibility principle be of more use than stored procedures hooked up inside a SQL Server? There are pros and cons, of course. But what are they?
Stored procedures are not magically super-fast. Often, they're actually rather slow.
Many people will downvote this answer providing anecdotal evidence that a stored procedure once made an application faster overall. However, all of those examples that I've actually seen code for indicate that they totally rethought some bad SQL to package it as an SP. I submit that the discipline of repackaging bad SQL into a procedure helped more than the SP itself.
Most of your points can't be evaluated without a measured benchmark.
I suggest that you do the following.
Write it in F#.
Measure it.
If it's too slow for your production application, then try some stored procedures to see if it's faster. If it's fast enough for your production application, then you have your answer, F# worked for you. For your application. For your data. For your architecture.
There's no "general" answer, although my benchmarks for some particular kinds of queries indicate that the SP engine is pretty slow compared with Java. F# will probably be faster than the SP engine as well.
The important thing is to make sure that the database -- if it's going to be "pure" data -- is already optimized so that queries like your "scanning 50 million records to find orphans, or rows matching a criterion that selects 0.5% of the result" would retrieve the rows as quickly as possible. This often involves tweaking buffers and array sizes and other elements of the database-to-F# connection. This usually means that you want a more direct connection so that you can adjust the sizes.
Databases are efficient for certain tasks (e.g. when they can use an index to search for a specified row), but probably won't be any faster than F# if you need to process all rows and update them (in the database) or calculate some new result based on all the data.
As S. Lott suggests, the best option is to try implementing what you need in F# and you'll find out. Parallelism can give you some performance benefits, especially if you're doing some computationally heavy calculations. However, you may still want to store the data in databases, load it and process it in F# (I believe this is how F# was used by adCenter at Microsoft).
Possibly the most important note is that databases give you various guarantees about the consistency of the data - no matter what happens, you'll still end up with consistent state. Implementing this yourself may be tricky (e.g. when updating data), but you need to consider whether you need it or not.
You ask this:
Would a stored procedure still be the best option for scanning 50 million records to find orphans, or rows matching a criterion that selects 0.5% of the result?
I take your question to mean 'I have this data in SQL Server. Should I query it in SQL or in client code (F# in this case)?' Queries like this should absolutely be performed in SQL if at all possible. If you're doing it in F#, you're transferring those 50 million rows to the client just to do some aggregation/lookups.
I hope I understood your question correctly.
As I understand it, an SP just means you call some precompiled execution plan, and you can call it through an API instead of pushing a string to the server. These two save on the order of milliseconds, nowhere near a second. For larger queries that difference is negligible. They're good for high-frequency/high-throughput stuff (and of course encapsulating complex logic, but that doesn't seem to apply here).
Because an SP uses a precompiled plan, it can indeed be slower than a normal query because it no longer checks the statistics of the underlying data (because the execution plan is already compiled). Since you mention a condition that applies to 0.5% of the rows, this could be important.
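To make the "call it through an API instead of pushing a string to the server" point concrete, here is a hedged ADO.NET sketch of both call styles; the procedure, table and parameter names are made up:

```csharp
using System.Data;
using System.Data.SqlClient;

class SpVersusInlineSql
{
    static void RunBoth(SqlConnection conn)
    {
        // Calling a stored procedure through the API: only the name and parameters cross the wire.
        using (var sp = new SqlCommand("dbo.FindOrphans", conn))
        {
            sp.CommandType = CommandType.StoredProcedure;
            sp.Parameters.AddWithValue("@matchFraction", 0.005);
            using (var reader = sp.ExecuteReader())
                while (reader.Read()) { /* consume rows */ }
        }

        // Pushing a parameterized SQL string to the server; SQL Server still caches a plan for it.
        using (var text = new SqlCommand(
            "SELECT c.Id FROM Child c LEFT JOIN Parent p ON p.Id = c.ParentId WHERE p.Id IS NULL",
            conn))
        {
            text.CommandType = CommandType.Text;
            using (var reader = text.ExecuteReader())
                while (reader.Read()) { /* consume rows */ }
        }
    }
}
```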
In the discussion of SP vs F# I would reword that to 'on the server' vs 'on the client'. If you're talking higher data volumes (50M rows qualifies) my first choice would always be to 'put the mill where the wood is', which means execute on the server if possible. Only if there is some very complicated logic involved might you want to consider F#, but I don't think that applies here. Even then I'd prefer to execute it on the server rather than first dragging all those rows over the network (potentially slow).
GJ
I have some pretty complex reports to write. Some of them... I'm not sure how I could write an sql query for just one of the values, let alone stuff them in a single query.
Is it common to just pull a crap load of data and figure it all via code instead? Or should I try and find a way to make all the reports rely on sql?
I have a very rich domain model. In fact, parts of the code can be expanded on to calculate exactly what they want. The actual logic is not all that difficult to write - and it's nicer to work with my domain model than with SQL. With SQL, writing the business logic, refactoring it, testing it and putting it in version control is a royal pain because it's separate from your actual code.
For example, one of the statistics they want is the % by which a student improved, especially in relation to other people in the same class, the same school, and compared to other schools. This requires some pretty detailed analysis of how they performed in the past compared to their latest information, as well as doing the same calculation for the comparison groups as a whole. I can't even imagine what the SQL query would look like.
The thing is, this % improvement is not a column in the database - it is a big calculation in and of itself, analyzing all the live data in real time. There is no way to cache this data in a column, because recalculating it for every row it's needed in, every time the student does something, is CRAZY.
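Purely to illustrate the kind of domain-model calculation being described (all types, properties and formulas below are hypothetical simplifications, not the real model):

```csharp
using System.Collections.Generic;
using System.Linq;

class Student
{
    public List<double> Scores { get; } = new List<double>();

    // % improvement from the student's earliest score to the latest one.
    public double ImprovementPercent()
    {
        if (Scores.Count < 2 || Scores.First() == 0) return 0;
        return (Scores.Last() - Scores.First()) / Scores.First() * 100;
    }
}

class SchoolClass
{
    public List<Student> Students { get; } = new List<Student>();

    // Average improvement of the whole class, used as the comparison group.
    public double AverageImprovementPercent() =>
        Students.Count == 0 ? 0 : Students.Average(s => s.ImprovementPercent());
}
```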
I'm a little afraid about pulling out hundreds upon hundreds of records to get these numbers though. I may have to pull out that many just to figure out 1 value for 1 user... and if they want a report for all the users on a single screen, it's going to basically take analyzing the entire database. And that's just 1 column of values of many columns that they want on the report!
Basically, the report they want is a massive performance hog no matter what method I choose to write it.
Anyway, I'd like to ask what kinds of solutions you've used for these kinds of problems.
Sometimes a report can be generated by a single query. Sometimes some procedural code has to be written. And sometimes, even though a single query CAN be used, it's much better/faster/clearer to write a bit of procedural code.
Case in point - another developer at work wrote a report that used a single query. That query was amazing - turned a table sideways, did some amazing summation stuff - and may well have piped the output through hyperspace - truly a work of art. I couldn't have even conceived of doing something like that and learned a lot just from reading through it. Its only problem was that it took 45 minutes to run and brought the system to its knees in the process. I loved that query...but in the end...I admit it - I killed it. ((sob!)) I dismembered it with a chainsaw while humming "Highway To Hell"! I...I wrote a little procedural code to cover my tracks and...nobody noticed. I'd like to say I was sorry, but...in the end the job ran in 30 seconds. Oh, sure, it's easy enough to say "But performance matters, y'know"...but...I loved that query... ((sniffle...)) Anybody seen my chainsaw..? >;->
The point of the above is "Make Things As Simple As You Can, But No Simpler". If you find yourself with a query that covers three pages (I loved that query, but...) maybe it's trying to tell you something. A much simpler query and some procedural code may take up about the same space, page-wise, but could possibly be much easier to understand and maintain.
Share and enjoy.
Sounds like a challenging task you have ahead of you. I don't know all the details, but I think I would go at it from several directions:
Prioritize: You should try to negotiate with the "customer" and prioritize functionality. Chances are not everything is equally useful for them.
Manage expectations: If they have unrealistic expectations then tell them so in a nice way.
IMHO SQL is good in many respects, but it's not a brilliant programming language. So I'd rather just do calculations in the application rather than in the database.
I think I'd go for some delay in the system... perhaps by caching calculated results for a few minutes before recalculating. This is with a mind towards performance.
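One possible sketch of that short-lived caching in .NET, assuming MemoryCache from System.Runtime.Caching; the key format and the calculation itself are placeholders:

```csharp
using System;
using System.Runtime.Caching;

class ReportCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached report value if it is less than five minutes old,
    // otherwise recalculates and caches it again.
    static double GetImprovementReport(int studentId)
    {
        string key = "improvement:" + studentId;
        object cached = Cache.Get(key);
        if (cached != null)
            return (double)cached;

        double value = CalculateImprovement(studentId);   // the expensive calculation
        Cache.Set(key, value, DateTimeOffset.UtcNow.AddMinutes(5));
        return value;
    }

    static double CalculateImprovement(int studentId)
    {
        // Placeholder for the heavy, database-backed calculation
        return 0.0;
    }
}
```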
The short answer: for analysing large quantities of data, a SQL database is probably the best tool around.
However, that does not mean you should analyse this straight off your production database. I suggest you look into Datawarehousing.
For a one-off report, I'll write the code to produce it in whatever I can best reason about it in.
For a report that'll be generated more than once, I'll check on who is going to be producing it the next time. I'll still write the code in whatever I can best reason about it in, but I might add something to make it more attractive to use to that other person.
People usually use a third party report writing system rather than writing SQL. As an application developer, if you're spending a lot of time writing complex reports, I would severely question your manager's actions in NOT buying an off-the-shelf solution and letting less-skilled people build their own reports using some GUI.
In our report generation application, there are some pretty hefty queries that take a considerable amount of time to run. User feedback up until this point has been basically zip while the server chugs away at their request. I noticed that there's a tab on the ADA Management Utility that shows progress on the query both as percent complete and estimated seconds remaining. I tried digging through the tables to see if I could find any of this information exposed, as well as picking through the limited documentation available for ADBS, and couldn't find anything useful.
Does anyone know if there's a way I can cull this information outside ADA to provide some needed user feedback?
ADA is getting that information from the sp_GetSQLStatements system procedure.
However, the traditional way of providing progress information for any operation is through a callback function.
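In .NET terms such a callback might look roughly like the sketch below; this uses the general-purpose IProgress&lt;T&gt; pattern and is not specific to the Advantage client API (the percentages here are simulated):

```csharp
using System;
using System.Threading.Tasks;

class ProgressReportingQuery
{
    // The long-running query reports its estimated progress through a callback.
    static async Task RunLongQueryAsync(IProgress<int> progress)
    {
        for (int percent = 0; percent <= 100; percent += 10)
        {
            await Task.Delay(500);          // stand-in for a chunk of query work
            progress.Report(percent);       // invoke the caller's callback
        }
    }

    static async Task Main()
    {
        var progress = new Progress<int>(p => Console.WriteLine($"Query {p}% complete"));
        await RunLongQueryAsync(progress);
    }
}
```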
This isn't an answer to the question but might be useful in helping reduce the time it takes to run the queries in the report. You may have already done this and made it as optimized as it gets. But if not, you might look at the query plan within Advantage Data Architect to check for optimization issues. In the query window where you run a query, you can choose Show Plan from the SQL menu (or click the button in the toolbar). This will display the execution plan with optimization information that might help identify missing indexes.
Another technique that might be helpful in identifying unoptimized queries is query logging. It is also discussed here.
I'm setting up a web application with a FreeBSD PostgreSQL back-end. I'm looking for some database performance optimization tool/technique.
Database optimization is usually a combination of two things
Reduce the number of queries to the database
Reduce the amount of data that needs to be looked at to answer queries
Reducing the number of queries is usually done by caching non-volatile/less important data (e.g. "Which users are online" or "What are the latest posts by this user?") inside the application (if possible) or in an external - more efficient - datastore (memcached, redis, etc.). If you've got information which is very write-heavy (e.g. hit counters) and doesn't need ACID semantics, you can also think about moving it out of the Postgres database to a more efficient data store.
Optimizing the query runtime is more tricky - this can amount to creating special indexes (or indexes in the first place), changing (possibly denormalizing) the data model or changing the fundamental approach the application takes when it comes to working with the database. See for example the Pagination done the Postgres way talk by Markus Winand on how to rethink the concept of pagination to make it more database efficient
Measuring queries the slow way
But to understand which queries should be looked at first you need to know how often they are executed and how long they run on average.
One approach to this is logging all (or "slow") queries including their runtime and then parsing the query log. A good tool for this is pgfouine, which has already been mentioned earlier in this discussion; it has since been replaced by pgbadger, which is written in a more friendly language, is much faster and is more actively maintained.
Both pgfouine and pgbadger suffer from the fact that they need query logging enabled, which can cause a noticeable performance hit on the database or get you into disk space trouble, on top of the fact that parsing the log with the tool can take quite some time and won't give you up-to-date insights into what is going on in the database.
Speeding it up with extensions
To address these shortcomings there are now two extensions which track query performance directly in the database - pg_stat_statements (which is only helpful in version 9.2 or newer) and pg_stat_plans. Both extensions offer the same basic functionality - tracking how often a given "normalized query" (the query string minus all expression literals) has been run and how long it took in total. Because this tracking happens while the query is actually run, it is done in a very efficient manner; the measurable overhead was less than 5% in synthetic benchmarks.
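As an example of how an application might read that data back out, here is a hedged C#/Npgsql sketch; the connection string is a placeholder and the column names follow the 9.2-era pg_stat_statements layout (newer versions rename total_time to total_exec_time):

```csharp
using System;
using Npgsql;

class TopQueries
{
    static void Main()
    {
        // Connection string is a placeholder.
        using (var conn = new NpgsqlConnection("Host=localhost;Database=app;Username=postgres"))
        {
            conn.Open();
            // query, calls and total_time as exposed by pg_stat_statements around 9.2
            using (var cmd = new NpgsqlCommand(
                "SELECT query, calls, total_time FROM pg_stat_statements ORDER BY total_time DESC LIMIT 10",
                conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine($"{reader.GetDouble(2):F1} ms over {reader.GetInt64(1)} calls: {reader.GetString(0)}");
            }
        }
    }
}
```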
Making sense of the data
The list of queries itself is very "dry" from an information perspective. There's been work on a third extension trying to address this fact and offer nicer representation of the data called pg_statsinfo (along with pg_stats_reporter), but it's a bit of an undertaking to get it up and running.
To offer a more convenient solution to this problem I started working on a commercial project which is focussed around pg_stat_statements and pg_stat_plans and augments the information collected by lots of other data pulled out of the database. It's called pganalyze and you can find it at https://pganalyze.com/.
To offer a concise overview of interesting tools and projects in the Postgres monitoring area, I also started compiling a list at the Postgres Wiki which is updated regularly.
pgfouine works fairly well for me. And it looks like there's a FreeBSD port for it.
I've used pgtop a little. It is quite crude, but at least I can see which query is running for each process ID.
I tried pgfouine, but if I remember, it's an offline tool.
I also tail the psql.log file and set the logging criteria down to a level where I can see the problem queries.
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this time.
I also use EMS Postgres Manager to do general admin work. It doesn't do anything for you, but it does make most tasks easier and makes reviewing and setting up your schema simpler. I find that when using a GUI, it is much easier for me to spot inconsistencies (like a missing index, field criteria, etc.). It's one of only two programs for which I'm willing to use VMWare on my Mac.
Munin is quite simple yet effective for getting trends of how the database is evolving and performing over time. In the standard Munin kit you can, among other things, monitor the size of the database, the number of locks, the number of connections, sequential scans, the size of the transaction log and long-running queries.
Easy to set up and get started with, and if needed you can write your own plugin quite easily.
Check out the latest postgresql plugins that are shipped with Munin here:
http://munin-monitoring.org/browser/branches/1.4-stable/plugins/node.d/
Well, the first thing to do is try all your queries from psql using "explain" and see if there are sequential scans that can be converted to index scans by adding indexes or rewriting the query.
Other than that, I'm as interested in the answers to this question as you are.
Check out Lightning Admin, it has a GUI for capturing log statements, not perfect but works great for most needs. http://www.amsoftwaredesign.com
DBTuna http://www.dbtuna.com/postgresql_monitor.php has recently started supporting PostgreSQL monitoring. We use it extensively for MySQL monitoring, so if it provides the same for Postgres then it should be a good fit for you too.