Performance gains in stored procs for long running transactions - sql

I have several long-running report-type transactions that take 5-10 minutes. Would I see any performance increase by using stored procs? Would it be significant?
Each query runs once a night.

Probably not. Stored procs give you the advantage of pre-compiled SQL. If your SQL is invoked infrequently, this advantage will be pretty much worthless. So if you have SQL that is expensive because the queries themselves are expensive, then stored procs will gain you no meaningful performance advantage. If you have queries that are invoked very frequently and which themselves execute quickly, then it's worth having a proc.

Most likely not. The performance gains from stored procs, if any (it depends on your use case), are the kind that are unnoticeable in the micro -- only in the macro.
Reporting-type queries are ones that aggregate LOTS of data, and if that's the case they'll be slow no matter what the execution method is. Only indexing and/or other physical data changes can make them faster.
See:
Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS's?

The short answer is: no, stored procedures aren't going to improve the performance.
For a start, if you are using parameterised queries there is no difference in performance between a stored procedure and inline SQL. The reason is that ALL queries have cached execution plans - not just stored procedures.
Have a look at http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx
If you aren't parameterising your inline queries and you're just building the query up and inserting the 'parameters' as literals then each query will look different to the database and it will need to pre-compile each one. So in this case, you would be doing yourself a favour by using parameters in your inline SQL. And you should do this anyway from a security perspective, otherwise you are opening yourself up to SQL injection attacks.
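For illustration, here is a minimal sketch in T-SQL of that difference, using sp_executesql (the table and column names are hypothetical):

-- Bad: the 'parameter' is embedded as a literal, so every distinct value
-- looks like a brand-new query to the database (and invites SQL injection):
-- 'SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = ' + @userInput

-- Better: a parameterised statement; one cached plan serves every value.
EXEC sp_executesql
    N'SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = @cust',
    N'@cust INT',
    @cust = 42;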
But anyway, the pre-compilation issue is a red herring here. You are talking about long-running queries - so long that the pre-compilation time is going to be insignificant. So unfortunately, you aren't going to get off easily here. Your solution is going to be to optimise the actual design of your queries, or even to rethink the whole way you are approaching the task.

Yes, the query plan for stored procs can be optimized, and even if it can't, procs are preferred over embedded SQL.
"Would you see any performance improvement?" - the only way to know for certain is to try it.
In theory, stored procedures pre-parse the SQL and store the query plan instead of figuring it out each time, so there should be some speedup just from that; however, I doubt it would be significant in a 5-10 minute process.
If speed is a concern, your best bet is to look at the query plan and see if it can be improved with different query structures and/or by adding indexes, etc. The sketch below shows one way to start measuring.
If speed is not a concern, stored procs still provide better encapsulation than inline SQL.
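If you do decide to investigate, a simple starting point in SQL Server (assuming that's your platform) is to turn on timing and I/O statistics around the query in question; the query below is a hypothetical stand-in for your report:

-- Report elapsed time and logical reads for the slow query.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT Region, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY Region;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;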

As others have said, you won't see much performance gain from the stored procedure being pre-compiled. However, if your current transactions have multiple statements, with data going back and forth between the server, then wrapping it in a stored procedure could eliminate some of that back-and-forth, which can be a real performance killer.
Look into proper indexing, but also consider the fact that the queries themselves (or the whole process if it consists of multiple steps) might be inefficient. Without seeing your actual code it's hard to say.

Related

Stored procedure select VS select from external connection

I am trying to find the pros and cons of using stored procedures instead of SQL queries from an external connection, but I am unable to find any direct comparison.
What is the benefit of using stored procedures instead of SQL queries from an external connection?
Are there any execution speed differences between them for small-volume and big-volume outputs?
Are there any benefits for database management as well?
What is the benefit of using stored procedures instead of SQL queries from an external connection?
Stored Procedures can be complex. Very complex. They can do things that a single SQL query cannot do (EXECUTE BLOCK aside).
They have their own set of grants, so they can do things that the current user cannot do at all.
The Firebird optimizer is not that bad, but obviously complex queries require more time to optimize, and the result may still be suboptimal. Using an imperative language, the programmer can split a complex query into a set of simpler ones, making the data access paths more predictable.
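As a rough sketch of that idea in T-SQL (Firebird PSQL would use a similar staging approach; all names here are hypothetical), an intermediate result gives each step a simpler, more predictable plan:

CREATE PROCEDURE dbo.usp_MonthlySalesReport
AS
BEGIN
    -- Step 1: stage an intermediate aggregate instead of one giant join.
    SELECT o.CustomerId, SUM(o.Amount) AS Total
    INTO #CustomerTotals
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= DATEADD(MONTH, -1, GETDATE())
    GROUP BY o.CustomerId;

    -- Step 2: a simple join against the small staged result.
    SELECT c.Name, t.Total
    FROM #CustomerTotals AS t
    JOIN dbo.Customers AS c ON c.CustomerId = t.CustomerId;
END;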
Are there any execution speed differences between them for small-volume and big-volume outputs?
No.
Are there any benefits for database management as well?
It depends on what you call "database management" and what benefits you have in mind. Most likely - no.
What is the benefit of using stored procedures instead of SQL queries from an external connection?
One benefit, in terms of execution, is stored procedures store their query plan whereas dynamic sql query plans will not be stored and must be calculated each time the query is executed.
Are there any execution speed differences between them for small-volume and big-volume outputs?
Once the query plan is calculated, no, there is no speed difference.
Are there any benefits for database management as well?
This is very subjective! In the past I worked at a place where ALL database access went through stored procedures so that they could lock down access to just the SPs. Other places I've worked didn't use stored procs at all because they are generally outside source control and problematic for developers who aren't SQL gurus. Also, business logic spread across multiple systems can become a real problem.

Does the size of a stored procedure affect its execution performance?

Does the size of a stored procedure affect its execution performance?
Is it better to have one large SP that does the whole process, or to split it into multiple SPs, with regard to performance?
Let me paraphrase: "Does the size of my function affect its execution performance?"
The obvious answer is: No. The function will run as fast as it possibly can on the hardware it happens to run on. (To be clear: a longer function will take longer to execute, of course. But it will not run slower. Therefore, the performance is unaffected.)
The right question is: "Does the way I write my function affect its execution performance?"
The answer is: Yes, by all means.
If you are in fact asking the second question, you should add a code sample and be more specific.
No, not really - or not much, anyway. The stored proc is precompiled on the server, and it's not being sent back and forth between server and client, so its size is really not all that relevant.
It's more important to have it set up in a maintainable and easy to read way.
Marc
Not sure what you mean by the SIZE of a stored procedure (lines of code? complexity? number of tables? number of joins?), but the execution performance depends entirely upon the execution plan of the SQL defined and compiled within the stored procedure. This can be monitored quite well through SQL Profiler if you are using SQL Server. Performance is most heavily taxed by things like joins and table scans, and a good tool can help you figure out where to place your indexes, or think of better ways to define the SQL. Hope this helps.
You could possibly cause worse performance by coding multiple stored procedures, if the execution plans need to be flushed to reclaim local memory and a single procedure would not.
We have hit situations where a flushed stored procedure is needed again and must be recompiled. When the procedure queries a view accessing hundreds of partition tables, this can be costly and has caused timeouts in our production environment. Combining eight procedures into two solved this problem.
On the other hand, we had one stored procedure that was so complex that breaking it up into multiples allowed the query execution plan to be simpler for the chunks and performed better.
The other answers that are basically "it depends" are dead on. No matter how fast a DB you have, a bad query can bring it to its knees. And each situation is unique. In most places, coding in a modular and easily understandable way performs better and is cheaper to maintain. SQL Server has to "understand" it too, as it builds the query plans.

Rule of thumb on when to use WITH RECOMPILE option

I understand that the WITH RECOMPILE option forces the optimizer to rebuild the query plan for stored procs but when would you want that to happen?
What are some rules of thumb on when to use the WITH RECOMPILE option and when not to?
What's the effective overhead associated with just putting it on every sproc?
As others have said, you don't want to simply include WITH RECOMPILE in every stored proc as a matter of habit. By doing so, you'd be eliminating one of the primary benefits of stored procedures: the fact that it saves the query plan.
Why is that potentially a big deal? Computing a query plan is a lot more intensive than compiling regular procedural code. Because the syntax of a SQL statement only specifies what you want, and not (generally) how to get it, that allows the database a wide degree of flexibility when creating the physical plan (that is, the step-by-step instructions to actually gather and modify data). There are lots of "tricks" the database query pre-processor can do and choices it can make - what order to join the tables, which indexes to use, whether to apply WHERE clauses before or after joins, etc.
For a simple SELECT statement, it might not make a difference, but for any non-trivial query, the database is going to spend some serious time (measured in milliseconds, as opposed to the usual microseconds) to come up with an optimal plan. For really complex queries, it can't even guarantee an optimal plan; it just has to use heuristics to come up with a pretty good plan. So by forcing it to recompile every time, you're telling it that it has to go through that process over and over again, even if the plan it got before was perfectly good.
Depending on the vendor, there should be automatic triggers for recompiling query plans - for example, if the statistics on a table change significantly (like, the histogram of values in a certain column starts out evenly distributed but over time becomes highly skewed), then the DB should notice that and recompile the plan. But generally speaking, the implementers of a database are going to be smarter about that on the whole than you are.
As with anything performance related, don't take shots in the dark; figure out where the bottlenecks are that are costing 90% of your performance, and solve them first.
Putting it on every stored procedure is NOT a good idea, because compiling a query plan is a relatively expensive operation and you will not see any benefit from the query plans being cached and re-used.
The case of a dynamic where clause built up inside a stored procedure can be handled using sp_executesql to execute the TSQL rather than adding WITH RECOMPILE to the stored procedure.
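As a sketch of that approach (names hypothetical), the query text varies only in shape while the values stay as parameters, so each shape gets its own cached plan:

CREATE PROCEDURE dbo.usp_SearchOrders
    @CustomerId INT = NULL,
    @MinAmount MONEY = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT OrderId, Amount FROM dbo.Orders WHERE 1 = 1';

    IF @CustomerId IS NOT NULL
        SET @sql += N' AND CustomerId = @CustomerId';
    IF @MinAmount IS NOT NULL
        SET @sql += N' AND Amount >= @MinAmount';

    -- sp_executesql keeps the values as parameters, unlike EXEC(@sql).
    EXEC sp_executesql @sql,
        N'@CustomerId INT, @MinAmount MONEY',
        @CustomerId = @CustomerId,
        @MinAmount = @MinAmount;
END;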
Another solution (SQL Server 2005 onwards) is to hint specific parameter values using the OPTIMIZE FOR hint. This works well if the values in the rows are static.
SQL Server 2008 has introduced a little known feature called "OPTIMIZE FOR UNKNOWN":
This hint directs the query optimizer to use the standard algorithms it has always used if no parameter values had been passed to the query at all. In this case the optimizer will look at all available statistical data to reach a determination of what the values of the local variables used to generate the query plan should be, instead of looking at the specific parameter values that were passed to the query by the application.
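To illustrate both hints, here is a minimal sketch (procedure and table names hypothetical):

CREATE PROCEDURE dbo.usp_OrdersByStatus
    @Status CHAR(1)
AS
BEGIN
    SELECT OrderId, Amount
    FROM dbo.Orders
    WHERE Status = @Status
    -- Pin the plan to a known-representative value:
    OPTION (OPTIMIZE FOR (@Status = 'M'));
    -- Or, on SQL Server 2008+, plan from average statistics instead
    -- of any sniffed value:
    -- OPTION (OPTIMIZE FOR UNKNOWN);
END;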
Generally, a much better alternative to WITH RECOMPILE is OPTION (RECOMPILE).
As you can see in the explanation below, taken from the answer to this question here:
When a parameter-sensitivity problem is encountered, a common piece of advice on forums and Q&A sites is to "use recompile" (assuming the other tuning options presented earlier are unsuitable). Unfortunately, that advice is often misinterpreted to mean adding WITH RECOMPILE option to the stored procedure.
Using WITH RECOMPILE effectively returns us to SQL Server 2000 behaviour, where the entire stored procedure is recompiled on every execution. A better alternative, on SQL Server 2005 and later, is to use the OPTION (RECOMPILE) query hint on just the statement that suffers from the parameter-sniffing problem. This query hint results in a recompilation of the problematic statement only; execution plans for other statements within the stored procedure are cached and reused as normal.
Using WITH RECOMPILE also means the compiled plan for the stored procedure is not cached. As a result, no performance information is maintained in DMVs such as sys.dm_exec_query_stats. Using the query hint instead means that a compiled plan can be cached, and performance information is available in the DMVs (though it is limited to the most recent execution, for the affected statement only).
For instances running at least SQL Server 2008 build 2746 (Service Pack 1 with Cumulative Update 5), using OPTION (RECOMPILE) has another significant advantage over WITH RECOMPILE: only OPTION (RECOMPILE) enables the Parameter Embedding Optimization.
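To make the distinction concrete, a minimal sketch (all names hypothetical):

-- WITH RECOMPILE: the WHOLE procedure recompiles on every execution,
-- and its plan is never cached:
CREATE PROCEDURE dbo.usp_CustomerReport @CustomerId INT
WITH RECOMPILE
AS
    SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = @CustomerId;
GO

-- OPTION (RECOMPILE): only the parameter-sensitive statement recompiles;
-- plans for the rest of the procedure are cached and reused as normal:
CREATE PROCEDURE dbo.usp_CustomerReport2 @CustomerId INT
AS
    SELECT OrderId, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);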
The most common use is when you might have a dynamic WHERE clause in a procedure...you wouldn't want that particular query plan to get compiled and saved for subsequent executions because it very well might not be the exact same clause the next time the procedure is called.
It should only be used when testing with representative data and context demonstrates that doing without it produces invalid query plans (whatever the possible reasons might be). Don't assume beforehand (without testing) that an SP won't optimize properly.
Sole exception, for manual invocation only (i.e. don't code it into the SP): when you know that you've substantially altered the character of the target tables, e.g. TRUNCATE, bulk loads, etc.
It's yet another opportunity for premature optimization.

Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS's? [duplicate]

Conventional wisdom states that stored procedures are always faster. So, since they're always faster, use them ALL THE TIME.
I am pretty sure this is grounded in some historical context where this was once the case. Now, I'm not advocating that Stored Procs are not needed, but I want to know in what cases stored procedures are necessary in modern databases such as MySQL, SQL Server, Oracle, or <Insert_your_DB_here>. Is it overkill to have ALL access through stored procedures?
NOTE that this is a general look at stored procedures, not restricted to a specific DBMS. Some DBMSs (and even different versions of the same DBMS!) may operate contrary to this, so you'll want to double-check with your target DBMS before assuming all of this still holds.
I've been a Sybase ASE, MySQL, and SQL Server DBA on and off for almost a decade (along with doing application development in C, PHP, PL/SQL, C#.NET, and Ruby). So, I have no particular axe to grind in this (sometimes) holy war.
The historical performance benefits of stored procs have generally come from the following (in no particular order):
Pre-parsed SQL
Pre-generated query execution plan
Reduced network latency
Potential cache benefits
Pre-parsed SQL -- similar benefits to compiled vs. interpreted code, except on a very micro level.
Still an advantage?
Not very noticeable at all on the modern CPU, but if you are sending a single SQL statement that is VERY large eleventy-billion times a second, the parsing overhead can add up.
Pre-generated query execution plan.
If you have many JOINs the permutations can grow quite unmanageable (modern optimizers have limits and cut-offs for performance reasons). It is not unknown for very complicated SQL to have distinct, measurable (I've seen a complicated query take 10+ seconds just to generate a plan, before we tweaked the DBMS) latencies due to the optimizer trying to figure out the "near best" execution plan. Stored procedures will, generally, store this in memory so you can avoid this overhead.
Still an advantage?
Most DBMSs (the latest editions) will cache the query plans for INDIVIDUAL SQL statements, greatly reducing the performance differential between stored procs and ad hoc SQL. There are some caveats and cases in which this isn't the case, so you'll need to test on your target DBMS.
Also, more and more DBMS allow you to provide optimizer path plans (abstract query plans) to significantly reduce optimization time (for both ad hoc and stored procedure SQL!!).
WARNING Cached query plans are not a performance panacea. Occasionally the query plan that is generated is sub-optimal.
For example, if you send SELECT * FROM table WHERE id BETWEEN 1 AND 99999999, the DBMS may select a full-table scan instead of an index scan because you're grabbing every row in the table (so sayeth the statistics). If this is the cached version, then you can get poor performance when you later send SELECT * FROM table WHERE id BETWEEN 1 AND 2. The reasoning behind this is outside the scope of this posting, but for further reading see:
http://www.microsoft.com/technet/prodtechnol/sql/2005/frcqupln.mspx
http://msdn.microsoft.com/en-us/library/ms181055.aspx
http://www.simple-talk.com/sql/performance/execution-plan-basics/
"In summary, they determined that
supplying anything other than the
common values when a compile or
recompile was performed resulted in
the optimizer compiling and caching
the query plan for that particular
value. Yet, when that query plan was
reused for subsequent executions of
the same query for the common values
(‘M’, ‘R’, or ‘T’), it resulted in
sub-optimal performance. This
sub-optimal performance problem
existed until the query was
recompiled. At that point, based on
the #P1 parameter value supplied, the
query might or might not have a
performance problem."
Reduced network latency
A) If you are running the same SQL over and over -- and the SQL adds up to many KB of code -- replacing that with a simple "exec foobar" can really add up.
B) Stored procs can be used to move procedural code into the DBMS. This saves shuffling large amounts of data off to the client only to have it send a trickle of info back (or none at all!). Analogous to doing a JOIN in the DBMS vs. in your code (everyone's favorite WTF!)
Still an advantage?
A) Modern 1Gb (and 10Gb and up!) Ethernet really make this negligible.
B) Depends on how saturated your network is -- why shove several megabytes of data back and forth for no good reason?
Potential cache benefits
Performing server-side transforms of data can potentially be faster if you have sufficient memory on the DBMS and the data you need is in memory of the server.
Still an advantage?
Unless your app has shared memory access to DBMS data, the edge will always be to stored procs.
Of course, no discussion of Stored Procedure optimization would be complete without a discussion of parameterized and ad hoc SQL.
Parameterized / Prepared SQL
Kind of a cross between stored procedures and ad hoc SQL, they are embedded SQL statements in a host language that uses "parameters" for query values, e.g.:
SELECT .. FROM yourtable WHERE foo = ? AND bar = ?
These provide a more generalized version of a query that modern-day optimizers can use to cache (and re-use) the query execution plan, resulting in much of the performance benefit of stored procedures.
Ad Hoc SQL
Just open a console window to your DBMS and type in a SQL statement. In the past, these were the "worst" performers (on average) since the DBMS had no way of pre-optimizing the queries as in the parameterized/stored proc method.
Still a disadvantage?
Not necessarily. Most DBMS have the ability to "abstract" ad hoc SQL into parameterized versions -- thus more or less negating the difference between the two. Some do this implicitly or must be enabled with a command setting (SQL server: http://msdn.microsoft.com/en-us/library/ms175037.aspx , Oracle: http://www.praetoriate.com/oracle_tips_cursor_sharing.htm).
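On SQL Server, for instance, that setting is database-level forced parameterization (database name hypothetical):

-- Auto-parameterize ad hoc statements so differing literal values
-- no longer produce distinct plans for the same query shape:
ALTER DATABASE MyAppDb SET PARAMETERIZATION FORCED;

-- Revert to the default behavior:
ALTER DATABASE MyAppDb SET PARAMETERIZATION SIMPLE;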
Lessons learned?
Moore's law continues to march on and DBMS optimizers, with every release, get more sophisticated. Sure, you can place every single silly teeny SQL statement inside a stored proc, but just know that the programmers working on optimizers are very smart and are continually looking for ways to improve performance. Eventually (if it's not here already) ad hoc SQL performance will become indistinguishable (on average!) from stored procedure performance, so any sort of massive stored procedure use solely for "performance reasons" sure sounds like premature optimization to me.
Anyway, I think if you avoid the edge cases and have fairly vanilla SQL, you won't notice a difference between ad hoc and stored procedures.
Reasons for using stored procedures:
Reduce network traffic -- you have to send the SQL statement across the network. With sprocs, you can execute SQL in batches, which is also more efficient.
Caching query plan -- the first time the sproc is executed, SQL Server creates an execution plan, which is cached for reuse. This is particularly performant for small queries run frequently.
Ability to use output parameters -- if you send inline SQL that returns one row, you can only get back a recordset. With sprocs you can get the values back as output parameters, which is considerably faster.
Permissions -- when you send inline SQL, you have to grant permissions on the table(s) to the user, which grants much more access than merely granting permission to execute a sproc (see the sketch after this list).
Separation of logic -- remove the SQL-generating code and segregate it in the database.
Ability to edit without recompiling -- this can be controversial. You can edit the SQL in a sproc without having to recompile the application.
Find where a table is used -- with sprocs, if you want to find all SQL statements referencing a particular table, you can export the sproc code and search it. This is much easier than trying to find it in code.
Optimization -- It's easier for a DBA to optimize the SQL and tune the database when sprocs are used. It's easier to find missing indexes and such.
SQL injection attacks -- properly written inline SQL can defend against attacks, but sprocs are better for this protection.
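As a sketch of the output-parameter and permissions points above (all names hypothetical):

CREATE PROCEDURE dbo.usp_GetOrderCount
    @CustomerId INT,
    @OrderCount INT OUTPUT    -- scalar result without a recordset
AS
    SELECT @OrderCount = COUNT(*)
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
GO

-- The application account gets EXECUTE only, never table access:
GRANT EXECUTE ON dbo.usp_GetOrderCount TO app_user;

-- Calling it:
DECLARE @n INT;
EXEC dbo.usp_GetOrderCount @CustomerId = 42, @OrderCount = @n OUTPUT;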
In many cases, stored procedures are actually slower because they're more generalized. While stored procedures can be highly tuned, in my experience there's enough development and institutional friction that they're left in place once they work, so stored procedures often tend to return a lot of columns "just in case" - because you don't want to deploy a new stored procedure every time you change your application. An OR/M, on the other hand, only requests the columns the application is using, which cuts down on network traffic, unnecessary joins, etc.
It's a debate that rages on and on (for instance, here).
It's as easy to write bad stored procedures as it is to write bad data access logic in your app.
My preference is for Stored Procs, but that's because I'm typically working with very large and complex apps in an enterprise environment where there are dedicated DBAs who are responsible for keeping the database servers running sweetly.
In other situations, I'm happy enough for data access technologies such as LINQ to take care of the optimisation.
Pure performance isn't the only consideration, though. Aspects such as security and configuration management are typically at least as important.
Edit: While Frans Bouma's article is indeed verbose, it misses the point with regard to security by a mile. The fact that it's 5 years old doesn't help its relevance, either.
There is no noticeable speed difference for stored procedures vs parameterized or prepared queries on most modern databases, because the database will also cache execution plans for those queries.
Note that a parameterized query is not the same as ad hoc sql.
The main reason imo to still favor stored procedures today has more to do with security. If you use stored procedures exclusively, you can disable INSERT, SELECT, UPDATE, DELETE, ALTER, DROP, and CREATE etc permissions for your application's user, only leaving it with EXECUTE.
This provides a little extra protection against 2nd order sql injection. Parameterized queries only protect against 1st order injection.
Obviously, actual performance ought to be measured in individual cases, not assumed. But even in cases where performance is hampered by a stored procedure, there are good reasons to use them:
Application developers aren't always the best SQL coders. Stored procedures hide the SQL from the application.
Stored procedures automatically use bind variables. Application developers often avoid bind variables because they seem like unneeded code and show little benefit in small test systems. Later on, the failure to use bind variables can throttle RDBMS performance.
Stored procedures create a layer of indirection that might be useful later on. It's possible to change implementation details (including table structure) on the database side without touching application code.
The exercise of creating stored procedures can be useful for documenting all database interactions for a system. And it's easier to update the documentation when things change.
That said, I usually stick raw SQL in my applications so that I can control it myself. It depends on your development team and philosophy.
The one topic that no one has yet mentioned as a benefit of stored procedures is security. If you build the application exclusively with data access via stored procedures, you can lock down the database so the ONLY access is via those stored procedures. Therefore, even if someone gets a database ID and password, they will be limited in what they can see or do against that database.
In 2007 I was on a project where we used MS SQL Server via an ORM. We had 2 big, growing tables which took up to 7-8 seconds of load time on the SQL Server. After making 2 large stored SQL procedures and optimizing them from the query planner, we got each DB load time down to less than 20 milliseconds, so clearly there are still efficiency reasons to use stored SQL procedures.
Having said that, we found that the most important benefits of stored procedures were the added ease of maintenance, security, data integrity, and the decoupling of business logic from the middleware logic, with all the middleware logic benefiting from reuse of the 2 procedures.
Our ORM vendor made the usual claim that firing off many small SQL queries would be more efficient than fetching large, joined data sets. Our experience (to our surprise) showed something else.
This may of course vary between machines, networks, operating systems, SQL servers, application frameworks, ORM frameworks, and language implementations, so measure any benefit you THINK you may get from doing something else.
It wasn't until we benchmarked that we discovered the problem was the traffic between the ORM and the database taking all the load.
I prefer to use SPs when it makes sense to use them. In SQL Server, anyway, there is no performance advantage to SPs over a parameterized query.
However, at my current job my boss mentioned that we are forced to use SPs because our customers require them. They feel that SPs are more secure. I have not been here long enough to see if we are implementing role-based security, but I have a feeling we are.
So the customers' feelings trump all other arguments in this case.
Read Frans Bouma's excellent post (if a bit biased) on that.
To me, one advantage of stored procedures is being host-language agnostic: you can switch from a C, Python, PHP or whatever application to another programming language without rewriting your code. In addition, some features like bulk operations really improve performance and are not easily available (if at all) in host languages.
I don't know that they are faster. I like using ORM for data access (to not re-invent the wheel) but I realize that's not always a viable option.
Frans Bouma has a good article on this subject : http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx
All I can speak to is SQL server. In that platform, stored procedures are lovely because the server stores the execution plan, which in most cases speeds up performance a good bit. I say "in most cases", because if the SP has widely varying paths of execution you might get suboptimal performance. However, even in those cases, some enlightened refactoring of the SPs can speed things up.
Using stored procedures for CRUD operations is probably overkill, but it will depend on the tools being used and your own preferences (or requirements). I prefer inline SQL, but I make sure to use parameterized queries to prevent SQL injection attacks. I keep a printout of this xkcd comic as a reminder of what can go wrong if you are not careful.
Stored procedures can have real performance benefits when you are working with multiple sets of data to return a single set of data. It's usually more efficient to process sets of data in the stored procedure than sending them over the wire to be processed at the client end.
Realising this is a bit off-topic to the question, but if you are using a lot of stored procedures, make sure there is a consistent way to put them under some sort of source control (e.g., subversion or git) and be able to migrate updates from your development system to the test system to the production system.
When this is done by hand, with no way to easily audit what code is where, this quickly becomes a nightmare.
Stored procs are great for cases where the SQL code is run frequently, because the database stores it tokenized in memory. If you repeatedly ran the same code outside of a stored proc, you would likely incur a performance hit from the database reparsing the same code over and over.
I typically run frequently called code as a stored proc, or as a SqlCommand (.NET) object that I execute as many times as needed.
Yes, they are faster most of the time. SQL composition is a huge performance-tuning area too. If I am doing a back-office type app I may skip them, but for anything production-facing I use them for sure, for all the reasons others spoke of... namely security.
IMHO...
Restricting "C_UD" operations to stored procedures can keep the data integrity logic in one place. This can also be done by restricting"C_UD" operations to a single middle ware layer.
Read operations can be provided to the application so they can join only the tables / columns they need.
Stored procedures can also be used instead of parameterized queries (or ad-hoc queries) for some other advantages:
If you need to correct something (a sort order etc.) you don't need to recompile your app
You could deny access to all tables for that user account, grant access only to stored procedures, and route all access through stored procedures. This way you can have custom validation of all input, much more flexible than table constraints (see the sketch below).
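A minimal sketch of that validation idea (names hypothetical; THROW needs SQL Server 2012+, RAISERROR works on older versions):

CREATE PROCEDURE dbo.usp_AddOrder
    @CustomerId INT,
    @Quantity INT
AS
BEGIN
    -- Validation more flexible than a table constraint could express:
    IF @Quantity <= 0 OR @Quantity > 1000
        THROW 50001, 'Quantity must be between 1 and 1000.', 1;

    INSERT INTO dbo.Orders (CustomerId, Quantity)
    VALUES (@CustomerId, @Quantity);
END;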
Reduced network traffic -- SPs are generally worse than dynamic SQL here. Because people don't create a new SP for every select, if you need just one column you are told to use the SP that has the columns you need and ignore the rest. Get an extra column and any network savings you had just went away. You also tend to get a lot of client-side filtering when SPs are used.
Caching -- MS-SQL does not treat them any differently, not since MS-SQL 2000 (it may have been 7, but I don't remember).
Permissions -- Not a problem, since almost everything I do is web-based or has some middle application tier that does all the database access. The only software I work with that has direct client-to-database access is 3rd-party products that are designed for users to have direct access and are based around giving users permissions. And yes, the MS-SQL permission security model SUCKS!!! (I have not spent time on 2008 yet.) As a final note, I would like to see a survey of how many people are still doing direct client/server programming vs web and middle-application-server programming, and if they are doing large projects, why no ORM.
Separation -- people would question why you are putting business logic outside of the middle tier. Also, if you are looking to separate data-handling code, there are ways of doing that without putting it in the database.
Ability to edit -- What, you have no testing and version control to worry about? Also, this is only a problem with client/server; in the web world it's not a problem.
Find the table -- Only if you can identify the SPs that use it. I'll stick with the tools of the version control system, Agent Ransack, or Visual Studio to find it.
Optimization -- Your DBA should be using the tools of the database to find the queries that need optimization. The database can tell the DBA what statements are taking up the most time and resources, and they can fix things from there. For complex SQL statements the programmers should be told to talk to the DBA; for simple selects, don't worry about it.
SQL injection attacks -- SPs offer no better protection. The only thing that gets them the nod is that most SP examples teach using parameters, while most dynamic SQL examples ignore parameters.

Why is parameterized SQL generated by NHibernate just as fast as a stored procedure?

One of my co-workers claims that even though the execution path is cached, there is no way parameterized SQL generated from an ORM is as quick as a stored procedure. Any help with this stubborn developer?
I would start by reading this article:
http://decipherinfosys.wordpress.com/2007/03/27/using-stored-procedures-vs-dynamic-sql-generated-by-orm/
Here is a speed test between the two:
http://www.blackwasp.co.uk/SpeedTestSqlSproc.aspx
Round 1 - You can start a profiler trace and compare the execution times.
For most people, the best way to convince them is to "show them the proof." In this case, I would create a couple basic test cases to retrieve the same set of data, and then time how long it takes using stored procedures versus NHibernate. Once you have the results, hand it over to them and most skeptical people should yield to the evidence.
I would only add a couple things to Rob's answer:
First, make sure the amount of data involved in the test cases is similar to production values. In other words, if your queries normally run against tables with hundreds of thousands of rows, then create such a test environment.
Second, make everything else equal except for the use of an NHibernate-generated query versus a sproc call. Hopefully you can execute the test by simply swapping out a provider.
Finally, realize that there is usually a lot more at stake than just stored procedures vs. ORM. With that in mind the test should look at all of the factors: execution time, memory consumption, scalability, debugging ability, etc.
The problem here is that you've accepted the burden of proof. You're unlikely to change someone's mind like that. Like it or not, people -- even programmers -- are just too emotional to be easily swayed by logic. You need to put the burden of proof back on him -- get him to convince you otherwise -- and that will force him to do the research and discover the answer for himself.
A better argument to use stored procedures is security. If you use only stored procedures, with no dynamic sql, you can disable SELECT, INSERT, UPDATE, DELETE, ALTER, and CREATE permissions for the application database user. This will protect you against most 2nd order SQL Injection, whereas parameterized queries are only effective against first order injection.
Measure it, but in a non-micro-benchmark, i.e. something that represents real operations in your system. Even if there would be a tiny performance benefit for a stored procedure it will be insignificant against the other costs your code is incurring: actually retrieving data, converting it, displaying it, etc. Not to mention that using stored procedures amounts to spreading your logic out over your app and your database with no significant version control, unit tests or refactoring support in the latter.
Benchmark it yourself. Write a testbed class that executes a sampled stored procedure a few hundred times, and run the NHibernate code the same amount of times. Compare the average and median execution time of each method.
It is just as fast if the query is the same each time. SQL Server 2005 caches query plans at the level of each statement in a batch, regardless of where the SQL comes from.
The long-term difference might be that stored procedures are many, many times easier for a DBA to manage and tune, whereas hundreds of different queries that have to be gleaned from profiler traces are a nightmare.
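One way to verify the caching behaviour yourself is to peek at the plan cache (SQL Server 2005+):

-- Shows cached plans with their reuse counts; parameterized ad hoc
-- statements show up as 'Prepared', procedures as 'Proc'.
SELECT TOP (20)
    cp.objtype,
    cp.usecounts,
    st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;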
I've had this argument many times over.
Almost always I end up grabbing a really good DBA, running a proc and a piece of code with the profiler running, and getting the DBA to show that the results are so close it's negligible.
Measure it.
Really, any discussion on this topic is probably futile until you've measured it.
He may be correct for the specific use case he is thinking of. A stored procedure will probably execute faster for some complex set of SQL, that can be arbitrarily tuned. However, something you get from things like hibernate is caching. This may prove much faster for the lifetime of your actual application.
The additional layer of abstraction will cause it to be slower than a pure call to a sproc. Just by the fact that you have additional allocations on the managed heap, and additional pushes and pops off the callstack, the truth of the matter is that it is more efficient to call a sproc over having an ORM build the query, regardless how good the ORM is.
How slow, if it's even measurable, is debatable. This is also helped by the fact that most ORMs have a caching mechanism to avoid doing the query at all.
Even if the stored procedure is 10% faster (it probably isn't), you may want to ask yourself how much it really matters. What really matters in the end, is how easy it is to write and maintain code for your system. If you are coding a web app, and your pages all return in 0.25 seconds, then the extra time saved by using stored procedures is negligible. However, there can be many added advantages of using an ORM like NHibernate, which would be extremely hard to duplicate using only stored procedures.