I will not go into the details of why I am exploring the use of Micro ORMs at this stage, except to say that I feel powerless when I use a full-blown ORM. There are too many things going on in the background that happen automatically, and not all of them are the best possible choices. I was quite ready to go back to raw database access, but then I found out about the three new guys on the block: Dapper, PetaPoco and Massive. So I decided to give the low-level approach a go with a pet project. It is not relevant, but so far I am using PetaPoco.
In any case, I am having trouble deciding how to go about maintaining the SQL strings that I will use from the higher levels. There are three main solutions that I can think of:
1. Sprinkle the SQL queries wherever I need them. This is the least infrastructure-heavy method. However, it suffers in both the maintainability and testability areas.
2. Limit the query usage to some service classes. This helps maintainability and is still low on infrastructure I need to implement. It may also be possible to build these service classes such that they would be easy to mock for testing purposes.
3. Prepare some classes to make the system somewhat flexible. I have started on this path. I implemented a Repository interface and a database-dependent Repository class. I have also built some tiny interfaces to capture SQL queries that can be passed to my Repository's GetMany() method. All the queries are implemented as individual classes right now, and I will probably need a little more interface around this to add some level of database independence, and maybe some flexibility in decorating queries into paged and sorted queries (again, this would also make them a little more flexible in handling different databases).
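A rough sketch of the shape I mean (names are simplified and hypothetical; Fetch is the only real PetaPoco API used here):

using System.Collections.Generic;

public class User { public bool IsActive { get; set; } }

public interface ISqlQuery<T>
{
    string Sql { get; }     // the hand-written SQL stays explicit and visible
    object[] Args { get; }  // positional parameters (@0, @1, ...) for PetaPoco
}

public class ActiveUsersQuery : ISqlQuery<User>
{
    public string Sql => "SELECT * FROM Users WHERE IsActive = @0";
    public object[] Args => new object[] { true };
}

public class Repository
{
    private readonly PetaPoco.Database _db;
    public Repository(PetaPoco.Database db) { _db = db; }

    // GetMany can execute any query object without knowing what it selects
    public List<T> GetMany<T>(ISqlQuery<T> query) => _db.Fetch<T>(query.Sql, query.Args);
}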
What I am mainly worried about right now is that I have entered the slippery slope of writing all the functionality of a full-blown ORM, but badly. For example, it feels sensible right now to write or find a library that converts LINQ calls into SQL statements so that I can massage my queries easily, or to write extenders that can decorate any query I pass to them, etc. But that is a large task, and it is already done by the big guys, so I am resisting the urge to go there. I also want to retain control over what queries I send to the database, by explicitly writing them.
So what is the suggestion? Should I go with option #2, or try to stumble along with option #3? I am certain I cannot show anyone code written in the first option without blushing. Is there any other approach you can recommend?
EDIT: After I asked the question, I realized there is another option, somewhat orthogonal to these three: stored procedures. There seem to be a few advantages to putting all your queries inside the database as stored procedures. They are kept in a central location and not spread through the code (though maintenance is an issue: the parameters may get out of sync). The reliance on database dialect is solved automatically: if you move databases, you port all your stored procedures and you are done. And there are also the security benefits.
With the stored procedure option, alternatives 1 and 2 seem a little more suitable. There do not seem to be enough entities to warrant option 3, but it is still possible to separate the procedure-call commands from the database-accessing code.
I've implemented option 3 without stored procedures, and option 2 with stored procedures, and it seems like the latter is more suitable for me (in case anyone is interested in the outcome of the question).
"I would say put the sql where you would have put the equivalent LINQ query, or the sql for DataContext.ExecuteQuery. As for where that is... well, that is up to you and depends on how much separation you want." - Marc Gravell, creator of Dapper
See Marc's opinion on the matter
I think the key point is that you shouldn't really be re-using the SQL. If your logic is re-used, then it should be wrapped in a method that can then be called from multiple places.
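For example, a minimal sketch with Dapper (the type, table and method names are hypothetical):

using System.Collections.Generic;
using System.Data;
using Dapper;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class PersonQueries
{
    // The SQL lives in exactly one place; callers share the method, not the string.
    public static IEnumerable<Person> GetByEmailDomain(IDbConnection conn, string domain)
    {
        const string sql = "SELECT Id, Name, Email FROM Person WHERE Email LIKE @pattern";
        return conn.Query<Person>(sql, new { pattern = "%@" + domain });
    }
}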
I know you've accepted your answer already, but I still wanted to show you a nice alternative that may be helpful in your case as well, now or in the future.
When using stored procedures, it's wise to use T4.
I tend to use stored procedures on my project even though it's not using PetaPoco, Dapper or Massive (the project started before these were here). It uses BLToolkit instead. Anyway: instead of writing methods to run stored procedures and hand-writing the code that provides stored procedure parameters, I've written a T4 template that generates that code for me.
Whenever stored procedures change (some may be added/removed, parameters added/removed/renamed/retyped), my code will break on compilation because method calls will not match their signature any more.
I keep my stored procedures in a file (so they get version controlled). If you work in a multi-developer team, it may be sensible to have each stored procedure in its own file; it makes updates much less painful. I've experienced that on a project and it worked fine as long as the number of SPs is not huge. You can structure them into folders based on the entity they're related to.
Anyway: maintenance is limited to the stored procedures themselves, and regenerating the code is just a click of a button in Visual Studio that transforms all T4 templates at once. You don't have to search for the methods that use those procedures; you'll be shown the errors when compiling. One thing less to worry about.
So instead of writing
using (var db = new DbManager())
{
return db
.SetSpCommand(
"Person_SaveWithRelations",
db.Parameter("#Name", name),
db.Parameter("#Email", email),
db.Parameter("#Birth", birth),
db.Parameter("#ExternalID", exId))
.ExecuteObject<Person>();
}
and having a bunch of magic strings I can just simply write:
using (var db = new DataManager())
{
return db
.Person
.SaveWithRelations(name, email, birth, exId)
.ExecuteObject<Person>();
}
This is nicer and cleaner, it breaks on compile, and it provides IntelliSense, so it's also faster to write while developing.
The good thing is that stored procedures may become very complex and may do many things. In my example above, the procedure checks some data, inserts the person record and some related ones as well, and in the end returns the newly inserted Person record. Inserts and updates should usually return the data that was added or changed, to reflect the actual state.
Related
I have a medium-sized app written in Ruby which makes pretty heavy use of an RDBMS. As our code grows, I found ugly SQL statements spreading to all modules and methods in my app and embedded in much of the application logic. I am not sure if this is bad; however, my gut tells me this is quite ugly...
So, generally, in any language, how do you manage your SQL statements? Or do you think it is harmful for maintainability to let many SQL statements be embedded in the application logic? Why or why not?
Thanks.
SQL is a language for accessing databases. Often, it gets confused as being the API into the data store for a larger application. In fact, you should design a real API between the data store and the app.
This means several things.
For accessing data stored in tables, you want to go through views in the database, rather than directly access the tables.
For data modification steps, you want to wrap insert/update/delete in stored procedures. This has secondary benefits, where you can handle constraints and triggers in the stored procedure and better log what is happening.
For security, you want to include database security as part of your security architecture. Giving all users full access may not be the best approach.
Unfortunately, it is easy to write a simple app that uses a database directly, whether in java or ruby or VBA or whatever. This grows into a bigger app, and then the maintenance problems arise.
I would suggest an incremental approach to fixing this. Go through the code and create views where you have nasty select statements. You'll probably find you need many fewer views than selects (the views can be re-used -- a good thing).
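As a hedged illustration of that step (table, view and column names are made up), the view is created once and the formerly nasty multi-join select collapses to a one-liner everywhere else:

using System.Collections.Generic;
using System.Data;
using Dapper;

public static class CustomerOrderQueries
{
    // Run once against the database (e.g. from a migration script):
    public const string CreateViewDdl = @"
        CREATE VIEW CustomerOrderSummary AS
        SELECT c.Id AS CustomerId, c.Name, o.OrderDate, o.Total
        FROM Customers c
        JOIN Orders o ON o.CustomerId = c.Id";

    // Application code then reads from the view instead of the base tables:
    public static IEnumerable<dynamic> BigOrders(IDbConnection conn) =>
        conn.Query("SELECT * FROM CustomerOrderSummary WHERE Total > @minTotal",
                   new { minTotal = 1000 });
}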
Find places where code is being modified, and change these to stored procedures. I always return a status from the stored procedure for error checking, and put log information into a table called something like splog or _spcalls.
If you want to limit permissions for different users of your app, then you might be interested in this.
Leaving the raw SQL statements in the code is a problem. Just wait until you want to rename a column and you have to find all the places where this breaks the code.
Yes, this is not optimal: maintenance becomes a nightmare, and it's hard to forecast and determine which code must change when underlying DB changes occur. This is why it is good practice to create a data access layer (DAL) to encapsulate CRUD operations away from the application logic. There is often a business logic layer (BLL) between the application logic and the DAL to enforce business rules/logic.
Google "data access layer" "business logic layer" and even "n-tier architecture" to learn more.
If you are concerned about the SQL statements littered around your application logic, maybe consider implementing them as Stored Procedures?
That way you will only be including the procedure name and any parameters that need to be passed to it in your code.
It has other benefits too, a common one being easier to re-use in multiple files.
There is much debate about the speed and security of stored procedures, and you will never get a definitive answer about that, so I won't even open that can of worms.
Here is how you do this with Java: create a class that encapsulates all access to the database, and add a method to the class for each query you need to run.
The answer for ruby will be similar to this.
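A minimal sketch of that pattern, shown in C# to match the other examples in this thread (the Order type and the queries are made up):

using System.Collections.Generic;
using System.Data;
using Dapper;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrderDataAccess
{
    private readonly IDbConnection _conn;
    public OrderDataAccess(IDbConnection conn) { _conn = conn; }

    // One method per query the application needs; no SQL leaks out of this class.
    public IEnumerable<Order> FindOpenOrders() =>
        _conn.Query<Order>("SELECT Id, Status FROM Orders WHERE Status = 'OPEN'");

    public Order FindById(int id) =>
        _conn.QuerySingleOrDefault<Order>("SELECT Id, Status FROM Orders WHERE Id = @id", new { id });
}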
It depends on the architecture of your application, but a simple solution is to keep each SQL statement in a file, e.g. qry.sql. For each Ruby module (or whatever is used in Ruby to aggregate related code) you can keep a SQL folder with these files. So the collection of SQL folders/files forms the data access layer of your application, and the Ruby code provides the business layer. If your data model changes (field names, etc.), you can grep to identify the SQL files that need changes. Anyway, definitely separate the SQL from your logic code.
Possible Duplicate:
What are the pros and cons to keeping SQL in Stored Procs versus Code
Just curious about the advantages and disadvantages of using a stored procedure vs. other forms of getting data from a database. What is the preferred method to ensure speed, accuracy, and security (we don't want SQL injection!)?
(should I post this question to another stack exchange site?)
As with the answer to all database questions: it depends. However, stored procedures definitely help in terms of speed because of plan caching (although properly parameterized SQL will benefit from that too). Accuracy is no different: an incorrect query is incorrect whether it's in a stored procedure or not. And in terms of security, they can offer a useful way of limiting access for users, seeing as you don't need to give them direct access to the underlying tables; you can just allow them to execute the stored procedures that you want. There are, however, many, many questions on this topic and I'd advise you to search a bit and find out some more.
There are several questions on Stackoverflow about this problem. I really don't think you'll get a "right" answer here, both can work out very well, and both can work horribly. I think if you are using Java then the general pattern is to use an ORM framework like Hibernate/JPA. This can be completely safe from SQL injection attacks as long as you use the framework correctly. My experience with .Net developers is that they are more likely to use stored procedure backed persistence, but that seems to be more open than it was before. Both NHibernate and other MS technologies seem to be gaining popularity.
My personal view is that, in general, an ORM will save you time over lots of verbose coding, since it can automatically generate much of the SQL you use in a typical CRUD-type system. To gain this you will likely give up a little performance and some flexibility. If your system is low to medium volume (tens of thousands of requests per day) then an ORM will be just fine for you. If you start getting into the millions of requests per day then you may need something a little more bare-metal like straight SQL or stored procedures. Note that an ORM doesn't prevent you from going more directly to the DB; it's just not normally what you would use.
One final note: I think ORM persistence makes an application much more testable. If you use stored procedures for much of your persistence then you are almost bound to start getting a bunch of business logic in them. To test them you have to actually persist data and interact with the DB, which makes testing slow and brittle. Using an ORM framework you can either avoid most of this testing or use an in-memory DB when you really want to test persistence.
See:
Stored Procedures and ORM's
Manual DAL & BLL vs. ORM
This may be better on the Programmers SE, but I'll answer here.
CRUD stored procedures used to be, and sometimes still are, the best practice for data persistence and retrieval on a SQL DBMS. Every such DBMS has stored procedures, so you're practically guaranteed to be able to use this solution regardless of the coding language and DBMS, and code which uses the solution can be pointed to any DB that has the proper stored procs and it'll work with minimal code changes (there are some syntax changes required when calling SPs in different DBMSes; often these are integrated into a language's library support for accessing SPs on a particular DBMS). Perhaps the biggest advantage is centralized access to the table data; you can lock the tables themselves down like Fort Knox, and dispense access rights for the SPs as necessary to more limited user accounts.
However, they have some drawbacks. First off, SPs are difficult to TDD, because the tools don't really exist within database IDEs; you have to create tests in other code that exercise the SPs (and so the test must set up the DB with the test data that is expected). From a technical standpoint, such a test is not and cannot be a "unit test", which is a small, narrow test of a small, narrow area of functionality, which has no side effects (such as reading/writing to the file system). Also, SPs are one more layer that has to be changed when making a needed change to functionality. Adding a new field to a query result requires changing the table, the retrieval source code, and the SP. Adding a new way to search for records of a particular type requires the statement to be created and tested, then encapsulated in a SP, and the corresponding method created on the DAO.
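To make that concrete, here is roughly what a stored procedure "test" tends to look like (the connection string, table and procedure names are hypothetical); note how much real database state it must arrange before a single assertion runs:

using System.Data;
using System.Data.SqlClient;
using Dapper;

public class PersonSaveProcTests
{
    public void SaveWithRelations_InsertsPerson()
    {
        using var conn = new SqlConnection("Server=test-db;Database=AppTest;Integrated Security=true");
        conn.Open();
        conn.Execute("DELETE FROM Person");  // arrange a known state: slow, and a real side effect

        conn.Execute("Person_SaveWithRelations",
            new { Name = "Ada", Email = "ada@example.com" },
            commandType: CommandType.StoredProcedure);

        var count = conn.ExecuteScalar<int>("SELECT COUNT(*) FROM Person");
        // assert count == 1 with your test framework of choice
    }
}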
The new best practice where available, IMO, is a library called an object-relational mapper or ORM. An ORM abstracts the actual data layer, so what you're asking for becomes the code objects themselves, and you query for them based on properties of those objects, not based on table data. These queries are almost always code-configurable, and are translated into the DBMS's flavor of SQL based on one or more "mappings" that you define between the object model and the data model (objects of type A are persisted as records in table B, where this property C is written to field D).
The advantages are more flexibility within the code actually looking for data in the form of these code objects. The criteria of a query is usually able to be customized in-code; if a new query is needed that has a different WHERE clause, you just write the query, and the ORM will translate it into the new SQL statement. Because the ORM is the only place where SQL is actually used (and most ORMs use system stored procs to execute parameterized query strings where available) injection attacks are virtually impossible. Lastly, depending on the language and the ORM, queries can be compiler-checked; in .NET, a library called Linq is available that provides a SQL-ish keyword syntax, that is then converted into method calls that are given to a "query provider" that can translate those method calls into the data store's native query language. This also allows queries to be tested in-code; you can verify that the query used will produce the desired results given an in-memory collection of objects that stands in for the actual DBMS.
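A minimal sketch of that kind of in-code check, using made-up data and LINQ-to-Objects as the stand-in provider:

using System.Collections.Generic;
using System.Linq;

public record Person(string Name, int Age);

public static class QueryCheck
{
    public static void Main()
    {
        var people = new List<Person> { new("Ada", 36), new("Tim", 15) };

        // Against a real query provider this expression would be translated to SQL;
        // here it runs against plain objects, so no database is needed to verify it.
        var adults = people.Where(p => p.Age >= 18).Select(p => p.Name).ToList();
        // adults contains exactly ["Ada"]
    }
}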
The disadvantages of an ORM are that the library is usually language-specific; Hibernate is available in Java, NHibernate (and L2E and L2SQL) in .NET, and a few similar libraries like Pork in PHP, but if you're coding in an older or more esoteric language there's simply nothing of the sort available. Another one is that security becomes a little trickier; most ORMs require direct access to the tables in order to query and update them. A few will tolerate being pointed to a view for retrieval and SPs for updating (allowing segregation of view/SP and table security and the ability to restrict the retrievable fields), but now you're mixing the worst of both worlds: you still have to define mappings, but now you also have code in the data layer. The easiest way to overcome this is to implement your security elsewhere; force applications to get data through a web service, which provides the data using the ORM and has specific, limited "front doors". Also, many ORMs have some performance problems when used in certain ways; most are designed to "lazy-load" data, where data is retrieved the moment it's actually needed and not before, which improves up-front performance when you don't need every record you asked for. However, when you DO need every record you asked for, this creates extra round trips. You have to structure queries in specific ways to get around this expected use-case behavior.
Which is better? You have to decide. I can tell you now that using an ORM is MUCH easier to set up and get working correctly than SPs, and it's much easier to make (and limit the scope of) changes to the schema and to queries. In the modern development house, where the priority is to make it work first, and then make it perform well and/or be secure against intrusion, that's a HUGE plus. In most cases where you think security is an issue, it really isn't, and when security really is an issue, putting the solution in the DB layer is usually the wrong place, because the DBMS is the very last line of defense against intrusion; if the DBMS itself has to be counted on to stop something unwanted from happening, you have failed to do so (or even encouraged it to happen) in many layers of software and firmware above it.
I have read many strong views (both for and against) on SPs or dynamic SQL.
I am writing a query engine in C++ (MySQL backend for now, though I may decide to go with a C++ ORM). I can't decide whether to write an SP, or to dynamically create the SQL and send the query to the DB engine.
Any tips on how to decide?
Here's the simple answer:
If your programmers do both database and coding work, keep the SQL with the app. It's easier to maintain that way. Otherwise, let the DB guys handle it in SPs.
You have more control over the mechanisms outside the database. The biggest win for handling this outside the database is simply maintenance (in my mind). It's harder to version-control SPs than the code you keep outside the database; they're one more thing to keep track of.
While we're on the topic, it's similar to handling data/schema migrations. Versioning and handling schema migrations is annoyingly complex; if you don't already have a mechanism for this, you will have yet another thing to manage. It comes down to simply being easier to manage/version these things outside the database.
Consider the scenario where you have a bug in your SP. Now it needs to be changed, but then you hop over to another developer's database/sandbox. What version is the sandbox, and what version is the SP? Now you have to track multiple versions.
One of the main differentiators is whether you are writing the "one true front end" or whether the database is the central piece of your application.
If you are going to have multiple front ends, stored procedures make a lot of sense because you reduce your maintenance overhead. If you are writing only one interface, stored procedures are a pain, because you lose a lot of flexibility in changing your data set as your front end's needs change, plus you now have to do code maintenance, version control, etc. in two places. Databases are a real pain to keep in sync with code repositories.
Finally, if you are coding for multiple databases (Oracle and SQL compatible code, for example), I'd avoid stored procedures completely.
You may in certain rare circumstances, after profiling, determine that some limited stored procedures are useful to you. This situation comes up way less than people think it does.
The main scenarios when you MUST have the SP are:
1) When you have a very complex set of queries with heavy compile overhead and data drift low enough that recompiling is not needed on a regular basis.
2) When the "Only True" logic for accessing the specific data set is VERY complicated, needs to be accessed from several different codebases on different platforms (so writing multiple APIs in code is much more expensive).
Any other scenario, it's debatable, and can be decided one way or another.
I must also say that the other posters' arguments about versioning are not really such a big deal in my experience: having your SPs in version control is as easy as creating a "sql/db_name" directory structure and having a basic "database release" script which deploys the SP code from the version-controlled location to the database. Every company I have worked for had some kind of setup like this, either a central one run by DBAs or a departmental one run by developers.
The one thing you want to avoid is to have your business logic spread across multiple tiers of your application. Database DDL and DML are difficult enough to keep in sync with an application code base as it is.
My recommendation is to create a good relational schema, and put all your constraints and triggers in place so that the data retains integrity even if somebody goes to the database and tries to do something through command-line SQL.
Put all your business logic in an application or service that calls (static/dynamic) SQL and wraps the business functionality you are trying to expose.
Stored procedures have two purposes that I can think of.
The first is as an aid to simplifying data access: the stored procedure does not have any business logic in it; it just knows about the structure of the data, and exposes an interface that isolates accessing three tables and a view just to get a single piece of information.
The second is mapping the Domain Model to the Data Model: stored procedures can assist in making the Data Model look like a given Domain Model.
After the program has been completed and has been profiled, there are often performance issues with the pre-1.0 release. Stored procedures do offer batching of SQL without traffic needing to go back and forth between the DBMS and the application. That being said, in rare and extreme cases, a few business rules might need to be migrated to the stored-procedure side for performance. Make sure to document any exceptions to the architectural philosophy in multiple prominent places.
Stored Procedures are ideal for:
Creating reusable abstractions over complex queries;
Enforcing specific types of insertions/updates to tables (if you also deny permissions to the table);
Performing privileged operations that the logged-in user wouldn't normally be allowed to do;
Guaranteeing a consistent execution plan;
Extending the capabilities of an ORM (batch updates, hierarchy queries, etc.)
Dynamic SQL is ideal for:
Variable search arguments or output columns:
Optional search conditions
Pivot tables
IN clauses with user-specified values
ORM implementations (most can use SPs, but can't be built entirely on them);
DDL and administrative scripts.
They solve different problems, really. Use whichever one is more appropriate to the task at hand, and don't restrict yourself to just one or the other. After you work on database code for a while you'll start to get a more intuitive feel for these things; you'll find yourself banging together some rat's nest of strings for a query and think, "this should really go in a stored procedure."
Final note: Because this question implies a certain level of inexperience with SQL, I feel obliged to say, don't forget that you still need to parameterize your queries when you write dynamic SQL. Parameters aren't just for stored procedures.
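For example, a hedged sketch of building dynamic SQL with optional search conditions while keeping every value parameterized (table and column names are illustrative):

using System.Collections.Generic;
using System.Text;

public static class SearchSqlBuilder
{
    public static (string Sql, Dictionary<string, object> Args) Build(string name, int? minAge)
    {
        var sql = new StringBuilder("SELECT * FROM Person WHERE 1 = 1");
        var args = new Dictionary<string, object>();

        if (!string.IsNullOrEmpty(name))
        {
            sql.Append(" AND Name LIKE @name");  // the shape of the query changes...
            args["name"] = name + "%";           // ...but values always travel as parameters
        }
        if (minAge.HasValue)
        {
            sql.Append(" AND Age >= @minAge");
            args["minAge"] = minAge.Value;
        }
        return (sql.ToString(), args);
    }
}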
Dynamic SQL is more flexible; the SP approach makes your system more manageable.
Should queries live inside the classes that need the data? Should queries live in stored procedures in the database so that they are reusable?
In the first situation, changing your queries won't affect other people (either other code or people generating reports, etc). In the second, the queries are reusable by many and only exist in one place, but if someone breaks them, they're broken for everyone.
I used to be big on the stored proc solution but have changed my tune in the last year, as databases have gotten faster and ORMs have entered the mainstream. There is certainly a place for stored procs. But when it comes to simple CRUD SQL (one-liner insert/update/select/delete), using an ORM to handle this is the most elegant solution in my opinion, though I'm sure many will argue the other way. An ORM will take the SQL and DB-connection plumbing out of your code and make the persistence logic integrate much more fluidly with your app.
I suggest placing them as stored procedures in the database. Not only will you have the re-use advantage, but you'll also make sure a given query (be it a select, update, insert, etc.) is the same everywhere, because every caller uses the same stored procedure. If you need to change it, you'll only change it in one place. Also, you'll be taking advantage of the database server's processing power instead of the server/computer where your application resides. That is my suggestion.
Good luck!
It depends / it's situational. There are very strong arguments for and against either option. Personally, I really don't like seeing business logic get split between the database (in stored procedures) and in code, so I usually want all the queries in code to the maximum extent possible.
In the Microsoft world, there are things like Reporting Services that can retrieve data from classes/objects instead of (and in addition to) a database / stored procedures. Also, there are things like Linq that give you more strongly typed queries in code. There are also strong, mature ORMs like NHibernate that allow for writing pretty much all types of queries from code.
That said, there are still times when you are doing "rowset" types of things with your queries that work much better in a stored procedure than they work from code.
In the majority of situations, either/both options work just fine.
From my perspective, I think that stored procs are the way to go. First, they are easier to maintain, as a quick change to one means just running the script, not recompiling the application. Second, they are far better for security. You can set permissions at the SP level rather than directly on the tables and views. This helps prevent fraud because the users cannot do anything directly to the database that isn't specified in a stored proc. They are also easier to performance-tune. When you use stored procs, you can use the database dependency metadata to help determine the effect of database changes on the code base. In many systems, not all data access or even CRUD operations will take place through the application, so having the code there seems to me to be counterproductive. If all the data access is in one place (an idea I support), it should be in the database, where it is accessible to all applications or processes that might need to use it.
I've found that application programmers often don't consider the best way for a database to process information, as they are focused on the application, not the back end. By putting the code for the database in the database where it belongs, it is more likely to be seen and reviewed by database specialists who do consider the database and its performance. We support hundreds of databases and applications here. I can look in any database and find the code that I need when something is slow. I don't have to load up the application code for each of hundreds of different applications just to see the part I need to do my job.
Is there such a thing as too many stored procedures?
I know there is no limit to the number you can have, but is there any performance or architectural reason not to create hundreds, or even thousands?
To me, the biggest limitation of having hundreds or thousands of stored procedures is maintainability. Even though that is not a direct performance hit, it should be a consideration. From an architectural standpoint, you have to plan not just for the initial development of the application, but for future changes and maintenance.
That being said, you should design/create as many as your application requires. Although with Hibernate, NHibernate or .NET LINQ, I would try to keep as much of the logic in the code as possible, and only put it in the database when speed is a factor.
Yes you can.
If you have more than zero, you have too many
Do you mean besides the fact that you have to maintain all of them when you make a change to the database? That's the big reason for me. It's better to have fewer ones that can handle many scenarios than hundreds that can each handle only one.
I would just mention that, as with all such things, maintenance becomes an issue. Who knows what they all are and what purpose they serve? What happens years down the road, when they are just artifacts of a legacy system that no one can recall the purpose of but is scared to mess with?
I think the main question is not whether it is possible, but whether it should be done. That's just one thought, anyway.
I have just under 200 in a commercial SQL Server 2005 product of mine, which is likely to increase by about another 10-20 in the next few days for some new reports.
Where possible I write 'subroutine' sprocs, so that anytime I find myself putting 3 or 4 identical statements together in more than a couple of sprocs, it's time to turn those few statements into a subroutine, if you see what I mean.
I don't tend to use the sprocs to perform all the business logic as such; I just prefer to have sprocs doing anything that could be seen as 'transactional'. So whereas my client code (in Delphi, but whatever) might do the odd insert or update itself, as soon as something requires a couple of things to be updated or inserted 'at once', it's time for a sproc.
I have a simple, crude naming convention to assist in the readability (and in maintenance!)
The product's code name is 'Rachel', so we have
RP_whatever  - general sprocs that update/insert data, or return small result sets
RXP_whatever - subroutine sprocs that serve as 'functions' or workers to the RP_ type procs
REP_whatever - sprocs that simply act almost as glorified views; they don't alter data, they return potentially complex result sets for reporting purposes, etc.
XX_whatever  - internal test/development/maintenance sprocs that the client application (or the local dba) would not normally use
The naming is arbitrary; it's just to help distinguish from the sp_ prefix that SQL Server uses.
I guess if I found I had 400-500 sprocs in the database I might become concerned, but a few hundred isn't a problem at all as long as you have a system for identifying what kind of activity each sproc is responsible for. I'd rather chase down schema changes in a few hundred sprocs (where the SQL Server tools can help you find dependencies etc) than try to chase down schema changes in my high-level programming language.
I suppose the deeper question here is- where else should all of that code go? Views can be used to provide convenient access to data through standard SQL. More powerful application languages and libraries can be used to generate SQL. Stored procedures tend to obscure the nature of the data and introduce an unnecessary layer of abstraction, complexity, and maintenance to a system.
Given the power of object relational mapping tools to generate basic SQL, any system that is overly dependent on stored procedures is going to be less efficient to develop. With ORM, there is simply a lot less code to write.
Hell yes, you can have too many. I worked on a system a couple of years ago that had something like 10,000 procs. It was insane. The people who wrote the system didn't really know how to program, but they did know how to write badly structured procs, so they put almost all of the application logic in the procs. Some of the procs ran for thousands of lines. Managing the mess was a nightmare.
Too many (and I can't draw you a specific line in the sand) is probably an indicator of poor design. Besides, as others have pointed out, there are better ways to achieve a granular database interface without resorting to massive numbers of procs.
My question is why aren't you using some sort of middleware platform if you're talking about hundreds or thousands of stored procedures?
Call me old-fashioned, but I thought the database should just hold the data, and the program(s) should be the ones making the business logic decisions.
In an Oracle database you can use packages to group related procedures together. A few of those and your namespace would free up.
Too much of any particular thing probably means you're doing it wrong. Too many config files, too many buttons on the screen...
Can't speak for other RDBMSs but an Oracle application that uses PL/SQL should be logically bundling procedures and functions into packages to prevent cascading invalidations and manage code better.
To me, using lots of stored procedures seems to lead you toward something equivalent to the PHP API: thousands of global functions which may have nothing to do with each other, all grouped together. The only way to relate them is to have some naming convention where you prefix each function with a module name, similar to the mysql_ functions in PHP. I think this is very hard to maintain, and very hard to keep consistent. Stored procedures work well for things that really need to take place on the server. A stored procedure for a simple select query, or even a select query with a join, is probably going too far. Only use stored procedures where you actually require advanced logic to be handled on the database server.
As others have said, it really comes down to the management aspect. After a while, it turns into finding the proverbial needle in the haystack. That's even more so when there are poor naming standards in place...
No, not that I know of. However, you should have a good naming convention for them. For example, starting them with usp_ is something that a lot of people like to do (it's faster than starting with sp_).
Maybe your reporting sprocs should be usp_reporting_, and your business objects should be usp_business_, etc. As long as you can manage them, there shouldn't be a problem with having a lot of them.
If they get too big you might want to split the database up into multiple databases. This might make more logical sense and can help with database size and such.
Yep, you can have toooooo many stored procedures.
Naming conventions can really help with this. For instance, if you have a naming convention where you always have the table name and InsUpd or Get, it's pretty easy to find the right stored procedure when you need it. If you don't have some kind of standard naming convention it would be really easy to come up with two (or more) procedures that do almost exactly the same thing.
It would be easy to find a couple of the following without having a standardized naming convention... usp_GetCustomersOrder, CustomerOrderGet_sp, GetOrdersByCustomer_sp, spOrderDetailFetch, GetCustOrderInfo, etc.
Another thing that starts happening when you have a lot of stored procedures is that you'll have some that are never used and some that are just rarely used... If you don't have some way of tracking stored procedure usage, you'll either end up with a lot of unused procedures... or worse, get rid of one you think is never used and find out afterwards that it's only used once a year or once a quarter. :(
Not while you have sane naming conventions, up-to-date documentation and enough unit tests to keep them organized.
My first big project was developed with an object-relational mapper that abstracted the database, so we did everything object-oriented and it was easier to maintain and fix bugs. It was especially easy to make changes, since all the data access and business logic was C# code. However, when it came to doing complex stuff the system felt slow, so we had to work around those things or re-architect the project, especially when the ORM had to do complex joins in the database.
I'm now working at a different company whose motto is "our applications are just front ends of the database", and it has its advantages and disadvantages. We have really fast applications, since all of the data operations are done in stored procedures (this is mainly using SQL Server 2005), but when it comes to making a change or a fix to the software, it's harder, because you have to change both the C# code and the SQL stored procedures, so it's like working twice. Contrary to when I used an ORM, you don't have refactoring or strongly typed objects in SQL; Management Studio and other tools help a lot, but normally you spend more time going this way.
So I would say that, depending on the needs of the project, if you are not going to keep really complex business data, then maybe you could avoid using stored procedures at all and use an ORM that makes developers' lives easier. If you are concerned about performance and need to squeeze out all of the resources, then you should use stored procedures. Of course, the quantity depends upon the architecture that you design. For example, if you always have stored procedures for CRUD operations, you need one for insert, one for update, one for delete, one for selecting, and one for listing, since these are the most common operations. If you have 100 business objects, multiply those five by that and you get 500 stored procedures just to manage the most basic operations on your objects. So yes, they could be too many.
If you have a lot of stored procedures, you'll find you are tying yourself to one database - some of them may not easily be transferred.
Tying yourself to one database isn't good design.
Moreover, if you have business logic on the database and in a business layer then maintenance becomes a problem.
I think as long as you name them uspXXX and not spXXX, the lookup by name will be direct, so there's no downside; though you might wonder about generalizing the procs if you have thousands...
You have to put all that code somewhere.
If you are going to have stored procedures at all (I personally do favour them, but many do not), then you might as well use them as much as you need to. At least all the database logic is in one place. The downside is that there isn't typically much structure to a bucket full of stored procs.
Your call! :)