I have read many strong views, both for and against, on stored procedures (SPs) versus dynamic SQL (DS).
I am writing a query engine in C++ (MySQL backend for now, though I may decide to go with a C++ ORM). I can't decide whether to write stored procedures or to dynamically create the SQL and send the query to the DB engine.
Any tips on how to decide?
Here's the simple answer:
If your programmers do both database and coding work, keep the SQL with the app. It's easier to maintain that way. Otherwise, let the DB guys handle it in SPs.
You have more control over the mechanisms outside the database. The biggest win for handling this outside the database is simply maintenance (in my mind). It's harder to version-control an SP than the code you generate outside the database; it's one more thing to keep track of.
While we're on the topic, it's similar to handling data/schema migrations. Versioning and handling schema migrations is annoyingly complex, and if you don't already have a mechanism for it, you will have yet another thing to manage. It comes down to it simply being easier to manage and version these things outside the database.
Consider the scenario where you have a bug in your SP. It needs to be changed, but then you hop over to another developer's database/sandbox. What version are the sandbox and the SP at? Now you have to track multiple versions.
One of the main differentiators is whether you are writing the "one true front end" or whether the database is the central piece of your application.
If you are going to have multiple front ends, stored procedures make a lot of sense because you reduce your maintenance overhead. If you are writing only one interface, stored procedures are a pain: you lose a lot of flexibility in changing your data set as your front end's needs change, and you now have to do code maintenance, version control, etc. in two places. Databases are a real pain to keep in sync with code repositories.
Finally, if you are coding for multiple databases (Oracle- and SQL Server-compatible code, for example), I'd avoid stored procedures completely.
You may in certain rare circumstances, after profiling, determine that some limited stored procedures are useful to you. This situation comes up way less than people think it does.
The main scenarios when you MUST have SPs are:
1) When you have a very complex set of queries with heavy compile overhead, and data drift low enough that recompiling is not needed on a regular basis.
2) When the "only true" logic for accessing the specific data set is VERY complicated and needs to be accessed from several different codebases on different platforms (so that writing multiple APIs in code would be much more expensive).
In any other scenario it's debatable, and can be decided one way or another.
I must also say that the other posters' arguments about versioning are not really such a big deal in my experience: having your SPs in version control is as easy as creating a "sql/db_name" directory structure and having a basic "database release" script which releases the SP code from the version-controlled location to the database. Every company I have worked for had some kind of setup like this, either a central one run by DBAs or a departmental one run by developers.
The one thing you want to avoid is to have your business logic spread across multiple tiers of your application. Database DDL and DML are difficult enough to keep in sync with an application code base as it is.
My recommendation is to create a good relational schema, and put all your constraints and triggers in the database so that the data retains integrity even if somebody goes to the database and tries to do something through command-line SQL.
Put all your business logic in an application or service that calls (static/dynamic) SQL and wraps the business functionality you are trying to expose.
Stored procedures have two purposes that I can think of.
An aid to simplifying data access. The stored procedure does not have any business logic in it; it just knows about the structure of the data and exposes an interface to isolate accessing three tables and a view just to get a single piece of information (a sketch of this follows below).
Mapping the Domain Model to the Data Model. Stored procedures can assist in making the Data Model look like a given Domain Model.
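For instance, a minimal sketch of the first purpose, with hypothetical table and view names (T-SQL syntax; adjust for MySQL):

    -- Hypothetical sketch: callers get one value without knowing about the joins.
    CREATE PROCEDURE dbo.GetCustomerBalance
        @CustomerId INT
    AS
    BEGIN
        SELECT a.Balance
        FROM dbo.Customers c
        JOIN dbo.Accounts a        ON a.CustomerId = c.CustomerId
        JOIN dbo.AccountStatus s   ON s.AccountId  = a.AccountId
        JOIN dbo.vActiveAccounts v ON v.AccountId  = a.AccountId  -- a view
        WHERE c.CustomerId = @CustomerId;
    END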
After the program has been completed and profiled, there are often performance issues with the pre-1.0 release. Stored procedures do offer batching of SQL without traffic needing to go back and forth between the DBMS and the application. That being said, in rare and extreme cases, a few business rules might need to be migrated to the stored-procedure side for performance. Make sure to document any exceptions to the architectural philosophy in multiple prominent places.
Stored Procedures are ideal for:
Creating reusable abstractions over complex queries;
Enforcing specific types of insertions/updates to tables (if you also deny permissions to the table);
Performing privileged operations that the logged-in user wouldn't normally be allowed to do;
Guaranteeing a consistent execution plan;
Extending the capabilities of an ORM (batch updates, hierarchy queries, etc.)
Dynamic SQL is ideal for:
Variable search arguments or output columns (optional search conditions, pivot tables, IN clauses with user-specified values);
ORM implementations (most can use SPs, but can't be built entirely on them);
DDL and administrative scripts.
They solve different problems, really. Use whichever one is more appropriate to the task at hand, and don't restrict yourself to just one or the other. After you work on database code for a while you'll start to get a more intuitive feel for these things; you'll find yourself banging together some rat's nest of strings for a query and think, "this should really go in a stored procedure."
Final note: Because this question implies a certain level of inexperience with SQL, I feel obliged to say, don't forget that you still need to parameterize your queries when you write dynamic SQL. Parameters aren't just for stored procedures.
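For the dynamic SQL case, a minimal sketch of what parameterization looks like (T-SQL with sp_executesql; the table and parameter names are made up):

    -- Hypothetical sketch: an optional search condition, still fully parameterized.
    CREATE PROCEDURE dbo.SearchProducts
        @Category NVARCHAR(50) = NULL
    AS
    BEGIN
        DECLARE @sql NVARCHAR(MAX) = N'SELECT Id, Name FROM dbo.Products WHERE 1 = 1';

        IF @Category IS NOT NULL
            SET @sql += N' AND Category = @Category';

        -- The user's value travels as a parameter; it is never concatenated into the string.
        EXEC sp_executesql @sql, N'@Category NVARCHAR(50)', @Category = @Category;
    END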
Dynamic SQL is more flexible; the stored procedure approach makes your system more manageable.
The problem: we have one application, part of which is used by only a very small subset of the total users, and that part of the application runs off of a separate database as well. In a perfect world, the schemas of the two databases would be synced up, but such is not the case. Some migrations have been run on the smaller database, most haven't, and furthermore there is no revision number or similar by which to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we came up with the following possible plan of action, and I am wondering if anyone knows of any project which has already solved this problem:
What we would like to do is create an empty database from the schema of the large fully-migrated database, and then move all of the data from the smaller non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem specifically that no migrations have ever removed anything, only added.
Else, if there are other known solutions, I'd like to hear them as well.
You could use a schema comparison tool like Red-Gate's SQL Compare. You can synchronize the changes and not lose any data. I wrote about this and many alternative tools ranging widely in price here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
The nice thing is that most tools have trial versions. So, you can try them out for 14 days (fully functional) and only buy one if it meets your expectations. I can't speak for the other tools, but I've been using RG for years and it is a very capable and reliable tool.
(Updated 2012-06-23 to help prevent link-rot.)
Red-Gate's SQL Compare, as Aaron Bertrand mentions in his answer, is a very good option. However, if you are not permitted to purchase something, an option is to try something like:
1) For each database, script out all the tables, constraints, indexes, views, procedures, etc.
2) Run a diff, go through all the differences, and make sure that the small DB can accept them. If not, implement any changes (including data) necessary on the small DB so it can accept them (a minimal sketch of one way to spot missing columns follows after this list).
3) Create a new empty database from the schema of the large DB.
4) Import the data from the small DB into the new DB.
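As a starting point for step 2, a hedged sketch of finding columns that exist in the big database but not the small one (SQL Server syntax; the database names BigDB and SmallDB are placeholders):

    -- Hypothetical sketch: columns present in BigDB but missing from SmallDB.
    SELECT big.TABLE_NAME, big.COLUMN_NAME
    FROM BigDB.INFORMATION_SCHEMA.COLUMNS AS big
    LEFT JOIN SmallDB.INFORMATION_SCHEMA.COLUMNS AS small
           ON  small.TABLE_NAME  = big.TABLE_NAME
           AND small.COLUMN_NAME = big.COLUMN_NAME
    WHERE small.COLUMN_NAME IS NULL
    ORDER BY big.TABLE_NAME, big.COLUMN_NAME;

This only catches column-level drift; constraints, indexes, and procedures still need the scripted diff from step 1.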
You could also reverse-engineer your database into Visual Studio as a database project. Visual Studio Team Suite Database Edition GDR R2 (I know, long name) has the capability to do a schema comparison and a data comparison, but the beauty of this approach is that you get all of your database into a nice database project where you can manage change and integrate with source control. This allows you to build from a common source and deploy consistent changes.
I had a (friendly but heated) argument with my lead developer the other day because our project has T-SQL scripts that I code directly into SQL files, which I then run against the database. I find that when I do this, it's easy to work out the schema in advance without fiddly pointing and clicking, and there's no opportunity to forget to generate a script to put into source control: generating the script is no longer a chore you have to do after the fact but an implicit part of the process (it also leads to cleaner scripts without the extra crap that SQL Server Management Studio inserts into the scripts it generates).
My lead developer insists that having to manually script it out is a pain in the arse and that he absolutely refuses to write his scripts by hand when there are perfectly good tools to do it without coding. I've noticed that the copying of his changes into the actual scripts tends to get delayed a bit as a result though.
What are your thoughts on the pros and/or cons of doing it one way vs the other? Am I being too rigid/old-school in my sticking to hand coding schema scripts or is he being too reliant on third party tools and losing something in the process?
I always script stuff myself because the wizards sometimes don't script things the way I like them to, and they give funky names to defaults.
Scripting things yourself is also good practice in case you get laid off and have to go to an interview where they ask you to script DDL on a whiteboard.
As I usually collaborate with a colleague during schema design, I tend to design the schema using the GUI tools, as it's easier to discuss with a diagram of the tables in front of you. I then generate the scripts, being careful to select the exact options I want so as to avoid having to make manual changes post-export.
I think a decision on the relative merits of the two approaches might take into account factors such as
the frequency of changes to the schema
the frequency with which changes need to be propagated to other schemas (test, user acceptance, production, clients * n, etc)
the degree to which the schema may vary across development branches
how well-known in advance your various changes can be scheduled
whether or not you can generate SQL "diff" scripts between schemas.
On balance, I tend to prefer to work with a script for each change (or "migration"). It lets me resequence change releases as priorities shift.
Just because you can create tables in a graphical tool doesn't necessarily mean you should.
I find it's as quick to write a script as it is to use SQL Server Management Studio. You still have to type names in the GUI, and the time spent moving between keyboard and mouse could be spent writing the proper script anyway.
The two of you are almost working with two sets of code. Consistency seems to be a key factor in these types of decisions. In your case, if you create a script and your boss uses the GUI to add a field, how do you stay in sync? You can't use your script to rebuild the table without editing it (a chance for error).
Maybe he should pull rank and force you to format your scripts the same way the GUI creates them - just kidding.
I think you should flip a coin on it.
First off, I am not a DBA, but I do work in an environment where DBAs do tune/make changes in the production database from time to time in ways that do not cause the need for an application rebuild/redeployment. Usually these changes consist of reworking indexes, changing procs, and sometimes changing the table structure in minor ways (usually abstracted from the app via procs).
Obviously, a team should strive to catch performance problems with NHibernate before they get into production, using things like NHProf, SQL Profiler, and load tests. That being said, are there strategies that allow some amount of tweaking once the code is built and running in production? Using stored procedures 100% of the time seems like it would allow the most flexibility for the DBAs, but obviously that would really kill the efficiency of NHibernate. From what I've read, updatable views (in SQL Server) don't really work that well with NHibernate either (this may or may not be true).
I've read quite a bit about NHibernate and experimented with it over the years, but I have never put it into practice in a production environment. I have yet to come across a set of "best practices" to allow for maximum tweaking once deployed.
As an NHibernate user, how are you and your team dealing with issues if they arise in production? My production environment is made up of ASP.NET apps and SQL server, but I don't think the answers need to be restricted to that platform.
I am in a similar position, and in order to keep our DBA happy, I did the following:
Wrote some of the queries in HQL and others in SQL (especially the performance-sensitive ones).
Externalized those queries to files, one file per query.
When the app needs to execute one of these queries, it just loads the appropriate file, optionally runs it through a pre-processor, and executes it.
With this approach, the DBA could theoretically tweak the queries just by modifying those files. That's quite similar to having stored procedures.
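As an illustration, a hypothetical query file (the file name, table, and parameter are made up; :status is an NHibernate-style named parameter, bound by the data access layer rather than concatenated):

    -- users_by_status.sql: loaded by the app at runtime, tweakable by the DBA
    -- without an application redeploy.
    SELECT u.Id, u.UserName, u.CreatedAt
    FROM Users u
    WHERE u.Status = :status
    ORDER BY u.CreatedAt DESC;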
In practice, it's up to you to decide if you'll really give the DBA access to those files (if you catch my drift...)
IMHO the DBA should just use the DBMS's profiling tools and report her findings back to the devs (as in, "there's this query that runs 20 times/sec and does 10 joins; is that really necessary? Can it be cached? Do you really need all those joins? Can we denormalize this?", etc.).
I'm not in the deploy phase yet, but on my current project I've come up against this already and my solution presently has been to replace my queries with stored procs. As long as the shape of the data coming back from the DB remains the same it's not a big deal. Yeah you do lose some of that agility you enjoyed during development but I'm not sure it's as bad as it initially sounds. You'll have a code push when you first make the change of course, and then from that point it's just proc changes.
You can use a profiler like NHProf to see the SQL queries being executed, so you can show them to a DBA. This tool can also detect some problems, like the N+1 selects issue.
Using a second-level cache can also be useful: http://web.archive.org/web/20110514214657/http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/11/09/first-and-second-level-caching-in-nhibernate.aspx
When should I be using stored procedures instead of just writing the logic directly in my application? I'd like to reap the benefits of stored procedures, but I'd also like to not have my application logic spread out over the database and the application.
Are there any rules of thumb that you can think of in reference to this?
Wow... I'm going to swim directly against the current here and say, "almost always". There is a laundry list of reasons, some or many of which I'm sure others would argue with. But I've developed apps both with and without stored procs as the data access layer, and it has been my experience that well-written stored procedures make it much easier to write your application. Then there are the well-documented performance and security benefits.
This depends entirely on your environment. The answer to the question really isn't a coding problem, or even an analysis issue, but a business decision.
If your database supports just one application, and is reasonably tightly integrated with it, then it's better, for reasons of flexibility, to place your logic inside your application program. Under these circumstances, handling the database simply as a plain data repository using common functionality loses you little and gains flexibility with vendors, implementation, deployment and much else, and many of the purist arguments that the 'databases are for data' crowd make are demonstrably true.
On the other hand, if you are handling a corporate database, which can generally be identified by having multiple access paths into it, then it is highly advisable to screw down the security as far as you can. At the very least, all appropriate constraints should be enabled, and if possible, access to the data should be through views and procedures only. Whining programmers should be ignored in these cases as...
With a corporate database the asset is valuable and invalid data or actions can have business-threatening consequences. Your primary concern is safeguarding the business, not how convenient access is for your coders.
Such databases are by definition accessed by more than one application. You need to use the abstraction that stored procedures offer, so the database can be changed when application A is upgraded and you don't have the resources to upgrade application B.
Similarly, the encapsulation of business logic in SPs rather than in application code allows changes to such logic to be implemented across the business more easily and reliably than if that logic were embedded in application code. For example, if a tax calculation changes, it's less work, and more robust, if the calculation has to be changed in one SP rather than in multiple applications. The rule of thumb here is that a business rule should be implemented at the closest point to the data at which it is unique: if you have a specialist application, then the logic for that app can be implemented in that app, but logic more widely applicable to the business should be implemented in SPs.
Coders who dive into religious wars over the use or non-use of SPs have generally worked in only one environment or the other, so they extrapolate their limited experience into a cast-iron position, which indeed will be perfectly defensible and correct in the context they come from but misses the big picture. As always, you should make your decision based on the needs of the business/customers/users and not on which type of coding methodology you prefer.
I tend to avoid stored procedures. The debugging tools tend to be more primitive. Error reporting can be harder (vs your server's log file) and, to me at least, it just seems to add another language for no real gain.
There are cases where it can be useful, particularly when processing large amounts of data on the server and of course for database triggers that you can't do in code.
Other than that though, I tend to do everything in code and treat the database as a big dump of data rather than something I run code on.
Consider Who Needs Stored Procedures, Anyways?:
For modern databases and real world usage scenarios, I believe a Stored Procedure architecture has serious downsides and little practical benefit. Stored Procedures should be considered database assembly language: for use in only the most performance critical situations.
and Why I do not use Stored Procedures:
The absolute worst thing you can do, and it's horrifyingly common in the Microsoft development world, is to split related functionality between sproc's and middle tier code. Grrrrrrrr. You just make the code brittle and you increase the intellectual overhead of understanding a system.
I said this in a comment, but I'm going to say it again here.
Security, Security, SECURITY.
When SQL code is embedded in your application, you have to expose the underlying tables to direct access. This might sound okay at first, until you get hit with some SQL injection that scrambles all the varchar fields in your database.
Some people might say that they get around this by using magic quotes or some other way of properly escaping their embedded SQL. The problem, though, is the one query a dev didn't escape correctly. Or the dev who forgot to disallow code uploads. Or the web server that was cracked, allowing the attacker to upload code. Or... you get the point. It's hard to cover all your bases.
My point is, all modern databases have security built in. You can simply deny direct table access (selects, inserts, updates, and deletes) and force everything to go through your s'procs. By doing so, generic attacks will no longer work; instead, the attacker would have to take the time to learn the intimate details of your system. This increases their "cost" in terms of time spent and stops drive-by and worm attacks.
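As a rough sketch of what that looks like (T-SQL; the object and user names are hypothetical):

    -- The application login can only execute the procs, never touch the tables.
    DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Customers TO app_user;
    GRANT EXECUTE ON dbo.GetCustomer    TO app_user;
    GRANT EXECUTE ON dbo.UpdateCustomer TO app_user;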
I know we can't secure ourselves against everything, but if you take the time to architect your apps so that the cost to crack them far outweighs the benefits, you are going to seriously reduce your potential for data loss. That means taking advantage of all the security tools available to you.
Finally, as to the idea of not using s'procs because you might have to port to a different RDBMS: first, most apps don't change database servers. Second, in the event that it's a real possibility, you have to code using ANSI SQL anyway, which you can do in your procs. Third, you would have to re-evaluate all of your SQL code no matter what, and that's a whole lot easier when the code is in one place. Fourth, all modern databases now support s'procs. Fifth, when using s'procs you can custom-tune your SQL for the database it's running on, taking advantage of that particular database's SQL extensions.
Basically, use them when you have to perform operations involving data that do not need to leave the database. For example, if you want to update one table with data from another, it makes little sense to pull the data out and push it back in when you can do it all in one single shot to the DB.
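For example, a sketch of doing it in one statement (T-SQL syntax; the table names are made up):

    -- One round trip: copy the name across without pulling rows into the app.
    UPDATE o
    SET    o.CustomerName = c.Name
    FROM   dbo.Orders o
    JOIN   dbo.Customers c ON c.CustomerId = o.CustomerId
    WHERE  o.CustomerName IS NULL;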
Another situation where it may be acceptable to use stored procedures is when you are 100% sure you will never deploy your application to another database vendor. If you are an Oracle shop and you have lots of applications talking to the same database it may make sense to have stored procedures to make sure all of them talk to the db in a consistent manner.
Complicated database queries, for me, tend to end up as stored procs. Another thought to consider is that your database might be completely separate and distinct from the application. Let's say you run an Oracle DB and you essentially are building an API for other application developers at your organization to call into. You can hide the complicated stuff from them and provide a stored proc in its place.
A very simple example:
registerUser(username, password)
might end up running a few different queries (check if it exists, create entries in a preference table, etc) and you might want to encapsulate them.
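A minimal sketch of that encapsulation (T-SQL; the schema is hypothetical and assumes UserId is an identity column):

    CREATE PROCEDURE dbo.RegisterUser
        @UserName NVARCHAR(50),
        @PasswordHash VARBINARY(64)  -- store a hash, never the raw password
    AS
    BEGIN
        IF EXISTS (SELECT 1 FROM dbo.Users WHERE UserName = @UserName)
        BEGIN
            RAISERROR('User already exists.', 16, 1);
            RETURN;
        END

        INSERT INTO dbo.Users (UserName, PasswordHash)
        VALUES (@UserName, @PasswordHash);

        -- The preference row the caller never needs to know about.
        INSERT INTO dbo.Preferences (UserId, Theme)
        VALUES (SCOPE_IDENTITY(), 'default');
    END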
Of course, different people will have different perspectives (a DBA versus a Programmer).
I use stored procs in one of three scenarios:
Speed
When speed is of the utmost importance, stored procedures are an excellent option.
Complexity
When I'm updating several tables and the code logic might change down the road, I can update the stored proc and avoid a recompile of the application. Stored procedures are an excellent black-box method for updating lots of data in a single stroke.
Transactions
When I'm performing an insert, delete, or update that spans multiple tables, I wrap the whole thing in a transaction. If there is an error, it's very easy to roll back the transaction and throw an error to avoid data corruption.
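A hedged sketch of that pattern (T-SQL, SQL Server 2012+ for THROW; the tables are hypothetical):

    CREATE PROCEDURE dbo.TransferStock
        @FromWarehouse INT, @ToWarehouse INT, @ProductId INT, @Qty INT
    AS
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;

            UPDATE dbo.Stock SET Quantity = Quantity - @Qty
            WHERE WarehouseId = @FromWarehouse AND ProductId = @ProductId;

            UPDATE dbo.Stock SET Quantity = Quantity + @Qty
            WHERE WarehouseId = @ToWarehouse AND ProductId = @ProductId;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            THROW;  -- rethrow so the caller sees the original error
        END CATCH
    END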
The bottom two are very doable in code. However, stored procedures provide a black-box method of working when complex, transaction-level operations are important. Otherwise, stick with code-level database operations.
Security used to be one of the reasons. However, with LINQ and other ORMs out there, code level DAL operations are much more secure than they've been in the past. Stored procs ARE secure but so are ORMs like LINQ.
We use stored procedures for all of our reporting needs. They can usually retrieve the data faster and in a way that the report can just spit out directly instead of having to do any kind of calculations or similar.
We also will use stored procedures for complex or complicated queries we need to do that would be difficult to read if they were otherwise inside of our codebase.
It can also be very useful as a matter of encapsulation and in the philosophy of DRY. For instance, I use stored functions for calculations on a table's data that I need in several queries. This way I get the better performance as well as the assurance that the calculation is always done the same way.
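For instance, a minimal sketch of such a function (T-SQL; the calculation and names are invented for illustration):

    -- One definition of the calculation, reused by every query that needs it.
    CREATE FUNCTION dbo.LineTotal (@Qty INT, @UnitPrice DECIMAL(10,2), @Discount DECIMAL(4,3))
    RETURNS DECIMAL(12,2)
    AS
    BEGIN
        RETURN @Qty * @UnitPrice * (1 - @Discount);
    END

    -- Usage: SELECT dbo.LineTotal(Quantity, UnitPrice, Discount) FROM dbo.OrderLines;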
I would not use them for higher-level functionality or logic that should live in the business logic layer of an architecture, but for things focused on the model layer, where the functionality is clearly tied to the database design and to the flexibility of changing the database design without breaking the API to the other layers.
I tend to always use stored procedures. Personally, I find it makes everything easier to maintain. Then there is the security and performance considerations.
Just make sure you write clean, well laid out and well documented stored procedures.
When all the code is in stored procs, it is far easier to refactor the database when needed. Changes to logic are far easier to push as well. It is also far easier to performance-tune, and sooner or later performance tuning becomes necessary for most database applications.
From my experience, stored procedures can be very useful for building reporting databases and pipelines. However, I'd argue that you should avoid using stored procedures within applications: they can impede a team's velocity, and any security risks from building queries within an application can be mitigated by the use of modern tooling and frameworks.
Why might we avoid it?
To avoid tight-coupling between applications and databases. If we use stored procedures, we won't be able to easily change our underlying database in the future because we'd have to either:
Migrate stored procedures from one database (e.g. DB2) to another (e.g. SQL Server) which could be painstakingly time-consuming or...
Migrate all the queries to the applications themselves (or potentially in a shared library)
Because code-first is a thing. There are several ORMs which can enable us to target any database and even manage the table schemas without ever needing to touch the database. ORMs such as Entity Framework or Dapper allow developers to focus on building features instead of writing stored procedures and wiring them up in the application.
It's yet another thing that developers need to learn in order to be productive. Instead, they can write the queries as part of the applications which makes the queries far simpler to understand, maintain, and modify by the developers who are building new features and/or fixing bugs.
Ultimately, it depends on what developers are most comfortable with.
If a developer has a heavy SQL background, they might go with Stored Procs.
If a developer has lots of app development experience, they might prefer queries in code. Personally, I think having queries in code can enable developers to move much faster and security concerns can be mitigated by ensuring teams are following best practices (e.g. parameterized queries, ORM). Stored procs aren't a "silver bullet" for system security.
Does the use of stored procedures still make sense in 202X?
Maybe in low-level and rare scenarios, or if we are writing code for a legacy company with unusual restrictions, stored procedures may still be an option.
If the entire logic lives in the database, do I need a DBA to change it?
No. On modern platforms, requiring a DBA in order to change business logic is not acceptable, and hot modification of stored procedures, without dev or staging phases, is a crazy idea.
How easy is it to maintain a procedure with dozens of lines, cursors, and other low-level database features, versus OOP code in any modern language that a junior developer is able to maintain?
This answers itself.
Hiding tables from my development team for security reasons sounds crazy to me, in these times in which agility and good documentation are everything.
A modern development team with a modern database should not have to lean on that for security; what's more, they need access to a sandbox version of the database to reduce the turnaround on their deliverables.
With modern ORMs, ESBs, ETLs, and the constant increase in CPU power, stored procedures are no longer compelling. Should I invest time and money in these tools only to end up with one big stored procedure?
Of course not.
On top of the speed and security considerations, I tend to put as much in stored procedures as possible, for ease of maintenance and alteration. If you put the logic in your application and later find that the SQL logic has an error or needs to work differently, in many cases you have to recompile and redeploy the whole app (especially if it's a client-side app such as WPF or WinForms). If you keep the logic in the stored proc, all you have to do is update the proc; you never have to touch the application.
I agree that they should be used often and well.
The use case I think is extremely compelling and useful is when you are taking in a lot of raw information that should be separated out into several tables, where some of the data may have records that already exist and need to be connected by foreign key ID. Then you can just do IF EXISTS checks and insert if the record doesn't exist, or return the key if it does, which makes everything more uniform, succinct, and maintainable in the long run.
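A rough sketch of that pattern (T-SQL; the names are hypothetical, and a production version would need to consider concurrent inserts):

    CREATE PROCEDURE dbo.GetOrCreateTag
        @Name NVARCHAR(50),
        @TagId INT OUTPUT
    AS
    BEGIN
        IF EXISTS (SELECT 1 FROM dbo.Tags WHERE Name = @Name)
            SELECT @TagId = TagId FROM dbo.Tags WHERE Name = @Name;
        ELSE
        BEGIN
            INSERT INTO dbo.Tags (Name) VALUES (@Name);
            SET @TagId = SCOPE_IDENTITY();  -- assumes TagId is an identity column
        END
    END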
The only case where I would suggest against using them is if you are doing a lot of logic or number crunching between queries which is best done in the app server OR if you are working for a company where keeping all of the logic in the code is important for maintainability/understanding what is happening. If you have a git repository full of everything anyone would need and is easily understandable, that can be very valuable.
Stored procedures are a way of collecting operations that should be done together while still keeping them on the database side.
This includes:
Populating several tables from one rowsource
Checking several tables against different business rules
Performing operations that cannot be efficiently performed using a set-based approach
etc.
The main problem with stored procedures is that they are hard to maintain.
You, therefore, should make stored procedures as easy to maintain as all your other code.
I have an article on this in my blog:
Schema junk
I've had some very bad experiences with this.
I'm not opposed to stored procedures in their place, but gratuitous use of stored procedures can be very expensive.
First, stored procedures run on the database server. That means that if you have a multi-server environment with 50 web servers and one database server, instead of spreading workloads over 50 cheap machines, you load up the one expensive one (since the database server is commonly built as a heavyweight server). And you risk creating a single point of failure.
Secondly, it's not very easy to write an application solely in stored procedures, although I ran into one that made a superhuman effort to try. So you end up with something that's expensive to maintain: it's implemented in two different programming languages, and the source code is often not all in one place either, since stored procedures are definitively stored in the DBMS and not in a source archive (assuming anyone ever managed, or bothered, to pull them out of the database server and source-archive them at all).
So aside from a fairly messy app architecture, you also limit the set of qualified chimpanzees who can maintain it, as multiple skills are required.
On the other hand, stored procedures are extremely useful, IF:
You need to maintain some sort of data integrity across multiple systems. That is, the stored logic doesn't belong to any single app, but you need consistent behavior from all participating apps. A certain amount of this is almost inevitable in modern-day apps in the form of foreign keys and triggers, but occasionally, major editing and validation may be warranted as well.
You need performance that can only be achieved by running logic on the database server itself and not as a client. But, as I said, when you do that, you're eating into the total system resources of the DBMS server. So it behooves you to ensure that if there are significant bits of the offending operation that CAN be offloaded onto clients, you can separate them out and leave the most critical stuff for the DBMS server.
A particular scenario where you're likely to benefit is the "SELECT N+1" scalability problem. Any kind of multidimensional or hierarchical situation is likely to involve it.
Another scenario involves use cases that follow a defined protocol when handling the tables (defined steps in which transactions are likely to be involved); these can benefit from locality of reference: being on the server, the queries might run faster. On the other hand, you could supply a batch of statements directly to the server, especially when you're in an XA environment and have to access federated databases.
If you are talking about business logic, rather than just "should I use sprocs in general", I would say you should put business logic in sprocs when you are carrying out large set-based operations, or any other time executing the logic in the app would require a large number of calls to the DB.
It also depends on your audience. Is ease of installation and portability across DBMSs important to you?
If your program should be easy to install and run on different database systems, then you should stay away from stored procedures and also watch out for non-portable SQL in your code.