How do I slow down a SQL Server VIEW?

Is it possible to use WAITFOR in a SQL Server View?
I have a view that is being referenced by a few applications and I need to introduce some slowness to the view to do some application testing (to test how well they handle it).
Unfortunately, I can't find a way to edit this view and make it run slowly.

There's no way that you can make a SQL view have a delay. It doesn't even really make sense to do so.
If you really want to introduce some wait time, you could theoretically build a stored procedure that sleeps, then returns the results of your query.
Alternatively, you could introduce the slow down into your application logic (which makes more sense).
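For illustration, a minimal sketch of that "sleep, then select" idea (dbo.MyView and the five-second delay are placeholders, not objects from your database) might look like this:
CREATE PROCEDURE dbo.MyView_Slow
AS
BEGIN
    SET NOCOUNT ON;
    WAITFOR DELAY '00:00:05';   -- artificial pause before returning anything
    SELECT * FROM dbo.MyView;   -- same columns the applications already expect
END
The applications would then have to be pointed at the procedure instead of the view, which is part of why the application-side approach may be simpler.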

A VIEW is not a program - in fact SQL DML statements themselves are not programs at all: they do not represent a series of instructions. Instead they're a representation of relational algebra, so the idea of having a delay in them is meaningless.
I note that a VIEW is ultimately always referenced by a SELECT statement. You can add delays around the SELECT if it's inside a PROCEDURE or non-inline FUNCTION, but as others have said, you cannot add a delay between rows in a SQL Server result set.
I have a view that is being referenced by a few applications and I need to introduce some slowness to the view to do some application testing (to test how well they handle it).
I think a better way of testing this is to use actual testing tools, such as a mocking, stub, or fakes system.
Assuming it's a .NET system you're targeting, you could, for example, wrap the reader in a DbDataReader decorator that awaits Task.Delay(100) inside its ReadAsync override.
If you don't have any way of modifying the application's source code, you could use a network speed limiter to artificially reduce your computer's network speed to less than a kilobyte per second.
Another approach might be to write a custom proxy that detects and parses TDS (Tabular Data Stream, SQL Server's wire protocol) - much as a Wireshark dissector does - and relays it as a new server while inserting delays of its own. This may be the best approach for a long-term solution as it lends itself well to other projects - you or your company could sell it as a database latency testing tool and make a nice little earning from it.
Finally, you could switch from a VIEW to a CURSOR that has a WAITFOR DELAY step between each FETCH instruction - but this may require changing the application source code extensively, and for little gain, as queries generated from stateless components (inline functions, views, and SELECT) will always have a better runtime execution plan and provable correctness. (Cursors are a legacy from xBase-style databases; avoid them unless your query cannot be expressed as a SELECT.)
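If you do go down the cursor route, a rough sketch (the view name and columns here are invented) could look something like this:
DECLARE @Id int, @Name nvarchar(100);

DECLARE slow_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id, Name FROM dbo.MyView;

OPEN slow_cursor;
FETCH NEXT FROM slow_cursor INTO @Id, @Name;

WHILE @@FETCH_STATUS = 0
BEGIN
    WAITFOR DELAY '00:00:00.100';   -- pause between rows
    -- hand @Id and @Name on to the caller here, e.g. by inserting into a temp table
    FETCH NEXT FROM slow_cursor INTO @Id, @Name;
END

CLOSE slow_cursor;
DEALLOCATE slow_cursor;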

No, this is not possible.
You could use a stored procedure instead to achieve this functionality.

Related

SQL Reporting Use stored proc or query/view

I'm a software developer, not a T-SQL or DBA expert - that's just background. One of my applications uses a lot of SQL views for reporting purposes; at this stage (it might change) the Windows application executes the view and I display the data in a grid/table for reporting purposes. The views are becoming more and more complex and slower - that's one problem. I'm in the process of re-designing the application to use a web front-end for reporting. But my question is: what is the best approach with reports in terms of SQL - should my reports be based on stored procedures or views? Any other comments or advice on SQL reporting are welcome. Like I mentioned, I'm a software developer and I try to stay clear of SQL work, but this has become an issue and I thought this is a good time to sharpen my SQL knowledge.
Thank you for reading.
Stored procedures (SPs) are a better choice than views, but views are much better than SQL queries embedded in reports. I know you didn't mention embedded SQL but I'm going to discuss it briefly to give a more rounded answer.
When you embed a SQL query in a report (or an application or anything outside of the database) you are assuming that all of the objects referenced are not going to change in any way. This is firstly a big assumption (and assumptions are bad) and secondly a crippling restriction on the database owner - they can't change anything because it might break something somewhere.
When you use an SP or a view to access a database you make the reasonable assumption that the name of the object you are calling (the SP or view) won't change and that any parameter set will remain constant or at least stay compatible. Both approaches hide away the logic of the query from the caller - the logic can be corrected and improved over time without affecting the caller. The entire database can be refactored or even redesigned as long as the name of the exposed object (and any parameters) remain the same, and the caller will never know.
The advantage of using an SP over a view is that you can do far more. For example it's a good idea to validate that parameter values are within expected ranges. If you have a particularly complex query you can break it down into smaller steps, using temp tables for example. Moving on to very heavy queries you could even do interim maintenance steps in an SP, updating stats for example.
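As a rough illustration of that last point (all object names here are invented), a reporting SP might validate its inputs and stage work in a temp table like this:
CREATE PROCEDURE dbo.SalesReport
    @StartDate date,
    @EndDate   date
AS
BEGIN
    SET NOCOUNT ON;

    -- validate that parameter values are within expected ranges
    IF @StartDate > @EndDate
    BEGIN
        RAISERROR('@StartDate must not be later than @EndDate.', 16, 1);
        RETURN;
    END

    -- break the complex query into smaller steps using a temp table
    SELECT CustomerId, SUM(Amount) AS Total
    INTO #CustomerTotals
    FROM dbo.Orders
    WHERE OrderDate BETWEEN @StartDate AND @EndDate
    GROUP BY CustomerId;

    SELECT c.Name, t.Total
    FROM #CustomerTotals t
    JOIN dbo.Customers c ON c.CustomerId = t.CustomerId
    ORDER BY t.Total DESC;
END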
I would recommend using SPs for all database access. You might not need to now, but it will give you much more scope to change things in the future if you need to.

How to avoid SQL statements spreading everywhere in your app?

I have a medium-sized app written in Ruby which makes pretty heavy use of an RDBMS. As our code grows, I've found that ugly SQL statements are spreading into all of the modules and methods in my app and getting embedded in a lot of the application logic. I am not sure if this is bad; however, my gut tells me this is quite ugly...
So, generally, in any language, how do you manage your SQL statements? Or do you think it is harmful for maintainability to let many SQL statements be embedded in the application logic? Why or why not?
Thanks.
SQL is a language for accessing databases. Often, it gets confused as being the API into the data store for a larger application. In fact, you should design a real API between the data store and the app.
This means several things.
For accessing data stored in tables, you want to go through views in the database, rather than directly access the tables.
For data modification steps, you want to wrap insert/update/delete in stored procedures. This has secondary benefits, where you can handle constraints and triggers in the stored procedure and better log what is happening.
For security, you want to include database security as part of your security architecture. Giving all users full access may not be the best approach.
Unfortunately, it is easy to write a simple app that uses a database directly, whether in java or ruby or VBA or whatever. This grows into a bigger app, and then the maintenance problems arise.
I would suggest an incremental approach to fixing this. Go through the code and create views where you have nasty select statements. You'll probably find you need many fewer views than selects (the views can be re-used -- a good thing).
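For example (table and view names invented), a nasty repeated SELECT can be hidden behind a view like this:
CREATE VIEW dbo.ActiveCustomerOrders
AS
SELECT c.CustomerId, c.Name, o.OrderId, o.OrderDate, o.Amount
FROM dbo.Customers c
JOIN dbo.Orders o ON o.CustomerId = c.CustomerId
WHERE c.IsActive = 1;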
Find places where code is being modified, and change these to stored procedures. I always return status from the stored procedure for error checking and put log information into a table called something like splog or _spcalls.
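A rough sketch of that pattern (the splog columns and the Customers table are made up for illustration) might be:
CREATE PROCEDURE dbo.Customer_UpdateEmail
    @CustomerId int,
    @Email      nvarchar(256)
AS
BEGIN
    SET NOCOUNT ON;

    -- log the call
    INSERT INTO dbo.splog (ProcName, CalledAt, Detail)
    VALUES ('Customer_UpdateEmail', SYSUTCDATETIME(), CONCAT('CustomerId=', @CustomerId));

    UPDATE dbo.Customers
    SET Email = @Email
    WHERE CustomerId = @CustomerId;

    IF @@ROWCOUNT = 0
        RETURN 1;   -- nothing updated: the caller checks this status

    RETURN 0;       -- success
END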
If you want to limit permissions for different users of your app, then you might be interested in this.
Leaving the raw SQL statements in the code is a problem. Just wait until you want to rename a column and you have to find all the places where this breaks the code.
Yes, this is not optimal - maintenance becomes a nightmare; it's hard to forecast and determine which code must change when underlying DB changes occur. This is why it is good practice to create a data access layer (DAL) to encapsulate CRUD operations from the application logic. There is often a business logic layer (BLL) between the application logic and the DAL to enforce business rules/logic.
Google "data access layer" "business logic layer" and even "n-tier architecture" to learn more.
If you are concerned about the SQL statements littered around your application logic, maybe consider implementing them as Stored Procedures?
That way you will only be including the procedure name and any parameters that need to be passed to it in your code.
It has other benefits too, a common one being easier to re-use in multiple files.
There is much debate about the speed and security of Stored Procedures and you will never get a definitive answer about that, so I won't even open that can of worms.
Here is how you do this with Java: Create a class that encapsulates all access to the database. Add a method to the class for each query you need to run.
The answer for ruby will be similar to this.
It depends on the architecture of your application, but a simple solution is to keep each SQL statement in its own file, e.g. qry.sql. For each Ruby module (or whatever is used in Ruby to aggregate related code) you can keep a folder of SQL files like these. So the collection of SQL folders/files forms the data access layer of your application, and the Ruby code provides the business layer. If your data model changes (field names, etc.), you can grep to identify the SQL files that need changes. Anyway, definitely separate SQL from your logic code.

Micro ORM - maintaining your SQL query strings

I will not go into the details of why I am exploring the use of Micro ORMs at this stage - except to say that I feel powerless when I use a full-blown ORM. There are too many things going on in the background that happen automatically, and not all of them are the best possible choices. I was quite ready to go back to raw database access, but I found out about the three new guys on the block: Dapper, PetaPoco and Massive. So I decided to give the low-level approach a go with a pet project. It is not relevant, but so far, I am using PetaPoco.
In any case, I am having trouble deciding how to go about maintaining the SQL strings that I will use from the higher levels. There are three main solutions that I can think of:
Sprinkle the SQL queries wherever I need them. This is the least infrastructure heavy method. However, it suffers in both maintainability and testability areas.
Limit the query usage to some service classes. This helps maintainability, is still low on infrastructure I need to implement. It may also be possible to build these service classes such that it would be easy to mock for testing purposes.
Prepare some classes to make the system somewhat flexible. I have started on this path. I implemented a Repository interface, and a database-dependent Repository class. I have also built some tiny interfaces to capture SQL queries that can be passed to my Repository's GetMany() method. All the queries are implemented as individual classes right now, and I will probably need a little more interface around this to add some level of database independence - and maybe for some flexibility in decorating queries into paged and sorted queries (again, this would also make them a little bit more flexible in handling different databases).
What I am mainly worried about right now is that I have entered the slippery slope of writing all the functions needed for a full-blown ORM, but badly. For example, it feels sensible right now that I write or find a library to convert LINQ calls into SQL statements so that I can massage my queries easily, or write extenders that can decorate any query I pass to them, etc. But that is a large task, and is already done by the big guys, so I am resisting the urge to go there. I also want to retain control over what queries I send to the database - by explicitly writing them.
So what is the suggestion? Should I go #2 option, or try to stumble along on option #3? I am certain I cannot show any code written in the first option to anyone without blushing. Is there any other approach you can recommend?
EDIT: After I asked the question, I realized there is another option, somewhat orthogonal to these three options: stored procedures. There seem to be a few advantages to putting all your queries inside the database as stored procedures. They are kept in a central location, and not spread through the code (though maintenance is an issue - the parameters may get out of sync). The reliance on database dialect is solved automatically: if you move databases, you port all your stored procedures, and you are done. And there are also the security benefits.
With the stored procedure option, the alternatives 1 and 2 seem a little bit more suitable. There seems to be not enough entities to warrant option 3 - but it is still possible to separate the procedure call commands from database accessing code.
I've implemented option 3 without stored procedures, and option 2 with stored procedures, and it seems like the latter is more suitable for me (in case anyone is interested with the outcome of the question).
I would say put the sql where you would have put the equivalent LINQ query, or the sql for DataContext.ExecuteQuery. As for where that is... well, that is up to you and depends on how much separation you want. - Marc Gravell, creator of Dapper
See Marc's opinion on the matter
I think the key point is, you shouldn't really be re-using the SQL. If your logic is re-used then it should be wrapped in a method that can be called from multiple places.
I know you've accepted your answer already but I still wanted to show you a nice alternative that may be helpful in your case as well. Now or in the future.
When using stored procedures it's wise to use T4
I tend to use stored procedures on my project even though it's not using PetaPoco, Dapper or Massive (project started before these were here). It uses BLToolkit instead. Anyway. Instead of writing my methods to run stored procedures and write code to provide stored procedure parameters, I've written a T4 template that generates the code for me.
Whenever stored procedures change (some may be added/removed, parameters added/removed/renamed/retyped), my code will break on compilation because method calls will not match their signature any more.
I keep my stored procedures in a file (so they get version controlled). If you work in a multi-developer team it may be sensible to have each stored procedure in its own file. It makes updates much less painful. I've experienced that on some projects and it worked OK as long as the number of SPs is not huge. You can restructure them into folders based on the entity they're related to.
Anyway: maintenance happens on the stored procedures; the code change is just a simple click of a button in Visual Studio that converts all T4s at once. You don't have to search for the methods that use those procedures - you'll get errors reported while compiling. One less thing to worry about.
So instead of writing
using (var db = new DbManager())
{
return db
.SetSpCommand(
"Person_SaveWithRelations",
db.Parameter("#Name", name),
db.Parameter("#Email", email),
db.Parameter("#Birth", birth),
db.Parameter("#ExternalID", exId))
.ExecuteObject<Person>();
}
and having a bunch of magic strings I can just simply write:
using (var db = new DataManager())
{
return db
.Person
.SaveWithRelations(name, email, birth, exId)
.ExecuteObject<Person>();
}
This is nicer and cleaner, breaks at compile time, and provides IntelliSense, so it's also faster to work with while developing.
The good thing is that stored procedures may become very complex and may do many things. In the example above I check some data, insert the person record and some related records as well, and in the end return the newly inserted Person record. Inserts and updates should usually return the data that was added/changed to reflect the actual state.

Dynamic sql vs stored procedures - pros and cons?

I have read many strong views, both for and against, on SPs vs. dynamic SQL (DS).
I am writing a query engine in C++ (MySQL backend for now, though I may decide to go with a C++ ORM). I can't decide whether to write SPs or to dynamically create the SQL and send the query to the db engine.
Any tips on how to decide?
Here's the simple answer:
If your programmers do both database and coding work, keep the SQL with the app. It's easier to maintain that way. Otherwise, let the DB guys handle it in SPs.
You have more control over the mechanisms outside the database. The biggest win for taking care of this outside the database is simply maintenance (in my mind). It'd be slightly hard to version control the SP vs the code you generate outside the database. One more thing to keep track of.
While we're on the topic, it's similar to handling data/schema migrations. It's annoyingly complex to version/handle schema migrations, if you don't already have a mechanism for this, you will have yet another thing you'll need to manage. It comes down to simply being easier to manage/version these things outside the database.
Consider the scenario where you have a bug in your SP. Now it needs to be changed, but then you hop over to another developers database/sandbox. What version is the sandbox and the SP? Now you have to track multiple versions.
One of the main differentiators is whether you are writing the "one true front end" or whether the database is the central piece of your application.
If you are going to have multiple front ends, stored procedures make a lot of sense because you reduce your maintenance overhead. If you are writing only one interface, stored procedures are a pain, because you lose a lot of flexibility in changing your data set as your front end needs change, plus you now have to do code maintenance, version control, etc. in two places. Databases are a real pain to keep in sync with code repositories.
Finally, if you are coding for multiple databases (Oracle and SQL compatible code, for example), I'd avoid stored procedures completely.
You may in certain rare circumstances, after profiling, determine that some limited stored procedures are useful to you. This situation comes up way less than people think it does.
The main scenarios when you MUST have the SP is:
1) When you have a very complex set of queries with heavy compile overhead and data drift low enough that recompiling is not needed on a regular basis.
2) When the "Only True" logic for accessing the specific data set is VERY complicated, needs to be accessed from several different codebases on different platforms (so writing multiple APIs in code is much more expensive).
Any other scenario, it's debatable, and can be decided one way or another.
I must also say that the other posters' arguments about versioning are not really such a big deal in my experience - having your SPs in version control is as easy as creating a "sql/db_name" directory structure and having an easy, basic "database release" script which releases the SP code from the version-control location to the database. Every company I worked for had some kind of setup like this, either a central one run by DBAs or a departmental one run by developers.
The one thing you want to avoid is to have your business logic spread across multiple tiers of your application. Database DDL and DML are difficult enough to keep in sync with an application code base as it is.
My recommendation is to create a good relational schema and put all your constraints and triggers in place so that the data retains integrity even if somebody goes to the database and tries to do something through some command-line SQL.
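A small sketch of that kind of declarative integrity (the table and constraint names here are invented):
CREATE TABLE dbo.Orders
(
    OrderId    int IDENTITY PRIMARY KEY,
    CustomerId int NOT NULL
        REFERENCES dbo.Customers (CustomerId),              -- foreign key
    Quantity   int NOT NULL
        CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0), -- business-rule check
    OrderDate  datetime2 NOT NULL
        CONSTRAINT DF_Orders_OrderDate DEFAULT SYSUTCDATETIME()
);
These rules hold even if somebody bypasses the application and edits data from a command-line session.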
Put all your business logic in an application or service that calls (static/dynamic) SQL and then wraps the business functionality you are trying to expose.
Stored procedures have two purposes that I can think of.
An aid to simplifying data access: the stored procedure does not have any business logic in it, it just knows about the structure of the data and exposes an interface to isolate accessing three tables and a view just to get a single piece of information.
Mapping the Domain Model to the Data Model: stored procedures can assist in making the Data Model look like a given Domain Model.
After the program has been completed and profiled, there are often performance issues with the pre-1.0 release. Stored procedures do offer batching of SQL without traffic needing to go back and forth between the DBMS and the application. That being said, in rare and extreme cases a few business rules might need to be migrated to the stored-procedure side for performance. Make sure to document any exceptions to the architectural philosophy in multiple prominent places.
Stored Procedures are ideal for:
Creating reusable abstractions over complex queries;
Enforcing specific types of insertions/updates to tables (if you also deny permissions to the table);
Performing privileged operations that the logged-in user wouldn't normally be allowed to do;
Guaranteeing a consistent execution plan;
Extending the capabilities of an ORM (batch updates, hierarchy queries, etc.)
Dynamic SQL is ideal for:
Variable search arguments or output columns:
Optional search conditions
Pivot tables
IN clauses with user-specified values
ORM implementations (most can use SPs, but can't be built entirely on them);
DDL and administrative scripts.
They solve different problems, really. Use whichever one is more appropriate to the task at hand, and don't restrict yourself to just one or the other. After you work on database code for a while you'll start to get a more intuitive feel for these things; you'll find yourself banging together some rat's nest of strings for a query and think, "this should really go in a stored procedure."
Final note: Because this question implies a certain level of inexperience with SQL, I feel obliged to say, don't forget that you still need to parameterize your queries when you write dynamic SQL. Parameters aren't just for stored procedures.
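For example, a hedged sketch of parameterized dynamic SQL with optional search conditions (the Customers table and filters are invented), using sp_executesql so the values travel as parameters rather than being concatenated into the string:
DECLARE @Name nvarchar(100) = NULL,           -- optional filters supplied by the caller
        @City nvarchar(100) = N'Portland';

DECLARE @sql nvarchar(max) = N'SELECT CustomerId, Name, City
FROM dbo.Customers
WHERE 1 = 1';

IF @Name IS NOT NULL SET @sql += N' AND Name = @Name';
IF @City IS NOT NULL SET @sql += N' AND City = @City';

EXEC sys.sp_executesql
    @sql,
    N'@Name nvarchar(100), @City nvarchar(100)',
    @Name = @Name,
    @City = @City;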
DS is more flexible. The SP approach makes your system more manageable.

When should I use stored procedures?

When should I be using stored procedures instead of just writing the logic directly in my application? I'd like to reap the benefits of stored procedures, but I'd also like to not have my application logic spread out over the database and the application.
Are there any rules of thumb that you can think of in reference to this?
Wow... I'm going to swim directly against the current here and say, "almost always". There's a laundry list of reasons - some/many of which I'm sure others would argue. But I've developed apps both with and without the use of stored procs as a data access layer, and it has been my experience that well-written stored procedures make it so much easier to write your application. Then there's the well-documented performance and security benefits.
This depends entirely on your environment. The answer to the question really isn't a coding problem, or even an analysis issue, but a business decision.
If your database supports just one application, and is reasonably tightly integrated with it, then it's better, for reasons of flexibility, to place your logic inside your application program. Under these circumstances handling the database simply as a plain data repository using common functionality loses you little and gains flexibility - with vendors, implementation, deployment and much else - and many of the purist arguments that the 'databases are for data' crowd make are demonstrably true.
On the other hand, if you are handling a corporate database, which can generally be identified by having multiple access paths into it, then it is highly advisable to screw down the security as far as you can. At the very least all appropriate constraints should be enabled, and if possible access to the data should be through views and procedures only. Whining programmers should be ignored in these cases as...
With a corporate database the asset is valuable and invalid data or actions can have business-threatening consequences. Your primary concern is safeguarding the business, not how convenient access is for your coders.
Such databases are by definition accessed by more than one application. You need to use the abstraction that stored procedures offer so the database can be changed when application A is upgraded and you don't have the resource to upgrade application B.
Similarly the encapsulation of business logic in SPs rather than in application code allows changes to such logic to be implemented across the business more easily and reliably than if such logic is embedded in application code. For example, if a tax calculation changes it's less work, and more robust, if the calculation has to be changed in one SP rather than in multiple applications. The rule of thumb here is that the business rule should be implemented at the closest point to the data where it is unique - so if you have a specialist application then the logic for that app can be implemented in that app, but logic more widely applicable to the business should be implemented in SPs.
Coders who dive into religious wars over the use or not of SPs generally have worked in only one environment or the other, so they extrapolate their limited experience into a cast-iron position - which indeed will be perfectly defensible and correct in the context from which they come, but misses the big picture. As always, you should make your decision on the needs of the business/customers/users and not on which type of coding methodology you prefer.
I tend to avoid stored procedures. The debugging tools tend to be more primitive. Error reporting can be harder (vs your server's log file) and, to me at least, it just seems to add another language for no real gain.
There are cases where it can be useful, particularly when processing large amounts of data on the server and of course for database triggers that you can't do in code.
Other than that though, I tend to do everything in code and treat the database as a big dump of data rather than something I run code on.
Consider Who Needs Stored Procedures, Anyways?:
For modern databases and real world usage scenarios, I believe a Stored Procedure architecture has serious downsides and little practical benefit. Stored Procedures should be considered database assembly language: for use in only the most performance critical situations.
and Why I do not use Stored Procedures:
The absolute worst thing you can do, and it's horrifyingly common in the Microsoft development world, is to split related functionality between sproc's and middle tier code. Grrrrrrrr. You just make the code brittle and you increase the intellectual overhead of understanding a system.
I said this in a comment, but I'm going to say it again here.
Security, Security, SECURITY.
When SQL code is embedded in your application, you have to expose the underlying tables to direct access. This might sound okay at first - until you get hit with some SQL injection that scrambles all the varchar fields in your database.
Some people might say that they get around this by using magic quotes or some other way of properly escaping their embedded SQL. The problem, though, is the one query a dev didn't escape correctly. Or the dev who forgot to disallow code uploads. Or the web server that was cracked, which allowed the attacker to upload code. Or... you get the point. It's hard to cover all your bases.
My point is, all modern databases have security built in. You can simply deny direct table access (select, insert, update, and deletes) and force everything to go through your s'procs. By doing so generic attacks will no longer work. Instead the attacker would have to take the time to learn the intimate details of your system. This increases their "cost" in terms of time spent and stops drive by and worm attacks.
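As a rough illustration of that locked-down model (the role and schema names are placeholders), the grants can be as simple as:
CREATE ROLE app_role;

-- no direct table access for the application login...
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_role;

-- ...only execution of the stored procedures
GRANT EXECUTE ON SCHEMA::dbo TO app_role;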
I know we can't secure ourselves against everything, but if you take the time to architect your apps so that the cost to crack them far outweighs the benefits, then you are going to seriously reduce your potential for data loss. That means taking advantage of all the security tools available to you.
Finally, as to the idea of not using s'procs because you might have to port to a different rdbms: First, most apps don't change database servers. Second, in the event that it's a real possibility, you have to code using ANSI sql anyway; which you can do in your procs. Third, you would have to reevaluate all of your sql code no matter what and it's a whole lot easier when that code is in one place. Fourth, all modern databases now support s'procs. Fifth, when using s'proc's you can custom tune your sql for the database it's running under to take advantage of that particular database's sql extensions.
Basically when you have to perform operations involving data that do not need to get out of the database. For example, you want to update one table with data from another, it makes little sense to get the data out and then back in if you can do it all in one single shot to the db.
Another situation where it may be acceptable to use stored procedures is when you are 100% sure you will never deploy your application to another database vendor. If you are an Oracle shop and you have lots of applications talking to the same database it may make sense to have stored procedures to make sure all of them talk to the db in a consistent manner.
Complicated database queries for me tend to end up as stored procs. Another thought to consider is that your database might be completely separate and distinct from the application. Let's say you run an Oracle DB and you are essentially building an API for other application developers at your organization to call into. You can hide the complicated stuff from them and provide a stored proc in its place.
A very simple example:
registerUser(username, password)
might end up running a few different queries (check if it exists, create entries in a preference table, etc) and you might want to encapsulate them.
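Sketched in T-SQL for consistency with the rest of this page (the table names and duplicate check are invented), such a procedure could look like:
CREATE PROCEDURE dbo.RegisterUser
    @Username     nvarchar(100),
    @PasswordHash varbinary(64)
AS
BEGIN
    SET NOCOUNT ON;

    -- check if it exists
    IF EXISTS (SELECT 1 FROM dbo.Users WHERE Username = @Username)
    BEGIN
        RAISERROR('Username already taken.', 16, 1);
        RETURN;
    END

    INSERT INTO dbo.Users (Username, PasswordHash)
    VALUES (@Username, @PasswordHash);

    DECLARE @UserId int = SCOPE_IDENTITY();

    -- create entries in a preference table
    INSERT INTO dbo.UserPreferences (UserId)
    VALUES (@UserId);

    SELECT @UserId AS UserId;
END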
Of course, different people will have different perspectives (a DBA versus a Programmer).
I use stored procs in one of three scenarios:
Speed
When speed is of the utmost importance, stored procedures provide an excellent method
Complexity
When I'm updating several tables and the code logic might change down the road, I can update the stored proc and avoid a recompile. Stored procedures are an excellent black box method for updating lots of data in a single stroke.
Transactions
When I'm working on an insert, delete, or update that spans multiple tables, I wrap the whole thing in a transaction. If there is an error, it's very easy to roll back the transaction and throw an error to avoid data corruption.
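A possible shape for that kind of procedure (the tables here are invented), using TRY/CATCH so any failure rolls everything back:
CREATE PROCEDURE dbo.MoveOrder
    @OrderId       int,
    @NewCustomerId int
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.Orders
        SET CustomerId = @NewCustomerId
        WHERE OrderId = @OrderId;

        INSERT INTO dbo.OrderHistory (OrderId, ChangedAt, Note)
        VALUES (@OrderId, SYSUTCDATETIME(), 'Reassigned to new customer');

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        THROW;   -- surface the original error to the caller
    END CATCH
END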
The bottom two are very doable in code. However, stored procedures provide a black-box method of working when complex and transaction-level operations are important. Otherwise, stick with code-level database operations.
Security used to be one of the reasons. However, with LINQ and other ORMs out there, code level DAL operations are much more secure than they've been in the past. Stored procs ARE secure but so are ORMs like LINQ.
We use stored procedures for all of our reporting needs. They can usually retrieve the data faster and in a way that the report can just spit out directly instead of having to do any kind of calculations or similar.
We also will use stored procedures for complex or complicated queries we need to do that would be difficult to read if they were otherwise inside of our codebase.
It can also be very useful as a matter of encapsulation and in the philosophy of DRY. For instance, I use stored functions for calculations inside a table that I need for several queries inside the code. This way I get the better performance as well as ensuring that the calculation is always done the same way.
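For instance (an invented calculation, just to show the shape), a stored function keeps the formula in one place:
CREATE FUNCTION dbo.LineTotal
(
    @Quantity    int,
    @UnitPrice   decimal(10, 2),
    @DiscountPct decimal(5, 2)
)
RETURNS decimal(12, 2)
AS
BEGIN
    -- every query that needs a line total computes it exactly the same way
    RETURN @Quantity * @UnitPrice * (1 - @DiscountPct / 100);
END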
I would not use it for higher functionality or logic that should be in the business logic layer of an architecture, but would keep it focused on the model layer, where the functionality is clearly tied to the database design and the possible flexibility of changing the database design without breaking the API to the other layers.
I tend to always use stored procedures. Personally, I find it makes everything easier to maintain. Then there is the security and performance considerations.
Just make sure you write clean, well laid out and well documented stored procedures.
When all the code is in a stored proc, it is far easier to refactor the database when needed. Changes to logic are far easier to push as well. It is also far far easier to performance tune and sooner or later performance tuning becomes necessary for most database applications.
From my experience, stored procedures can be very useful for building reporting databases/pipelines, however, I'd argue that you should avoid using stored procedures within applications as they can impede a team's velocity and any security risks of building queries within an application can be mitigated by the use of modern tooling/frameworks.
Why might we avoid it?
To avoid tight-coupling between applications and databases. If we use stored procedures, we won't be able to easily change our underlying database in the future because we'd have to either:
Migrate stored procedures from one database (e.g. DB2) to another (e.g. SQL Server) which could be painstakingly time-consuming or...
Migrate all the queries to the applications themselves (or potentially in a shared library)
Because code-first is a thing. There are several ORMs which can enable us to target any database and even manage the table schemas without ever needing to touch the database. ORMs such as Entity Framework or Dapper allow developers to focus on building features instead of writing stored procedures and wiring them up in the application.
It's yet another thing that developers need to learn in order to be productive. Instead, they can write the queries as part of the applications which makes the queries far simpler to understand, maintain, and modify by the developers who are building new features and/or fixing bugs.
Ultimately, it depends on what developers are most comfortable with.
If a developer has a heavy SQL background, they might go with Stored Procs.
If a developer has lots of app development experience, they might prefer queries in code. Personally, I think having queries in code can enable developers to move much faster and security concerns can be mitigated by ensuring teams are following best practices (e.g. parameterized queries, ORM). Stored procs aren't a "silver bullet" for system security.
Does the use of procedures still make sense in 202X?
Maybe in low-level and rare scenarios, or if we write code for legacy companies with unfounded restrictions, stored procedures should be an option.
If entire logic is in the database, should I need a dba to change it?
No. In modern platforms, the requirement of a DBA to change the business logic is not an option.
Hot modification of stored procedures without dev or staging phases is a crazy idea.
How easy is it to maintain a procedure with dozens of lines, cursors and other low-level database features, compared with OOP objects in any modern language that a junior developer is able to maintain?
This answers itself.
Hiding tables from my development team for security reasons sounds crazy to me, in these times in which agility and good documentation are everything.
A modern development team with a modern database should not need to worry about this kind of security. What's more, they need access to a sandbox version of the database to reduce the time of their deliverables.
With modern ORMs, ESBs, ETLs and the constant increase in CPU power, stored procedures are not an option anymore. Should I invest time and money in these tools, only to end up with one big stored procedure?
Of course not.
On top of the speed and security considerations, I tend to stick as much in Stored Procedures as possible for ease of maintenance and alterations. If you put the logic in your application, and find later that sql logic has an error or needs to work differently in some manner, you have to recompile and redeploy the whole app in many cases (especially if it's a client side app such as WPF, Win-Forms, etc). If you keep the logic in the stored proc, all you have to do is update the proc and you never have to touch the application.
I agree that they should be used often and well.
The use case I think is extremely compelling and useful is when you are taking in a lot of raw information that should be separated out into several tables, and some of the data may have records that already exist and need to be connected by foreign key id. You can simply do IF EXISTS checks and insert if the record doesn't exist, or return its key if it does, which makes everything more uniform, succinct, and maintainable in the long run.
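A sketch of that "look it up, insert if missing, return the key" pattern (dbo.Tags is an invented example table):
CREATE PROCEDURE dbo.GetOrCreateTag
    @TagName nvarchar(100),
    @TagId   int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT @TagId = TagId FROM dbo.Tags WHERE TagName = @TagName;

    IF @TagId IS NULL
    BEGIN
        INSERT INTO dbo.Tags (TagName) VALUES (@TagName);
        SET @TagId = SCOPE_IDENTITY();
    END
END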
The only case where I would suggest against using them is if you are doing a lot of logic or number crunching between queries which is best done in the app server OR if you are working for a company where keeping all of the logic in the code is important for maintainability/understanding what is happening. If you have a git repository full of everything anyone would need and is easily understandable, that can be very valuable.
Stored procedures are a method of collecting operations that should be done together, while still keeping them on the database side.
This includes:
Populating several tables from one rowsource
Checking several tables against different business rules
Performing operations that cannot be efficiently performed using a set-based approach
etc.
The main problem with stored procedures is that they are hard to maintain.
You, therefore, should make stored procedures as easy to maintain as all your other code.
I have an article on this in my blog:
Schema junk
I've had some very bad experiences with this.
I'm not opposed to stored procedures in their place, but gratuitous use of stored procedures can be very expensive.
First, stored procedures run on the database server. That means that if you have a multi-server environment with 50 webservers and one database server, instead of spreading workloads over 50 cheap machines, you load up one expensive one (since the database server is commonly built as a heavyweight server). And you're risking creating a single-point-of-failure.
Secondly, it's not very easy to write an application solely in stored procedures, although I ran into one that made a superhuman effort to try to. So you end up with something that's expensive to maintain: it's implemented in 2 different programming languages, and the source code is often not all in one place either, since stored procedures are definitively stored in the DBMS and not in a source archive. Assuming that someone ever managed/bothered to pull them out of the database server and source-archive them at all.
So aside from a fairly messy app architecture, you also limit the set of qualified chimpanzees who can maintain it, as multiple skills are required.
On the other hand, stored procedures are extremely useful, IF:
You need to maintain some sort of data integrity across multiple systems. That is, the stored logic doesn't belong to any single app, but you need consistent behavior from all participating apps. A certain amount of this is almost inevitable in modern-day apps in the form of foreign keys and triggers, but occasionally, major editing and validation may be warranted as well.
You need performance that can only be achieved by running logic on the database server itself and not as a client. But, as I said, when you do that, you're eating into the total system resources of the DBMS server. So it behooves you to ensure that if there are significant bits of the offending operation that CAN be offloaded onto clients, you can separate them out and leave the most critical stuff for the DBMS server.
A particular scenario where you're likely to benefit is the "n+1" scalability problem. Any kind of multidimensional/hierarchical situation is likely to involve this scenario.
Another scenario involves use cases where a defined protocol is followed when handling the tables (that is, a defined sequence of steps in which transactions are likely to be involved); this could benefit from locality of reference: being in the server, the queries might benefit. On the other hand, you could supply a batch of statements directly to the server, especially when you're in an XA environment and you have to access federated databases.
If you are talking business logic rather than just "Should I use sprocs in general?", I would say you should put business logic in sprocs when you are carrying out large set-based operations or at any other time when executing the logic would require a large number of calls to the db from the app.
It also depends on your audience. Is ease of installation and portability across DBMSs important to you?
If your program should be easy to install and easy to run on different database systems then you should stay away from stored procedures and also look out for non-portable SQL in your code.