SQL Performance - Views vs Stored Procedures

I have a statement that takes around 15 seconds to load, which is way too long. I would like to see the best way to 'cache' this data in memory. Would I use some kind of view or stored procedure for this? I'm aware I can use triggers and another table, but I would like to avoid that at all costs; there is quite a bit of memory to spare.
Any suggestions?

You could check out indexed views (usually called materialized views in other RDBMS).
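For reference, an indexed view in SQL Server is a schema-bound view with a unique clustered index on it; SQL Server then maintains the materialized result automatically. A minimal sketch, with hypothetical table and column names (and assuming `TotalAmount` is declared NOT NULL, which indexed views require for SUM):

```sql
-- Hypothetical schema: an orders table that is aggregated repeatedly.
-- The view must be schema-bound and deterministic to be indexable.
CREATE VIEW dbo.vw_OrderTotals
WITH SCHEMABINDING
AS
SELECT
    CustomerId,
    COUNT_BIG(*)     AS OrderCount,   -- COUNT_BIG is required in grouped indexed views
    SUM(TotalAmount) AS TotalSpent
FROM dbo.Orders
GROUP BY CustomerId;
GO

-- The unique clustered index materializes the view's result set;
-- SQL Server keeps it up to date as dbo.Orders changes.
CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderTotals
    ON dbo.vw_OrderTotals (CustomerId);
```

Note that on non-Enterprise editions, queries may need the `WITH (NOEXPAND)` hint on the view for the optimizer to use the index.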

Do you know why your query is taking 15 seconds to run? Is the query working off the correct indexes? As others have mentioned, running the same query within a stored procedure will produce the same performance, as the execution plan will be the same.
You might get better mileage out of using the SQL Query Optimizer and optimizing out the bottlenecks in your query. This is a good article on using the SQL Query Optimizer.
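As a first diagnostic step, you can ask SQL Server for timing and I/O statistics around the slow statement (the query below is a placeholder for your own):

```sql
SET STATISTICS TIME ON;   -- report parse/compile time vs. execution CPU/elapsed time
SET STATISTICS IO ON;     -- report logical/physical reads per table

-- Placeholder: substitute the slow 15-second query here.
SELECT OrderId, Total
FROM dbo.Orders
WHERE CustomerId = 42;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

High logical reads on one table in the output usually point at the missing or unused index.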

It all depends on your situation. Make sure to check your execution plan and try to avoid too many scans; you will get better performance. I hope this helps.

Related

stored procedure - function performance difference

I have a table-valued function with quite a lot of code inside, doing multiple join selects, calling sub-functions, and returning a result set. During the development of this function, at some point, I faced a performance degradation when executing it. Normally it shouldn't take more than 1 second, but it started taking about 10 seconds. I played a bit with joins and also indexes, but nothing changed dramatically.
After some time of changes and research, I wanted to see the results another way. I created the exact same code with the exact same parameters as a stored procedure, then executed the SP. Boom! It takes less than 1 second. The exact same code takes about 10 seconds as a function.
I really cannot figure out what this is all about, and I have no time to do more research. I need it as a function for certain reasons, but I don't know what to do at this point. I thought I could create it as a proc and then call it within the function, but then I realized that's not possible from a function.
I wanted to hear some good views and advice here from experts.
Thanks in advance.
PS: I did not add any code here, as the code is not in good shape and quite dirty. I would share it if anybody is interested. The server is SQL Server 2014 Enterprise 64-bit.
Edit: I saw the possible duplicate question before, but it did not satisfy me, as my question is specifically about the performance hit. The other question has many answers about general differences between procedures and functions. I want to keep the focus on possible performance-related differences.
These are the differences from my experience:
When you first started writing the function, you were likely running it with the same parameters again and again until it worked correctly. This enables page caching, in which SQL Server keeps the relevant data in memory.
Functions do not cache their execution plans. As you add more data, it takes longer to come up with a plan. Use SET STATISTICS TIME ON to see query compilation time vs. execution time.
Functions can only use table variables, and there are no statistics on those. That can make for some horrendous JOIN decisions later.
Some people prefer table-valued functions because they are easier to query:
SELECT * FROM fcn_myfunc(...) WHERE <some_conditions>
instead of creating a temp table, executing the stored procedure, and then filtering off that temp table. If your code is performance-critical, turn it into a stored procedure.
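To illustrate the two calling patterns (all object names here are invented): the function composes directly into a query, while the procedure's output has to land in a temp table before it can be filtered.

```sql
-- Pattern 1: inline use of a table-valued function.
SELECT t.*
FROM dbo.fcn_myfunc(@CustomerId) AS t
WHERE t.OrderDate >= '2020-01-01';

-- Pattern 2: stored procedure + temp table, then filter.
CREATE TABLE #Results (OrderId INT, CustomerId INT, OrderDate DATE, Total MONEY);

INSERT INTO #Results
EXEC dbo.usp_GetOrders @CustomerId = @CustomerId;

SELECT *
FROM #Results
WHERE OrderDate >= '2020-01-01';

DROP TABLE #Results;
```

The temp table in pattern 2 does get statistics, which is one reason the procedure version can pick better join plans than the function version.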

SQL alternative to a View

I don't really know how to even ask for what I need, so I'll try to explain my situation.
I have a rather simple SQL query that joins various tables, but I need to execute this query with slightly different conditions over and over again.
The execution time of the query is somewhere around 0.25 seconds, but all the queries I need to execute easily take 15 seconds in total.
This is way too long.
What I need is a table or view that holds the query results for me, so that I only need to select from this one table instead of joining large tables over and over again.
A view wouldn't really help, because it would just execute the same query over and over again, as far as I know.
Is there a way to have something like a view which holds its data as long as its source tables don't change, and will only re-execute the query if it is really necessary?
I think what you described fits very well with a materialized view with fast refresh on commit. However, your query needs to be eligible for fast refresh.
Another way is to use result_cache: it is automatically invalidated when one of the source tables changes. I would try both to decide which one suits this particular task better.
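In Oracle terms, the two suggestions look roughly like this (table names are hypothetical, and the query must meet Oracle's fast-refresh restrictions, which for join views include materialized view logs and rowids in the select list):

```sql
-- Materialized view logs, required for fast (incremental) refresh.
CREATE MATERIALIZED VIEW LOG ON orders    WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON customers WITH ROWID;

-- The join is refreshed incrementally whenever a transaction commits.
CREATE MATERIALIZED VIEW mv_order_summary
REFRESH FAST ON COMMIT
AS
SELECT c.ROWID AS c_rid, o.ROWID AS o_rid,
       c.customer_id, c.name, o.order_id, o.total
FROM   customers c, orders o
WHERE  o.customer_id = c.customer_id;

-- Alternative: let Oracle cache the result set; the cached result
-- is invalidated automatically when a source table changes.
SELECT /*+ RESULT_CACHE */ c.customer_id, SUM(o.total)
FROM   customers c JOIN orders o ON o.customer_id = c.customer_id
GROUP  BY c.customer_id;
```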
I would suggest table-valued functions for this purpose. Defining such a function requires coding in PL/SQL, but it is not that hard if the function is based on a single query.
You can think of such functions as a way of parameterizing views.
Here is a good place to start learning about them.
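A minimal sketch of such a function (all names hypothetical); the parameter plays the role that a view would otherwise have to hard-code into its WHERE clause:

```sql
-- SQL object and collection types for the rows being returned.
CREATE TYPE order_row AS OBJECT (order_id NUMBER, total NUMBER);
/
CREATE TYPE order_tab AS TABLE OF order_row;
/

-- Pipelined table function: effectively a parameterized view.
CREATE OR REPLACE FUNCTION orders_for(p_customer NUMBER)
RETURN order_tab PIPELINED IS
BEGIN
  FOR r IN (SELECT order_id, total
            FROM   orders
            WHERE  customer_id = p_customer) LOOP
    PIPE ROW (order_row(r.order_id, r.total));
  END LOOP;
  RETURN;
END;
/

-- Query it like a view:
SELECT * FROM TABLE(orders_for(42));
```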

Difference in execution of correlated sub query and in-line view?

When comparing a correlated subquery and an inline view, which executes faster? Can anyone explain which is best when considering SQL execution performance?
Thanks in advance.
Views are faster. The query is "pre-optimised" and the DBMS can cache your results. This will make the DBMS (in most cases) pull the results from a buffer and not access the hard disk to run the query.
If you index the view you will see even greater performance boosts.
Cheers.
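To make the comparison concrete, here is the same question written both ways (the schema is hypothetical); comparing the two execution plans on your own data is the only reliable way to know which form the optimizer handles better:

```sql
-- Correlated subquery: conceptually re-evaluated for every outer row.
SELECT e.name, e.salary
FROM employees e
WHERE e.salary > (SELECT AVG(e2.salary)
                  FROM employees e2
                  WHERE e2.dept_id = e.dept_id);

-- Inline view (derived table): each department average is computed once,
-- then joined back to the outer rows.
SELECT e.name, e.salary
FROM employees e
JOIN (SELECT dept_id, AVG(salary) AS avg_salary
      FROM employees
      GROUP BY dept_id) d
  ON d.dept_id = e.dept_id
WHERE e.salary > d.avg_salary;
```

Modern optimizers often rewrite one form into the other anyway, which is why measuring beats guessing here.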

Best to use SQL + JOINS or Stored Proc to combine results from multiple queries?

I am creating a table to summarize data that is gathered from about 8 or so queries that have very light logic/WHERE clauses and all select against different tables.
I was wondering what the best option would be to fetch the summarized data:
One query with multiple JOINS to gather all relevant information
A stored proc that encapsulates the logic and maybe executes the 8 queries and does the "joining" in some other way? This seems more modular and maintainable to me...but I'm not sure.
I am using SQL Server 2008 for this. Any suggestions?
If you can, then use the usual SQL methods; DBs are optimized to run them. This "joining in some other way" would probably require the use of cursors, which slow everything down. Just let the DB do its job. If you need more performance, then you should examine the execution plan and do what has to be done there (e.g. adding indexes).
Databases are pretty good at figuring out the optimal way of executing SQL. It is what they are designed to do. Using stored procedures to load the data in chunks and combining it yourself will be more complex to write, and likely to be less efficient than letting the database just do it for you.
If you are concerned about reusing a complex query in multiple places, consider creating a view of it instead.
Depending on the size of the tables, joining 8 of them could be pretty hairy. I would try it that way first - as others have said, the db is pretty good at figuring this stuff out. If the performance is not as good as you would like, I would try a stored proc which creates a table variable (or a temp table) and inserts the data from each of the 8 tables separately. Then you can return the contents of the table variable to your app.
This method also makes it a little easier to add the 9th, 10th, etc tables in the future. And it gives you an easy way to do any processing you may need on the summarized data before returning it to your app.
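The table-variable approach from that last answer might be sketched like this (names invented; only two of the eight source queries shown):

```sql
CREATE PROCEDURE dbo.usp_GetSummary
AS
BEGIN
    DECLARE @Summary TABLE (
        Source    VARCHAR(50),
        ItemCount INT,
        Total     MONEY
    );

    -- One lightweight query per source table; add more as needed.
    INSERT INTO @Summary
    SELECT 'Orders', COUNT(*), SUM(Total)
    FROM dbo.Orders
    WHERE Status = 'Open';

    INSERT INTO @Summary
    SELECT 'Invoices', COUNT(*), SUM(Amount)
    FROM dbo.Invoices
    WHERE Paid = 0;

    -- Any post-processing on the summarized data can happen here
    -- before the rows are returned to the app.
    SELECT * FROM @Summary;
END;
```

Adding a ninth source is then just one more INSERT, without touching an increasingly hairy multi-way JOIN.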

Performance Tuning SQL - How?

How does one performance tune a SQL Query?
What tricks/tools/concepts can be used to change the performance of a SQL Query?
How can the benefits be Quantified?
What does one need to be careful of?
What tricks/tools/concepts can be used to change the performance of a SQL Query?
Using Indexes? How do they work in practice?
Normalised vs denormalised data? What are the performance vs design/maintenance trade-offs?
Pre-processed intermediate tables? Created with triggers or batch jobs?
Restructure the query to use Temp Tables, Sub Queries, etc?
Separate complex queries into multiples and UNION the results?
Anything else?
How can performance be Quantified?
Reads?
CPU Time?
"% Query Cost" when different versions run together?
Anything else?
What does one need to be careful of?
Time to generate Execution Plans? (Stored Procs vs Inline Queries)
Stored Procs being forced to recompile
Testing on small data sets (Do the queries scale linearly, or square law, etc?)
Results of previous runs being cached
Optimising "normal case", but harming "worst case"
What is "Parameter Sniffing"?
Anything else?
Note to moderators:
This is a huge question, should I have split it up in to multiple questions?
Note To Responders:
Because this is a huge question please reference other questions/answers/articles rather than writing lengthy explanations.
I really like the book "Professional SQL Server 2005 Performance Tuning" to answer this. It's Wiley/Wrox, and no, I'm not an author, heh. But it explains a lot of the things you ask for here, plus hardware issues.
But yes, this question is way, way beyond the scope of something that can be answered in a comment box like this one.
Writing sargable queries is one of the things needed; if you don't write sargable queries, the optimizer can't take advantage of the indexes. Here is one example: Only In A Database Can You Get 1000%+ Improvement By Changing A Few Lines Of Code. That query went from over 24 hours to 36 seconds.
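A classic sargability example (hypothetical table): wrapping the indexed column in a function hides the index from the optimizer and forces a scan, while rewriting the predicate as a range lets it seek.

```sql
-- Non-sargable: the function on OrderDate defeats an index on that column.
SELECT OrderId
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2023;

-- Sargable: same result, but the indexed column stands alone on one side
-- of the comparison, so an index seek on OrderDate is possible.
SELECT OrderId
FROM dbo.Orders
WHERE OrderDate >= '2023-01-01'
  AND OrderDate <  '2024-01-01';
```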
Of course, you also need to know the difference between these three join types:
loop join,
hash join,
merge join
see here: http://msdn.microsoft.com/en-us/library/ms173815.aspx
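In SQL Server you can force each physical join type with a join hint, which is a handy way to compare the three plans side by side (hypothetical tables; in production it is usually better to let the optimizer choose):

```sql
SELECT o.OrderId, c.Name
FROM dbo.Orders o
INNER LOOP JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;   -- nested loops

SELECT o.OrderId, c.Name
FROM dbo.Orders o
INNER HASH JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;   -- hash match

SELECT o.OrderId, c.Name
FROM dbo.Orders o
INNER MERGE JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;  -- merge join
```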
Here are some basic steps to follow:
Define business requirements first
SELECT fields instead of using SELECT *
Avoid SELECT DISTINCT
Create joins with INNER JOIN (not WHERE)
Use WHERE instead of HAVING to define filters
Proper indexing
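Several of those steps in one hypothetical query: an explicit column list, ANSI INNER JOIN syntax, and WHERE (a row filter) kept separate from HAVING (a group filter).

```sql
SELECT c.Name, SUM(o.Total) AS TotalSpent   -- named columns, not SELECT *
FROM dbo.Customers AS c
INNER JOIN dbo.Orders AS o                  -- join in the FROM clause, not WHERE
        ON o.CustomerId = c.CustomerId
WHERE o.OrderDate >= '2023-01-01'           -- filter rows before grouping
GROUP BY c.Name
HAVING SUM(o.Total) > 1000;                 -- filter groups after aggregation
```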
Here are some basic steps we can follow to increase performance:
Check for indexes on the PKs and FKs of the tables involved; if it is still taking time, index the columns present in the query.
All indexes are modified after every write operation, so do not index each and every column.
Before a batch insertion, delete the indexes and then recreate them.
Select sparingly.
Use IF EXISTS instead of COUNT.
Before accusing the DBA, first check network connections.
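The "IF EXISTS instead of COUNT" point, spelled out (hypothetical table): EXISTS can stop at the first matching row, while COUNT(*) has to touch every one just to test for presence.

```sql
-- Slower pattern: counts all matches just to answer a yes/no question.
IF (SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42) > 0
    PRINT 'Customer has orders';

-- Faster pattern: stops as soon as one matching row is found.
IF EXISTS (SELECT 1 FROM dbo.Orders WHERE CustomerId = 42)
    PRINT 'Customer has orders';
```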