SQL View Question

Are there any negative effects of creating views, specifically large ones (50+ columns), on the database?

From a performance perspective there isn't really a negative effect per se. If the underlying query is inefficient or likely to cause undesirable locking, the view will inherit those problems, though somewhat mitigated by the fact that its plan is more likely to be cached than that of an arbitrary statement. Where the risk of negative effects comes in is that by creating a view and abstracting away the complexity of a query, you run the risk of it being used without a firm understanding of the characteristics of the underlying query. On the positive side, views also open up some different optimization possibilities (indexed views, etc.), and there are times when being able to de-normalize a set of data can significantly reduce the need to write redundant queries.
So, as with most tools, whether they help or harm is in the hands of the implementer.
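For example, here's a minimal sketch of the de-normalization point (table and column names are hypothetical): a view that flattens a commonly repeated join so callers don't have to rewrite it, while still inheriting whatever cost the join itself carries.
-- Hypothetical view that flattens a frequently repeated join.
-- Callers query dbo.OrderSummary instead of re-writing the join each time.
CREATE VIEW dbo.OrderSummary
AS
SELECT o.OrderId,
       o.OrderDate,
       c.CustomerId,
       c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId;
-- Typical usage; the optimizer expands the view inline:
-- SELECT OrderId, CustomerName FROM dbo.OrderSummary WHERE OrderDate >= '20200101';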

Depends if it's 1 column from 50 inner-joined tables or 50 columns from 1 table.
To be honest, either is fine, as long as you aren't using lots of scalar functions in them.
Come to think of it, this is a very subjective question. Paste some code ;)

A SELECT on a view (non-indexed) does something like:
SELECT Xyz
FROM (
    SELECT Abc, Xyz FROM yourbigtable  -- the view's definition, expanded inline
) AS v  -- SQL Server requires an alias on the derived table
So I'd say check the performance of the underlying queries first.
Try to solve the problem without views first, then simplify with a view.
Mike

In my experience a view gives you exactly the same performance as if you queried the physical tables directly.

If it's an indexed view it will take up more space in your db and slow down updates to records in the base tables.


Are performance/code-maintainability concerns surrounding SELECT * on MS SQL still relevant today, with modern ORMs?

Summary: I've seen a lot of advice against using SELECT * in MS SQL, due to both performance and maintainability concerns. However, many of these posts are very old - 5 to 10 years! It seems, from many of these posts, that the performance concerns may have actually been quite small, even in their time, and as for the maintainability concerns ("oh no, what if someone changes the columns, and you were getting data by indexing an array! your SELECT * would get you in trouble!"), modern coding practices and ORMs (such as Dapper) seem - at least in my experience - to eliminate such concerns.
And so: are there concerns with SELECT * that are still relevant today?
Greater context: I've started working at a place with a lot of old MS code (ASP scripts, and the like), and I've been helping to modernize a lot of it. However, most of my SQL experience is actually from MySQL and PHP frameworks and ORMs - this is my first time working with MS SQL - and I know there are subtle differences between the two. Also: my co-workers are a little older than I am, and have some concerns that - to me - seem "older" ("nullable fields are slow! avoid them!"). But again: in this particular field, they definitely have more experience than I do.
For this reason, I'd also like to ask: whether or not SELECT * with modern ORMs is safe and sane to do today, are there recent online resources which indicate such?
Thanks! :)
I will not touch maintainability in this answer, only performance part.
Performance in this context has little to do with ORMs.
It doesn't matter to the server how the query that it is running was generated, whether it was written by hand or generated by the ORM.
It is still a bad idea to select columns that you don't need.
It doesn't really matter from the performance point of view whether the query looks like:
SELECT * FROM Table
or all columns are listed there explicitly, like:
SELECT Col1, Col2, Col3 FROM Table
If you need just Col1, then make sure that you select only Col1. Whether it is achieved by writing the query by hand or by fine-tuning your ORM, it doesn't matter.
Why selecting unnecessary columns is a bad idea:
extra bytes to read from disk
extra bytes to transfer over the network
extra bytes to parse on the client
But the most important reason is that the optimiser may not be able to generate a good plan. For example, if there is a covering index that includes all requested columns, the server will usually read just this index; but if you request more columns, it will do extra lookups, use some other index, or just scan the whole table. The final impact can vary from negligible to seconds versus hours of run time. The larger and more complicated the database, the more likely you are to see a noticeable difference.
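A minimal sketch of the covering-index point (table, column, and index names are hypothetical): a query that asks only for the indexed column can be answered from the narrow index, while SELECT * cannot.
-- Hypothetical narrow index:
CREATE INDEX IX_BigTable_Col1 ON dbo.BigTable (Col1);

-- Can usually be satisfied entirely from IX_BigTable_Col1 (an index-only read):
SELECT Col1 FROM dbo.BigTable WHERE Col1 = 'x';

-- Forces key lookups into the base table or a full scan, because the
-- narrow index does not contain the other columns:
SELECT * FROM dbo.BigTable WHERE Col1 = 'x';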
There is a detailed article on this topic, "Myth: Select * is bad", on the Use The Index, Luke website.
Now that we have established a common understanding of why selecting everything is bad for performance, you may ask why it is listed as a myth? It's because many people think the star is the bad thing. Further they believe they are not committing this crime because their ORM lists all columns by name anyway. In fact, the crime is to select all columns without thinking about it—and most ORMs readily commit this crime on behalf of their users.
I'll add answers to your comments here.
I have no idea how to approach an ORM that doesn't give me an option of which fields to select. I personally would try not to use it. In general, an ORM adds a layer of abstraction that leaks badly. https://en.wikipedia.org/wiki/Leaky_abstraction
It means that you still need to know how to write SQL code and how the DBMS runs this code, but you also need to know how the ORM works and generates this code. If you choose not to know what's going on behind the ORM, you'll have unexplainable performance problems when your system grows beyond the trivial.
You said that at your previous job you used an ORM for a large system without problems. It worked for you. Good. I have a feeling, though, that your database was not really large (did you have billions of rows?) and that the nature of the system allowed performance questions to be hidden behind a cache (this is not always possible). The system may never grow beyond the hardware capacity. If your data fits in cache, it will usually be reasonably fast in any case. It begins to matter only when you cross a certain threshold, after which everything suddenly becomes slow and it is hard to fix.
It is common for a business/project manager to ignore possible future problems that may never happen; the business always has more pressing, urgent issues to deal with. If the business/system grows enough that performance becomes a problem, it will either have accumulated enough resources to refactor the whole system, or continue working with increasing inefficiency, or, if the system happens to be really critical to the business, just fail and give another company a chance to overtake it.
Answering your question "whether to use ORMs in applications where performance is a large concern": of course you can use an ORM. But you may find it more difficult than not using one. With an ORM and performance in mind, you have to manually inspect the SQL code that the ORM generates and make sure that it is good code from a performance point of view. So, you still need to know SQL and the specific DBMS that you use very well, and you need to know your ORM very well to make sure it generates the code that you want. Why not just write the code that you want directly?
You may think that this situation with ORM vs raw SQL somewhat resembles a highly optimising C++ compiler vs writing your code in assembler manually. Well, it does not. A modern C++ compiler will indeed, in most cases, generate code that is better than what you could write manually in assembler. But the compiler knows the processor very well, and the nature of the optimisation task is much simpler than what you have in the database. An ORM has no idea about the volume of your data and knows nothing about your data distribution.
The simple classic example of top-n-per-group can be done in two ways, and the best method depends on the data distribution that only the developer knows. If performance is important, even when you write SQL code by hand you have to know how the DBMS works and interprets this SQL code, and lay out your code in such a way that the DBMS accesses the data in an optimal way. SQL itself is a high-level abstraction that may require fine-tuning to get the best performance (for example, there are dozens of query hints in SQL Server). The DBMS has some statistics and its optimiser tries to use them, but they are often not enough.
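To illustrate the top-n-per-group point, here are the two common shapes in T-SQL (table and column names are hypothetical); which one is faster depends on how many groups there are and how the rows are distributed across them.
-- Method 1: window function, a single pass over the data.
-- Tends to win when most groups are touched anyway.
SELECT CustomerId, OrderId, OrderDate
FROM (
    SELECT CustomerId, OrderId, OrderDate,
           ROW_NUMBER() OVER (PARTITION BY CustomerId
                              ORDER BY OrderDate DESC) AS rn
    FROM dbo.Orders
) AS ranked
WHERE rn <= 3;

-- Method 2: CROSS APPLY with TOP, one seek per group.
-- Tends to win when there are few groups and an index on (CustomerId, OrderDate DESC).
SELECT c.CustomerId, o.OrderId, o.OrderDate
FROM dbo.Customers AS c
CROSS APPLY (
    SELECT TOP (3) OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = c.CustomerId
    ORDER BY OrderDate DESC
) AS o;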
And now on top of this you add another layer of ORM abstraction.
Having said all this, "performance" is a vague term. All these concerns become important only after a certain threshold. Since modern hardware is pretty good, this threshold has been pushed far enough to allow a lot of projects to ignore these concerns.
Example. An optimal query over a table with a million rows returns in 10 milliseconds. A non-optimal query returns in 1 second - 100 times slower. Would the end user notice? Maybe, but likely it is not critical. Grow the table to a billion rows, or instead of one user have 1,000 concurrent users: 1 second vs 100 seconds. The end user would definitely notice, even though the ratio (100 times slower) is the same. In practice the ratio would increase as the data grows, because various caches would become less and less useful.
From a SQL Server performance point of view, you should NEVER EVER use SELECT *, because it means SQL Server has to read the complete row from disk or RAM. Even if you need all the fields, I would suggest not doing SELECT *, because you do not know who may be appending data to the table that your application does NOT need. For details see the answer of #sandip-patel.
From a DBA perspective: if you list exactly the column names you need, the DBA can better analyse and optimize the database.
From an ORM point of view, with changing column names, I would suggest NOT using SELECT *. You WANT to know if the table changes. How do you want to guarantee that your application runs and gives correct results if you do not get errors when the underlying tables change?
Personal opinion: I really do not work with ORMs in applications that need to perform well...
This question has been out for some time now, and no one seems to be able to find what Ben is looking for...
I think that is because the answer is "it depends".
There just is NOT ONE answer to this.
Examples
As I pointed out before, if the database is not yours and it may be altered often, you cannot guarantee performance, because with SELECT * the amount of data per row may explode.
If you write an application using its OWN database, no one alters your DB (hopefully) and you need your columns, so what's wrong with SELECT *?
If you build some kind of lazy loading, with "main properties" being loaded instantly and others being loaded later (on the same entity), you cannot go with SELECT * because you would get everything.
If you use SELECT *, other developers will wonder every time "did he think about SELECT *?" as they try to optimize. So you should add enough comments...
If you build a 3-tier application with large caches in the middle tier, and performance is handled by the cache, you may use SELECT *.
Expanding on 3-tier: if you have many, many concurrent users and/or really big data, you should consider every single byte, because you have to scale up your middle tier for every byte being wasted (as someone pointed out in the comments before).
If you build a small app for 3 users and some thousands of records, the budget may not give time to optimize speed/DB layout/anything.
Speak to your DBA... HE will advise you WHICH statements have to be changed/optimized/stripped down/...
I could go on. There just is not ONE answer. It just depends on too many factors.
It is generally a better idea to select the column names explicitly. Should the table receive an extra column, it would still be loaded by a SELECT * call even though the extra column is not needed.
This can have several implications:
More network traffic
More I/O (got to read more data from disk)
Possibly even more I/O (a covering index cannot be used - a table scan is performed to get the data)
Possibly even more CPU (a covering index cannot be used so data needs sorting)
EXCEPTION: The only place where SELECT * is OK is in the sub-query of an EXISTS or NOT EXISTS predicate, as in:
SELECT colA, colB
FROM table1 t1
WHERE EXISTS (SELECT * FROM Table2 t2 WHERE t2.colA = t1.colA)
Maintainability point.
If you do a "Select * from Table"
Then I alter the Table and add a column.
Your old code will likely crash as it now has an additional column in it.
This creates a nightmare for future revisions, because you have to identify all the locations of the SELECT *.
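A common concrete case of that crash (table names are hypothetical): statements that rely on SELECT * producing a fixed column list break as soon as the source table grows a column.
-- Worked while dbo.Orders and dbo.OrdersArchive had identical columns:
INSERT INTO dbo.OrdersArchive
SELECT * FROM dbo.Orders WHERE OrderDate < '20150101';

-- After ALTER TABLE dbo.Orders ADD Notes nvarchar(max) NULL,
-- the statement above fails with a column-count mismatch.
-- Listing the columns keeps it working:
INSERT INTO dbo.OrdersArchive (OrderId, CustomerId, OrderDate)
SELECT OrderId, CustomerId, OrderDate
FROM dbo.Orders
WHERE OrderDate < '20150101';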
The speed difference is so minimal I would not be concerned about it. There is a speed difference between using varchar and char (char is faster), but it is so minimal it is just about not worth talking about.
Select *'s biggest issue is with changes (additions) to the table structure.
Maintainability nightmare - a sign of a junior programmer and of poor project code. That being said, I still use SELECT *, but I intend to remove it before I go to production with my code.

Does a column with a lot of data cause issues?

I have a table with, say, 10 columns, and 3 of them are text fields that hold some HTML. Because of these 3 fields the row size grows, which has increased the size of the table to more than 4 GB.
My question is whether these fields where we are storing large data affect the performance of the application. These columns aren't used in joins, but they do have their place in the table.
Will normalizing them improve the performance of the application?
I have to take it to senior colleagues, but before I go to them with the suggestion I just wanted to know if someone has tried doing this and whether or not it worked.
A properly implemented database (PostgreSQL, for instance) will only store a limited amount of data directly in the table, where it could affect performance. The remainder is stored separately, keeping only a reference, or maybe the starting fragment, directly in the table. Hence the impact on search performance may not be very big. Of course, when you retrieve the data, reading a really large column will surely not be faster.
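If you do decide to normalize, a minimal sketch of the idea (all names hypothetical) is to move the HTML columns into a 1:1 side table so the frequently scanned table stays narrow:
-- Narrow, frequently queried table:
CREATE TABLE dbo.Product (
    ProductId int NOT NULL PRIMARY KEY,
    Name      nvarchar(200) NOT NULL,
    Price     decimal(10, 2) NOT NULL
);

-- Wide HTML columns moved to a 1:1 side table, read only when needed:
CREATE TABLE dbo.ProductHtml (
    ProductId       int NOT NULL PRIMARY KEY
        REFERENCES dbo.Product (ProductId),
    DescriptionHtml nvarchar(max) NULL,
    SpecsHtml       nvarchar(max) NULL,
    ReviewsHtml     nvarchar(max) NULL
);

-- The detail page joins in the HTML for a single product only:
SELECT p.Name, p.Price, h.DescriptionHtml
FROM dbo.Product AS p
LEFT JOIN dbo.ProductHtml AS h ON h.ProductId = p.ProductId
WHERE p.ProductId = 42;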
I was also facing the same problem with one of my tables, but I solved it by creating indexes and separating out a table. Please read up on indexes and normalization; there are many ways to handle this.
Thanks.
You can only know if you try it out. In principle if you have proper indexes in the tables, it should be all fine. But it depends on your particular RDBMS implementation.

Is it bad to call views inside a view in SQL?

I have created 8 different views and I am using all of these views inside another view.
So I was wondering, before I go any further with this idea: does it affect performance too badly or not?
No, it's fine. In many cases I personally consider it preferable to writing one view with a giant and difficult to understand definition. In my opinion, using multiple views allows you to:
Encapsulate discrete logic in individual views.
Re-use logic in the individual views without having to repeat the logic (eliminating update problems later).
Name your logic so that it's easier for the next programmer to understand what you were trying to accomplish.
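A small sketch of that layering (all view and table names hypothetical): each inner view names one piece of logic, and the outer view composes them without repeating their filters.
-- Discrete, named pieces of logic:
CREATE VIEW dbo.ActiveCustomers AS
SELECT CustomerId, CustomerName
FROM dbo.Customers
WHERE IsActive = 1;
GO
CREATE VIEW dbo.PaidOrders AS
SELECT OrderId, CustomerId, Amount
FROM dbo.Orders
WHERE Status = 'Paid';
GO
-- Composed view reuses both definitions:
CREATE VIEW dbo.ActiveCustomerRevenue AS
SELECT c.CustomerId, c.CustomerName, SUM(o.Amount) AS Revenue
FROM dbo.ActiveCustomers AS c
JOIN dbo.PaidOrders AS o ON o.CustomerId = c.CustomerId
GROUP BY c.CustomerId, c.CustomerName;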
Views get "compiled" away during execution plan creation. Therefore there is only a very small penalty for using them: The extra time it takes SQL Server to look up the definition. Usually this delay is not measurable.
That means using views for the purposes mentioned by Larry Lustig is perfectly fine and to be encouraged.
HOWEVER: Make sure that you do not introduce unnecessary JOINs using this technique. While SQL Server has mechanisms to eliminate unneeded tables from a query, it quickly gives up if the query becomes too complex. Executing those additional JOINs can cause a significant slowdown. This is the reason many companies have a no-view rule in place.
So: Use views, but make sure to not misuse them.
It's not bad for performance just for being a view. It may add some complexity to maintenance and require additional consideration when you want to change the schema of the underlying tables. If you had several views that each joined to the same tables, I think that would be less efficient than joining to each table once in a single view.
I favour using nested views, with each view encapsulating and naming some cross section of data.
As for performance, it can actually improve performance if the alternative required that same data to be queried multiple times: A nested view is a bit like a temporary table - fired once.
The best, and recommended, way to discover performance implications is to try both options and examine the explain output.
The pure fact of querying a view from within a view does not have any negative performance implications. It is not different from querying a table from within a view.

Should I be concerned that ORMs, by default, return all columns?

In my limited experience in working with ORMs (so far LLBL Gen Pro and Entity Framework 4), I've noticed that inherently, queries return data for all columns. I know NHibernate is another popular ORM, and I'm not sure that this applies with it or not, but I would assume it does.
Of course, I know there are workarounds:
Create a SQL view and create models and mappings on the view
Use a stored procedure and create models and mappings on the result set returned
I know that adhering to certain practices can help mitigate this:
Ensuring your row counts are reasonably limited when selecting data
Ensuring your tables aren't excessively wide (large number of columns and/or large data types)
So here are my questions:
Are the above practices sufficient, or should I still consider finding ways to limit the number of columns returned?
Are there other ways to limit returned columns other than the ones I listed above?
How do you typically approach this in your projects?
Thanks in advance.
UPDATE: This sort of stems from the notion that SELECT * is thought of as a bad practice. See this discussion.
One of the reasons to use an ORM of nearly any kind is to delay a lot of those lower-level concerns and focus on the business logic. As long as you keep your joins reasonable and your table widths sane, ORMs are designed to make it easy to get data in and out, and that requires having the entire row available.
Personally, I consider issues like this premature optimization until encountering a specific case that bogs down because of table width.
First of all: great question, and about time someone asked this! :-)
Yes, the fact an ORM typically returns all columns for a database table is something you need to take into consideration when designing your systems. But as you've mentioned - there are ways around this.
The main fact for me is to be aware that this is what happens - either a SELECT * FROM dbo.YourTable, or (better) a SELECT (list of all columns) FROM dbo.YourTable.
This is not a problem when you really want the whole object and all its properties, and as long as you load a few rows, that's fine, too - the convenience beats the raw performance.
You might need to think about changing your database structures a little bit - things like:
maybe put large columns like BLOBs into separate tables with a 1:1 link to your base table - that way, a select on the parent tables doesn't grab all those large blobs of data
maybe put groups of columns that are optional, that might only show up in certain situations, into separate tables and link them - again, just to keep the base tables lean'n'mean
Also: avoid trying to "arm-wrestle" your ORM into doing bulk operations - that's just not their strong point.
And: keep an eye on performance, and try to pick an ORM that allows you to change certain operations into e.g. stored procedures - Entity Framework 4 allows this. So if the deletes are killing you - maybe you just write a Delete stored proc for that table and handle that operation differently.
The question here covers your options fairly well. Basically you're limited to hand-crafting the HQL/SQL. It's something you want to do if you run into scalability problems, but if you do in my experience it can have a very large positive impact. In particular, it saves a lot of disk and network IO, so your scalability can take a big jump. Not something to do right away though: analyse then optimise.
Are there other ways to limit returned columns other than the ones I listed above?
NHibernate lets you add projections to your queries so you wouldn't need to use views or procs just to limit your columns.
For me this has only been an issue if the table has LOTS of columns (more than 30), or if a column holds a lot of data, for example over 5000 characters in a field.
The approach I have used is to just map another object to the existing table but with only the fields I need. So for a search that populates a table with 100 rows I would have a
MyObjectLite, but when I click to view the Details of that Row I would call a GetById and return a MyObject that has all the columns.
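In SQL terms the two mappings boil down to something like this (names are hypothetical): a narrow query for the search grid, and the full row only for the single-record detail view.
-- Search/list query backing MyObjectLite - only the columns the grid shows:
SELECT Id, Name, Status
FROM dbo.Widget
WHERE Name LIKE @search + '%';

-- GetById query backing the full MyObject - all columns, but one row:
SELECT *
FROM dbo.Widget
WHERE Id = @id;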
Another approach is to use custom SQL or stored procs, but I think you should only go down this path if you REALLY need the performance gain and have users complaining. So unless there is a performance problem, do not waste your time trying to fix a problem that does not exist.
You can limit the number of returned columns by using a Projection, Transformers.AliasToBean, and a DTO. Here is how it looks in the Criteria API:
.SetProjection(Projections.ProjectionList()
.Add(Projections.Property("Id"), "Id")
.Add(Projections.Property("PackageName"), "Caption"))
.SetResultTransformer(Transformers.AliasToBean(typeof(PackageNameDTO)));
In LLBLGen Pro, you can return Typed Lists which not only allow you to define which fields are returned but also allow you to join data so you can pull a custom list of fields from multiple tables.
Overall, I agree that for most situations, this is premature optimization.
One of the big advantages of using LLBLGen and other ORMs as well (I just feel confident speaking about LLBLGen because I have used it since its inception) is that the performance of the data access has been optimized by folks who understand the issues better than your average bear.
Whenever they figure out a way to further speed up their code, you get those changes "for free" just by re-generating your data layer or by installing a new dll.
Unless you consider yourself an expert at writing data access code, ORMs probably improve most developers' efficacy and accuracy.

Is a view faster than a simple query?

Is a
select * from myView
faster than the query itself to create the view (in order to have the same resultSet):
select * from ([query to create same resultSet as myView])
?
It's not totally clear to me if the view uses some sort of caching making it faster compared to a simple query.
Yes, views can have a clustered index assigned and, when they do, they'll store materialized results that can speed up the queries that use them.
Microsoft's own documentation makes it very clear that Views can improve performance.
First, most views that people create are simple views and do not use this feature, and are therefore no different to querying the base tables directly. Simple views are expanded in place and so do not directly contribute to performance improvements - that much is true. However, indexed views can dramatically improve performance.
Let me go directly to the documentation:
After a unique clustered index is created on the view, the view's result set is materialized immediately and persisted in physical storage in the database, saving the overhead of performing this costly operation at execution time.
Second, these indexed views can work even when they are not directly referenced by another query as the optimizer will use them in place of a table reference when appropriate.
Again, the documentation:
The indexed view can be used in a query execution in two ways. The query can reference the indexed view directly, or, more importantly, the query optimizer can select the view if it determines that the view can be substituted for some or all of the query in the lowest-cost query plan. In the second case, the indexed view is used instead of the underlying tables and their ordinary indexes. The view does not need to be referenced in the query for the query optimizer to use it during query execution. This allows existing applications to benefit from the newly created indexed views without changing those applications.
This documentation, as well as charts demonstrating performance improvements, can be found here.
Update 2: the answer has been criticized on the basis that it is the "index" that provides the performance advantage, not the "View." However, this is easily refuted.
Let us say that we are a software company in a small country; I'll use Lithuania as an example. We sell software worldwide and keep our records in a SQL Server database. We're very successful and so, in a few years, we have 1,000,000+ records. However, we often need to report sales for tax purposes and we find that we've only sold 100 copies of our software in our home country. By creating an indexed view of just the Lithuanian records, we get to keep the records we need in an indexed cache as described in the MS documentation. When we run our reports for Lithuanian sales in 2008, our query will search through an index with a depth of just 7 (Log2(100) with some unused leaves). If we were to do the same without the VIEW and just relying on an index into the table, we'd have to traverse an index tree with a search depth of 21!
Clearly, the View itself would provide us with a performance advantage (3x) over the simple use of the index alone. I've tried to use a real-world example but you'll note that a simple list of Lithuanian sales would give us an even greater advantage.
Note that I'm just using a straight b-tree for my example. While I'm fairly certain that SQL Server uses some variant of a b-tree, I don't know the details. Nonetheless, the point holds.
Update 3: The question has come up about whether an Indexed View just uses an index placed on the underlying table. That is, to paraphrase: "an indexed view is just the equivalent of a standard index and it offers nothing new or unique to a view." If this were true, of course, then the above analysis would be incorrect! Let me provide a quote from the Microsoft documentation that demonstrates why I think this criticism is not valid or true:
Using indexes to improve query performance is not a new concept; however, indexed views provide additional performance benefits that cannot be achieved using standard indexes.
Together with the above quote regarding the persistence of data in physical storage and other information in the documentation about how indices are created on Views, I think it is safe to say that an Indexed View is not just a cached SQL Select that happens to use an index defined on the main table. Thus, I continue to stand by this answer.
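To make the Lithuanian-sales example concrete, here is a minimal sketch of how such an indexed view might be declared (table and column names are hypothetical; the restrictions on indexed views are listed further down the page):
-- Schema binding is required before a view can be indexed:
CREATE VIEW dbo.LithuanianSales
WITH SCHEMABINDING
AS
SELECT SaleId, SaleDate, Amount
FROM dbo.Sales
WHERE CountryCode = 'LT';
GO
-- The unique clustered index materializes and persists the view's rows:
CREATE UNIQUE CLUSTERED INDEX IX_LithuanianSales
ON dbo.LithuanianSales (SaleId);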
Generally speaking, no. Views are primarily used for convenience and security, and won't (by themselves) produce any speed benefit.
That said, SQL Server 2000 and above do have a feature called Indexed Views that can greatly improve performance, with a few caveats:
Not every view can be made into an indexed view; they have to follow a specific set of guidelines, which (among other restrictions) means you can't include common query elements like COUNT, MIN, MAX, or TOP.
Indexed views use physical space in the database, just like indexes on a table.
This article describes additional benefits and limitations of indexed views:
You Can…
The view definition can reference one or more tables in the same database.
Once the unique clustered index is created, additional nonclustered indexes can be created against the view.
You can update the data in the underlying tables – including inserts, updates, deletes, and even truncates.
You Can’t…
The view definition can’t reference other views, or tables in other databases.
It can’t contain COUNT, MIN, MAX, TOP, outer joins, or a few other keywords or elements.
You can’t modify the underlying tables and columns. The view is created with the WITH SCHEMABINDING option.
You can’t always predict what the query optimizer will do. If you’re using Enterprise Edition, it will automatically consider the unique clustered index as an option for a query – but if it finds a “better” index, that will be used. You could force the optimizer to use the index through the WITH NOEXPAND hint – but be cautious when using any hint.
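As a small illustration of that NOEXPAND hint (reusing the hypothetical indexed view sketched earlier):
-- Tells the optimizer to read the indexed view itself instead of expanding it:
SELECT SaleId, Amount
FROM dbo.LithuanianSales WITH (NOEXPAND)
WHERE Amount > 100;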
EDIT: I was wrong, and you should see Mark's answer above.
I cannot speak from experience with SQL Server, but for most databases the answer would be no. The only potential benefit that you get, performance wise, from using a view is that it could potentially create some access paths based on the query. But the main reason to use a view is to simplify a query or to standardize a way of accessing some data in a table. Generally speaking, you won't get a performance benefit. I may be wrong, though.
I would come up with a moderately more complicated example and time it yourself to see.
In SQL Server at least, Query plans are stored in the plan cache for both views and ordinary SQL queries, based on query/view parameters. For both, they are dropped from the cache when they have been unused for a long enough period and the space is needed for some other newly submitted query. After which, if the same query is issued, it is recompiled and the plan is put back into the cache. So no, there is no difference, given that you are reusing the same SQL query and the same view with the same frequency.
Obviously, in general, a view, by its very nature (someone thought it would be used often enough to make it into a view), is generally more likely to be "reused" than any arbitrary SQL statement.
Definitely a view is better than a nested query for SQL Server. Without knowing exactly why it is better (until I read Mark Brittingham's post), I had run some tests and experienced almost shocking performance improvements when using a view versus a nested query. After running each version of the query several hundred times in a row, the view version of the query completed in half the time. I'd say that's proof enough for me.
It may be faster if you create a materialized view (with schema binding). Non-materialized views execute just like the regular query.
My understanding is that a while back, a view would be faster because SQL Server could store an execution plan and then just use it instead of trying to figure one out on the fly. I think the performance gain nowadays is probably not as great as it once was, but I would guess there is some marginal improvement from using a view.
I would expect the two queries to perform identically. A view is nothing more than a stored query definition, there is no caching or storing of data for a view. The optimiser will effectively turn your first query into your second query when you run it.
It all depends on the situation. MS SQL indexed views are faster than a normal view or query, but indexed views can not be used in a mirrored database environment (MS SQL).
A view in any kind of a loop will cause serious slowdown because the view is repopulated each time it is called in the loop - same as a query. In this situation a temporary table using # or ## to hold your data to loop through is faster than a view or a query.
So it all depends on the situation.
There should be some trivial gain in having the execution plan stored, but it will be negligible.
In my findings, using the view is a little bit faster than a normal query. My stored procedure was taking around 25 minutes (working with different, larger record sets and multiple joins), and after using the view (non-clustered) the performance was just a little bit faster, but not significant at all. I had to use some other query optimization techniques/methods to make a dramatic change.
Selecting from a view or from a table will not make much of a difference, as long as the view does not have unnecessary joins, fields, etc. You can check the execution plan of your queries, and the joins and indexes used, to improve the view's performance.
You can even create indexes on views for faster search requirements. http://technet.microsoft.com/en-us/library/cc917715.aspx
But if you are searching with LIKE '%...%' then the SQL engine will not benefit from an index on the text column. If you can force your users to search with LIKE '...%' then that will be fast.
Referred to this answer on the ASP.NET forums:
https://forums.asp.net/t/1697933.aspx?Which+is+faster+when+using+SELECT+query+VIEW+or+Table+
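To illustrate the LIKE point above (table, column, and index are hypothetical, with an index assumed on Title):
-- Leading wildcard: the index on Title cannot be seeked, so the engine scans:
SELECT Id FROM dbo.Articles WHERE Title LIKE '%invoice%';

-- Trailing wildcard only: an index seek on Title is possible, which is fast:
SELECT Id FROM dbo.Articles WHERE Title LIKE 'invoice%';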
Against all expectation, views are way slower in some circumstances.
I discovered this recently when I had problems with data pulled from Oracle that needed to be massaged into another format. Maybe 20k source rows - a small table. To do this we imported the Oracle data, as unchanged as I could, into a table and then used views to extract the data.
We had secondary views based on those views. Maybe 3-4 levels of views.
One of the final queries, which extracted maybe 200 rows, would take upwards of 45 minutes! That query was based on a cascade of views, maybe 3-4 levels deep.
I could take each of the views in question, insert its SQL into one nested query, and execute it in a couple of seconds.
We even found that we could write each view's output into a temp table and query that in place of the view, and it was still way faster than simply using nested views.
What was even odder was that performance was fine until we hit some limit on the number of source rows being pulled into the database; performance just dropped off a cliff over the space of a couple of days - a few more source rows was all it took.
So, using queries which pull from views which pull from views is much slower than a nested query - which makes no sense to me.
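The workaround looked roughly like this (view names are hypothetical): materialize each level of the cascade into a temp table once, and let the final query read those instead of the nested views.
-- Materialize the output of each problem view once:
SELECT * INTO #Stage1 FROM dbo.FirstLevelView;
SELECT * INTO #Stage2 FROM dbo.SecondLevelView;

-- The final extraction reads the temp tables instead of the view stack:
SELECT s2.SomeKey, s2.SomeValue
FROM #Stage2 AS s2
JOIN #Stage1 AS s1 ON s1.SomeKey = s2.SomeKey;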
There is no practical difference, and if you read BOL you will find that even your plain old SELECT * FROM X takes advantage of plan caching, etc.
The purpose of a view is to use the query over and over again. To that end, SQL Server, Oracle, etc. will typically provide a "cached" or "compiled" version of your view, thus improving its performance. In general, this should perform better than a "simple" query, though if the query is truly very simple, the benefits may be negligible.
Now, if you're doing a complex query, create the view.
No. A view is just a short form of your actual long SQL query. But yes, you could say the actual query is faster than the view query.
The view query will first be translated into the underlying query and then executed, so a view query may take slightly more time to execute than the plain query.
You can use SQL views when you are joining multiple tables, to reuse a complicated query again and again in a simple manner.
I ran across this thread and just wanted to share this post from Brent Ozar as something to consider when using availability groups.
Brent Ozar bug report