PERSISTED Computed Column vs. Regular Column

I have a high-performance DB (on SQL Server 2012). One of my views had a virtual column computed by an inline scalar-valued UDF, which derives a custom hash from another column's value. I moved this calculation from the view into one of the underlying tables to improve performance: I added a new computed column to the table and persisted the data, then created an index on this column and referenced it back in the corresponding view (basically migrating the logic from the view into the table).
Now I am wondering: why wouldn't I just add a regular VARCHAR(32) column to the table instead of the computed PERSISTED column? I could create a DEFAULT on this column using the above-mentioned UDF for all new inserts, and recalculate the historical records.
Is there any advantage of the indexed computed column with PERSISTED data vs. regular NC indexed column?
Thx.

The computed column will keep your field up to date if the field data it is based on is changed. Adding just a default will update the field on insert if no value is provided for the field.
If you know that your data is not going to change (which I think you are implying, but did not specify in your question), then they are functionally the same for you. The computed column would probably still be preferred, though, to prevent accidental updates of the field with an incorrect value (bypassing the default). It also makes clear to any other developers what the field is to be used for.

You could switch to a "normal" column with a default value or an insert trigger. The one potential issue is that, unlike with a computed column, anyone with insert/update access could (potentially accidentally) change the value of the column.
Performance is going to be the same either way; in essence, that is what the database does behind the scenes with a persisted computed column. To a developer, though, a persisted computed column states its intent more clearly than a default value: a default implies the value is one of many possible values, not the only possible value.
Be sure to declare the UDF WITH SCHEMABINDING. This allows SQL Server to determine that the function is deterministic and flag it as such, which can improve query plan optimization in some cases.
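For illustration, a minimal sketch of the whole setup, with hypothetical table, column, and function names (the MD5 hash is just a stand-in for your custom hash logic):

CREATE FUNCTION dbo.fn_CustomHash (@Value VARCHAR(256))
RETURNS VARCHAR(32)
WITH SCHEMABINDING  -- lets SQL Server verify the function is deterministic
AS
BEGIN
    -- HASHBYTES is deterministic; CONVERT style 2 renders the binary hash as hex
    RETURN CONVERT(VARCHAR(32), HASHBYTES('MD5', @Value), 2);
END;
GO

ALTER TABLE dbo.MyTable
    ADD HashValue AS dbo.fn_CustomHash(SourceColumn) PERSISTED;

CREATE NONCLUSTERED INDEX IX_MyTable_HashValue
    ON dbo.MyTable (HashValue);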

There is no performance difference. However, in terms of database design it is more elegant to keep your pre-calculated column persisted in the table.

Related

How do you change a scalar function used by a computed column

I have a persisted computed column that uses a scalar function to assign the data to it. I noticed that I can't change the function once it's used by the column. In my test environment, I deleted and re-added the column. Once in production, that won't be an option. The table has 24 million rows now and no one will want to drop and re-add the column.
Is there some way to change the function without dropping the column? If the function calls another function, are all functions in the chain not allowed to be changed?
I realize doing this can create inconsistencies in the existing data versus the data being calculated going forward. I don't care about that.
Thanks,
Kevin
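For what it's worth, a minimal sketch (hypothetical names) of the dependency being described; once the computed column exists, altering the function fails with an error along the lines of "Cannot ALTER 'dbo.fn_Calc' because it is being referenced by object 'BigTable'":

CREATE FUNCTION dbo.fn_Calc (@Id INT)
RETURNS INT
WITH SCHEMABINDING
AS BEGIN RETURN @Id * 10 END;
GO

ALTER TABLE dbo.BigTable ADD CalcValue AS dbo.fn_Calc(Id) PERSISTED;
GO

-- This now fails because dbo.BigTable references the function:
ALTER FUNCTION dbo.fn_Calc (@Id INT)
RETURNS INT
WITH SCHEMABINDING
AS BEGIN RETURN @Id * 20 END;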

SQL Server: Persisting computed column

I have some order related tables. Order, OrderLines, OrderWarehouse etc.
The Orders table contains some computed columns (TotalNetPrice, TotalVATPrice etc).
I need them to be persisted columns as I need to include them in some indexes.
My question:
The columns themselves call functions which take the Orders table's Id field and return the calculated value.
Is there a way of forcing the values to be recalculated when ANY of the order-related tables change (as they all affect the calculations)?
Edit: (...without writing triggers)
I am not sure about persisting the computed column while still having it recalculate whenever one of the underlying values changes. However, I would approach this one using a view: when you select from the view, it acts much like a table and can produce your computed columns on the fly; when you need to save new data, you save it to the originating tables rather than the view.
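Building on that idea, if the totals must also be indexable, an indexed view gets you both automatic recalculation and an index, without triggers. A hedged sketch against a hypothetical OrderLines schema:

CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING  -- required for indexed views
AS
SELECT  ol.OrderId,
        -- ISNULL keeps the aggregate non-nullable, which indexed views require for SUM
        SUM(ISNULL(ol.Qty * ol.UnitNetPrice, 0)) AS TotalNetPrice,
        COUNT_BIG(*) AS LineCount  -- mandatory in an indexed view with GROUP BY
FROM    dbo.OrderLines AS ol
GROUP BY ol.OrderId;
GO

-- This index materializes the view; the engine keeps it up to date
-- whenever OrderLines changes.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
    ON dbo.vOrderTotals (OrderId);

On non-Enterprise editions, queries need the WITH (NOEXPAND) hint to use the view's index directly.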

When are computed columns appropriate?

I'm considering designing a table with a computed column in Microsoft SQL Server 2008. It would be a simple calculation like (ISNULL(colA,(0)) + ISNULL(colB,(0))) - like a total. Our application uses Entity Framework 4.
I'm not completely familiar with computed columns so I'm curious what others have to say about when they are appropriate to be used as opposed to other mechanisms which achieve the same result, such as views, or a computed Entity column.
Are there any reasons why I wouldn't want to use a computed column in a table?
If I do use a computed column, should it be persisted or not? I've read about different performance results using persisted, not persisted, with indexed and non-indexed computed columns here. Given that my computation seems simple, I'm inclined to say that it shouldn't be persisted.
In my experience, they're most useful/appropriate when they can be used in other places like an index or a check constraint, which sometimes requires that the column be persisted (physically stored in the table). For further details, see Computed Columns and Creating Indexes on Computed Columns.
If your computed column is not persisted, it will be calculated every time you access it in e.g. a SELECT. If the data it's based on changes frequently, that might be okay.
If the data doesn't change frequently, e.g. if you have a computed column to turn your numeric OrderID INT into a human-readable ORD-0001234 or something like that, then definitely make your computed column persisted - in that case, the value will be computed and physically stored on disk, and any subsequent access to it is like reading any other column on your table - no re-computation over and over again.
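As a sketch of that example (hypothetical names): the expression is deterministic and precise, so it can be persisted and computed only once per write:

ALTER TABLE dbo.Orders
    ADD OrderNumber AS ('ORD-' + RIGHT('0000000' + CAST(OrderID AS VARCHAR(7)), 7)) PERSISTED;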
We've also come to use (and highly appreciate!) computed columns to extract certain pieces of information from XML columns and surface them on the table as separate (persisted) columns. That makes querying against those items much more efficient than constantly having to poke into the XML with XQuery to retrieve the information. For this use case, I think persisted computed columns are a great way to speed up your queries!
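A minimal sketch of that pattern (hypothetical names); note that XML methods can't be called directly in a computed-column definition, so the usual trick is to wrap the XQuery in a schema-bound scalar UDF:

CREATE FUNCTION dbo.fn_GetCustomerId (@Data XML)
RETURNS INT
WITH SCHEMABINDING
AS
BEGIN
    -- extract a single value from the XML once, at write time
    RETURN @Data.value('(/Order/CustomerId)[1]', 'INT');
END;
GO

ALTER TABLE dbo.Orders
    ADD CustomerId AS dbo.fn_GetCustomerId(OrderXml) PERSISTED;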
Let's say you have a computed column called ProspectRanking that is the result of the evaluation of the values in several columns: ReadingLevel, AnnualIncome, Gender, OwnsBoat, HasPurchasedPremiumGasolineRecently.
Let's also say that many decentralized departments in your large mega-corporation use this data, and they all have their own programmers on staff, but you want the ProspectRanking algorithms to be managed centrally by IT at corporate headquarters, who maintain close communication with the VP of Marketing. Let's also say that the algorithm is frequently tweaked to reflect some changing conditions, like the interest rate or the rate of inflation.
You'd want the computation to be part of the back-end database engine and not in the client consumers of the data, if managing the front-end clients would be like herding cats.
If you can avoid herding cats, do so.
Make Sure You Are Querying Only Columns You Need
I have found using computed columns to be very useful, even when not persisted, especially in an MVVM model where you are only fetching the columns you need for that specific view. As long as you are not putting poorly performing logic in the computed-column expression, you should be fine. The bottom line is that the values of those non-persisted computed columns would have to be computed somewhere anyway if you are using that data.
When it Comes to Performance
For performance, narrow your query to just the rows and the computed columns you need. If you were to put an index on a computed column (which is allowed when the expression is deterministic and precise), I would still be cautious, because the execution engine might decide to use that index and the cost of computing those columns could hurt performance. Most of the time you are just getting a name or description from a join table, so I think this is fine.
Don't Brute Force It
The only time it wouldn't make sense to use a lot of computed columns is if you are using a single view-model class that captures all the data in all columns including those computed. In this case, your performance is going to degrade based on the number of computed columns and number of rows in your database that you are selecting from.
Computed Columns Work Great with an ORM.
An object-relational mapper such as Entity Framework allows you to query a subset of the columns in your query. This works especially well using LINQ to Entities. By using computed columns you don't have to clutter your ORM class with mapped views for each of the model types.
var data = from e in db.Employees
           select new NarrowEmployeeView { Id = e.Id, Name = e.Name };
Only the Id and Name are queried.
var data = from e in db.Employees
           select new WiderEmployeeView { Id = e.Id, Name = e.Name, DepartmentName = e.DepartmentName };
Assuming DepartmentName is a computed column, the computation is executed only for the latter query.
Performance Profiler
If you use a performance profiler and filter on the SQL queries, you can see that the computed columns are in fact ignored when they are not in the SELECT statement.
Computed columns can be appropriate if you plan to query by that information.
For instance, if you have a dataset that you are going to present in the UI, having a computed column lets you page the view while still allowing sorting and filtering on the computed column. If that computed value existed only in code, it would be much more difficult to reasonably sort or filter the dataset for display based on that value.
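For example (a hedged sketch with hypothetical names), a ROW_NUMBER-based page sorted by a computed Total column works exactly like sorting by any stored column:

SELECT Id, Name, Total
FROM (
    SELECT Id, Name, Total,
           ROW_NUMBER() OVER (ORDER BY Total DESC) AS rn
    FROM dbo.Orders
) AS p
WHERE p.rn BETWEEN 1 AND 25;  -- first page of 25 rows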
A computed column is a business rule, and it's arguably more appropriate to implement it in the client rather than in storage. The database is for storing and retrieving data, not for business-rule processing. The fact that it can do something doesn't mean you should do it that way. You, too, are free to jump off the Eiffel Tower, but it would be a bad decision :)

Quick way to reset all column values to a default

I'm converting data from one schema to another. Each table in the source schema has a 'status' column (default NULL). When a record has been converted, I update the status column to 1. Afterwards, I can report on the # of records that are (not) converted.
While the conversion routines are still under development, I'd like to be able to quickly reset all values for status to NULL again.
An UPDATE statement on the tables is too slow (there are too many records). Does anyone know a fast alternative way to accomplish this?
The fastest way to reset a column would be to SET UNUSED the column, then add a column with the same name and datatype.
This will be the fastest way since both operations will not touch the actual table (only dictionary update).
As in Nivas' answer, the actual ordering of the columns will be changed (the reset column becomes the last column). If your code relies on the ordering of the columns (it should not!), you can create a view that has the columns in the right order (rename the table, create a view with the same name as the old table, revoke grants from the base table, add grants to the view).
The SET UNUSED method will not reclaim the space used by the column (whereas dropping the column will free space in each block).
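A minimal sketch of the dictionary-only reset (hypothetical table name; both statements complete almost instantly regardless of row count):

ALTER TABLE source_table SET UNUSED (status);
ALTER TABLE source_table ADD (status NUMBER);  -- new column defaults to NULL

The space held by unused columns can be reclaimed later with ALTER TABLE source_table DROP UNUSED COLUMNS, which does rewrite the blocks.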
If the column is nullable (since default is NULL, I think this is the case), drop and add the column again?
While the conversion routines are still under development, I'd like to be able to quickly reset all values for status to NULL again.
If you are in development why do you need 70 million records? Why not develop against a subset of the data?
Have you tried using flashback table?
For example:
select current_scn from v$database;
-- 5607722

-- row movement must be enabled once, or FLASHBACK TABLE raises ORA-08189
alter table TABLE_NAME enable row movement;

-- do a bunch of work
flashback table TABLE_NAME to scn 5607722;
What this does is ensure that the table you are working on is IDENTICAL each time you run your tests. Of course, you need to ensure you have sufficient UNDO to hold your changes.
Hm, maybe add an index to the status column.
Or alternately, add a new table with only the primary key in it. Then insert into that table when a record is converted, and TRUNCATE that table to reset (see the sketch below)...
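A quick sketch of that approach (hypothetical names):

CREATE TABLE converted_ids (id NUMBER PRIMARY KEY);

-- as each record is converted, record its key
INSERT INTO converted_ids (id) VALUES (:converted_id);

-- resetting is then a near-instant DDL operation
TRUNCATE TABLE converted_ids;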
I like some of the other answers, but I just read in a tuning book that, for several reasons, it's often quicker to recreate the table than to do massive updates on it. In this case it seems ideal, since you would be writing a CREATE TABLE X AS SELECT with hopefully very few columns.
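A hedged sketch of the rebuild (hypothetical names; indexes, constraints, and grants would need to be recreated afterwards):

CREATE TABLE source_table_new AS
SELECT id, payload, CAST(NULL AS NUMBER) AS status
FROM   source_table;

DROP TABLE source_table;
ALTER TABLE source_table_new RENAME TO source_table;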

SQL Server: LOWER function on an indexed column

I've run into a big issue.
I added the LOWER function around an indexed column of one of my tables to fetch the data.
The table contains more than 100K records.
While fetching the records, CPU usage goes to 100%.
I can't understand how such a drastic change can happen just because of the LOWER() function.
Please help!
What you could do, if you really need this query a lot, is create a persisted computed column that uses the LOWER() function. Index that column and you should be fine again:
ALTER TABLE dbo.YourTableName
ADD LowerFieldName AS LOWER(YourFieldName) PERSISTED
CREATE NONCLUSTERED INDEX IX_YourTableName_LowerFieldName
ON dbo.YourTableName(LowerFieldName)
That would keep a lower-case representation of your field in your table, it's always up to date, and since it's persisted, it's part of your table and doesn't incur the penalty of the LOWER() function. Put an index on it and your search should be as fast as it was before.
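Searches would then compare against the persisted column, lower-casing only the parameter so the predicate stays index-friendly (a hedged sketch):

SELECT *
FROM   dbo.YourTableName
WHERE  LowerFieldName = LOWER(@SearchTerm);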
Links:
MSDN docs on SQL Server 2008 computed columns
Complex Computed Columns
Using Computed Columns in SQL Server with Persisted Values
Top 10 Hidden Gems in SQL Server 2005 - persisted computed columns are item #3
SQL Server Computed Columns
Working with computed columns
When you wrap LOWER() (or any function) around a column, an index on that column can no longer be used for a seek (the predicate is no longer SARGable).
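To illustrate (hypothetical names):

-- Not SARGable: the function hides the column from the index
SELECT * FROM dbo.YourTableName WHERE LOWER(YourFieldName) = 'abc';

-- SARGable: the bare column allows an index seek
SELECT * FROM dbo.YourTableName WHERE YourFieldName = 'abc';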
By default, SQL Server is not case sensitive, so you should be able to remove it.
I believe SQL Server is not case-sensitive by default, so you should be able to remove the LOWER function and it should behave normally.