I have an MxN table of input fields whose last column contains a reactive calculation based on each row's data. Rows are independent, i.e. data in one row does not affect the calculation in another.
The reactive calculation results for all rows are stored in a single computed property which is looped over when constructing the table.
This way any change to any input field in the table triggers recomputation of the whole property. At some point this approach is bound to create performance issues.
I am wondering whether there is a better general strategy that limits the reactivity to only the relevant fields in such cases.
"At some point this approach is bound to create performance issues."
Well, maybe. Have you actually hit that point? If not, it is premature optimization.
If yes, this can be solved by creating an array of computed properties representing the column (each value in the array is computed from a specific row).
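For illustration, a minimal sketch assuming Vue 3's Composition API (the question doesn't name a framework) and a simple two-column row shape:

import { reactive, computed, type ComputedRef } from 'vue'

interface Row { a: number; b: number }

// One reactive object per row; editing a field only touches that row.
const rows = reactive<Row[]>([
  { a: 1, b: 2 },
  { a: 3, b: 4 },
])

// One computed per row instead of one computed for the whole column:
// changing rows[i] only invalidates and recomputes totals[i].
// (If rows are added or removed, this array needs to be rebuilt.)
const totals: ComputedRef<number>[] = rows.map((row) =>
  computed(() => row.a + row.b)
)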
I have a high-performance DB (on SQL Server 2012). One of my views had a virtual column represented by an inline scalar-valued UDF computing a custom hash from the previous column's value. To improve performance, I moved this calculation from the view into one of the underlying tables: I added a new computed column to the table and persisted the data, then created an index on this column and referenced it back in the corresponding view (basically migrating the logic from the view into the table).
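A rough sketch of that kind of migration (dbo.CustomHash and the object names are stand-ins for the real ones):

-- Persisting a computed column over a UDF requires the function to be
-- deterministic, which in turn requires it to be created WITH SCHEMABINDING.
ALTER TABLE dbo.SourceTable
    ADD ValueHash AS dbo.CustomHash(SourceValue) PERSISTED;

CREATE INDEX IX_SourceTable_ValueHash ON dbo.SourceTable (ValueHash);
GO

-- The view now exposes the stored column instead of calling the UDF per row.
ALTER VIEW dbo.SourceView
AS
SELECT Id, SourceValue, ValueHash
FROM dbo.SourceTable;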
Now I am wondering: why wouldn't I just add a regular VARCHAR(32) column to the table instead of the computed PERSISTED column? I could create a DEFAULT on this column with the above-mentioned UDF for all new inserts and recalculate the historical records.
Is there any advantage of the indexed computed column with PERSISTED data vs. regular NC indexed column?
Thx.
The computed column will keep your field up to date if the field data it is based on is changed. Adding just a default will update the field on insert if no value is provided for the field.
If you know that your data is not going to change (which I think you are implying but did not specify in your question), then they are functionally the same for you. The computed column would probably be preferred, though, to prevent accidental update of the field with an incorrect value (bypassing the default). Also, it is clear to any other developer what the field is to be used for.
You could switch to a "normal" column with a default value or insert trigger. The one potential issue is that unlike a computed column anyone with insert/update access could (potentially accidentally) change the value of the column.
Performance is going to be the same either way; in essence, that is what the DB is doing behind the scenes with a persisted computed column. As a developer, a persisted computed column is clearer in its intent than a default value: a default implies the value is one of many possible values, not the only possible value.
Be sure to declare the UDF WITH SCHEMABINDING. This allows SQL Server to determine that the function is deterministic and flag it as such, which can improve query plan optimization in some cases.
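For reference, a hedged sketch of such a declaration; the HASHBYTES-based MD5 is only a stand-in for the question's custom hash:

CREATE FUNCTION dbo.CustomHash (@value NVARCHAR(4000))
RETURNS VARCHAR(32)
WITH SCHEMABINDING   -- lets SQL Server verify and flag the function as deterministic
AS
BEGIN
    -- HASHBYTES and CONVERT (style 2 = hex without 0x) are both deterministic,
    -- so the function as a whole is deterministic and a computed column
    -- built on it can be persisted and indexed.
    RETURN CONVERT(VARCHAR(32), HASHBYTES('MD5', @value), 2);
END;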
There is no performance difference. However, in terms of database design it is more elegant when you have your pre-calculated column in the persisted view.
I have some order related tables. Order, OrderLines, OrderWarehouse etc.
The Orders table contains some computed columns (TotalNetPrice, TotalVATPrice etc).
I need them to be persisted columns as I need to include them in some indexes.
My question:
The columns themselves call functions that take the Order table's Id field and return the calculated value.
Is there a way of forcing the values to be recalculated when ANY of the order-related tables change (as they all affect the calculations)?
Edit: (...without writing triggers)
I am not sure about persisting the computed column while still having it recalculate whenever one of the variables changes; however, I would approach this one using a view. When you select from the view, it acts almost like a table and can have your computed columns updated on the fly. When you need to save new data, you save it to the originating columns rather than the view.
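A rough sketch of that view approach, using the question's table names (the OrderLines column names are guesses):

-- The totals are recomputed on every SELECT, so they always reflect the
-- current rows in OrderLines; inserts and updates still go to the base tables.
CREATE VIEW dbo.OrderTotals
AS
SELECT  o.Id,
        SUM(ol.NetPrice) AS TotalNetPrice,
        SUM(ol.VATPrice) AS TotalVATPrice
FROM    dbo.[Order] AS o
JOIN    dbo.OrderLines AS ol ON ol.OrderId = o.Id
GROUP BY o.Id;

If the totals also have to be indexable, an indexed view (WITH SCHEMABINDING plus a unique clustered index) is the usual trigger-free route in SQL Server, although it comes with a number of restrictions.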
I'm considering designing a table with a computed column in Microsoft SQL Server 2008. It would be a simple calculation like (ISNULL(colA,(0)) + ISNULL(colB,(0))) - like a total. Our application uses Entity Framework 4.
I'm not completely familiar with computed columns, so I'm curious what others have to say about when it is appropriate to use them as opposed to other mechanisms that achieve the same result, such as views or a computed Entity column.
Are there any reasons why I wouldn't want to use a computed column in a table?
If I do use a computed column, should it be persisted or not? I've read about the different performance results of persisted vs. non-persisted and indexed vs. non-indexed computed columns here. Given that my computation seems simple, I'm inclined to say it shouldn't be persisted.
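In other words, something along these lines (the table name is a placeholder):

-- Append PERSISTED to compute once on write and store the value physically.
ALTER TABLE dbo.MyTable
    ADD Total AS (ISNULL(colA, 0) + ISNULL(colB, 0));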
In my experience, they're most useful/appropriate when they can be used in other places like an index or a check constraint, which sometimes requires that the column be persisted (physically stored in the table). For further details, see Computed Columns and Creating Indexes on Computed Columns.
If your computed column is not persisted, it will be calculated every time you access it in e.g. a SELECT. If the data it's based on changes frequently, that might be okay.
If the data doesn't change frequently, e.g. if you have a computed column to turn your numeric OrderID INT into a human-readable ORD-0001234 or something like that, then definitely make your computed column persisted - in that case, the value will be computed and physically stored on disk, and any subsequent access to it is like reading any other column on your table - no re-computation over and over again.
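For that second case, a minimal sketch (the table and the ORD- format are invented for illustration):

-- Computed once when the row is written, stored on disk, and read back
-- like any other column afterwards.
ALTER TABLE dbo.Orders
    ADD OrderNumber AS ('ORD-' + RIGHT('0000000' + CAST(OrderID AS VARCHAR(7)), 7)) PERSISTED;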
We've also come to use (and highly appreciate!) computed columns to extract certain pieces of information from XML columns and surface them on the table as separate (persisted) columns. That makes querying against those items much more efficient than constantly having to poke into the XML with XQuery to retrieve the information. For this use case, I think persisted computed columns are a great way to speed up your queries!
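A hedged sketch of that XML-promotion pattern (element and object names invented): xml data type methods can't be used directly in a computed column definition, so the extraction is wrapped in a schema-bound UDF.

CREATE FUNCTION dbo.ExtractCustomerName (@doc XML)
RETURNS NVARCHAR(100)
WITH SCHEMABINDING
AS
BEGIN
    RETURN @doc.value('(/Order/Customer/Name)[1]', 'NVARCHAR(100)');
END
GO

-- Surface the value as a column; if SQL Server can verify the function as
-- deterministic and precise, the column can additionally be persisted and indexed.
ALTER TABLE dbo.Orders
    ADD CustomerName AS dbo.ExtractCustomerName(OrderXml);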
Let's say you have a computed column called ProspectRanking that is the result of the evaluation of the values in several columns: ReadingLevel, AnnualIncome, Gender, OwnsBoat, HasPurchasedPremiumGasolineRecently.
Let's also say that many decentralized departments in your large mega-corporation use this data, and they all have their own programmers on staff, but you want the ProspectRanking algorithms to be managed centrally by IT at corporate headquarters, who maintain close communication with the VP of Marketing. Let's also say that the algorithm is frequently tweaked to reflect some changing conditions, like the interest rate or the rate of inflation.
You'd want the computation to be part of the back-end database engine and not in the client consumers of the data, if managing the front-end clients would be like herding cats.
If you can avoid herding cats, do so.
Make Sure You Are Querying Only Columns You Need
I have found computed columns to be very useful, even if not persisted, especially in an MVVM model where you are only getting the columns you need for that specific view. As long as you are not putting poorly performing logic in the computed-column code, you should be fine. The bottom line is that those computed (non-persisted) columns would have to be calculated anyway if you are using that data.
When it Comes to Performance
For performance, narrow your query to just the rows and computed columns you need. If you were putting an index on the computed column (I checked, and it is not allowed), I would be cautious anyway, because the execution engine might decide to use that index and hurt performance by computing those columns. Most of the time you are just getting a name or description from a join table, so I think this is fine.
Don't Brute Force It
The only time it wouldn't make sense to use a lot of computed columns is if you are using a single view-model class that captures all the data in all columns including those computed. In this case, your performance is going to degrade based on the number of computed columns and number of rows in your database that you are selecting from.
Computed Columns for ORMs Work Great
An object-relational mapper such as EntityFramework allows you to query a subset of the columns in your query. This works especially well using LINQ to EntityFramework. By using computed columns you don't have to clutter your ORM class with mapped views for each of the model types.
var data = from e in db.Employees
           select new NarrowEmployeeView { Id = e.Id, Name = e.Name };
Only the Id and Name are queried.
var data = from e in db.Employees
           select new WiderEmployeeView { Id = e.Id, Name = e.Name, DepartmentName = e.DepartmentName };
Assuming DepartmentName is a computed column, the computation is executed only for the latter query.
Performance Profiler
If you use a performance profiler and filter on SQL queries, you can see that the computed columns are in fact ignored when they are not in the SELECT statement.
Computed columns can be appropriate if you plan to query by that information.
For instance, suppose you have a dataset that you are going to present in the UI. Having a computed column will allow you to page the view while still allowing sorting and filtering on the computed column. If that computed column exists in code only, it will be much more difficult to reasonably sort or filter the dataset for display based on that value.
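As a hedged illustration (table and column names are invented), a computed display column can be sorted, filtered and paged entirely in the database:

-- Assumes NOT NULL name columns.
ALTER TABLE dbo.Customers
    ADD FullName AS (LastName + ', ' + FirstName);
GO

-- SQL Server 2008-compatible paging on the computed value.
SELECT CustomerId, FullName
FROM (
    SELECT CustomerId, FullName,
           ROW_NUMBER() OVER (ORDER BY FullName) AS rn
    FROM dbo.Customers
    WHERE FullName LIKE 'Sm%'
) AS pg
WHERE pg.rn BETWEEN 21 AND 30;   -- third page of 10 rows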
A computed column is a business rule, and it is more appropriate to implement it on the client rather than in the storage layer. The database is for storing and retrieving data, not for business-rule processing. The fact that it can do something doesn't mean you should do it that way. You too are free to jump from the Eiffel Tower, but it would be a bad decision :)
Sorry for the long question title.
I guess I'm on to a loser on this one but on the off chance.
Is it possible to make the calculation of a calculated field in a table the result of an aggregate function applied to a field in another table?
i.e.
You have a table called 'mug'; this has a child called 'color' (which makes my UK head hurt, but the vendor is from the US, so what are you going to do?) and this, in turn, has a child called 'size'. Each table has a field called sold.
The size.sold increments by 1 for every mug of a particular colour and size sold.
You want color.sold to be an aggregate of SUM size.sold WHERE size.colorid = color.colorid
You want mug.sold to be an aggregate of SUM color.sold WHERE color.mugid = mug.mugid
Is there any way to make mug.sold and color.sold just work themselves out, or am I going to have to go mucking about with triggers?
You can't have a computed column directly reference a different table, but you can have it reference a user-defined function. Here's a link to an example of implementing a solution like this:
http://www.sqlservercentral.com/articles/User-Defined+functions/complexcomputedcolumns/2397/
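To make that concrete, a hedged sketch of the pattern using the question's mug/color/size tables (the exact column names are assumptions):

-- The scalar UDF does the cross-table aggregate...
CREATE FUNCTION dbo.ColorSoldTotal (@colorid INT)
RETURNS INT
AS
BEGIN
    RETURN (SELECT SUM(s.sold) FROM dbo.[size] AS s WHERE s.colorid = @colorid);
END
GO

-- ...and a computed column on the parent simply calls it. Because the function
-- reads another table, the column cannot be persisted or indexed; it is
-- recalculated every time it is read.
ALTER TABLE dbo.[color] ADD sold_total AS dbo.ColorSoldTotal(colorid);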
No, it is not possible to do this. A computed column can only be derived from the values of other fields on the same row. To calculate an aggregate off another table you need to create a view.
If your application needs to show the statistics ask the following questions:
Is it really necessary to show this in real time? If so, why? If it really is necessary, then you would have to use triggers to keep a denormalised table up to date (see the short Wikipedia article on denormalisation; a rough trigger sketch follows this list). Triggers will affect write performance on table updates and rely on the triggers being active.
If it is only necessary for reporting purposes, you could do the calculation in a view or a report.
If it is necessary to support frequent ad-hoc reports you may be into the realms of a data mart and overnight ETL process.
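As a hedged illustration of the trigger route mentioned in the first point, using the question's table names (the exact columns are assumptions):

CREATE TRIGGER trg_size_sold ON dbo.[size]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Recalculate color.sold for every colour touched by this statement.
    UPDATE c
    SET    sold = (SELECT ISNULL(SUM(s.sold), 0)
                   FROM dbo.[size] AS s
                   WHERE s.colorid = c.colorid)
    FROM   dbo.[color] AS c
    WHERE  c.colorid IN (SELECT colorid FROM inserted
                         UNION
                         SELECT colorid FROM deleted);
END

A similar trigger on color would roll the figures up into mug.sold.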