New to DW concepts and SSAS. I'm reading a lot that normalized relational DBs are optimal for OLTP because the typical workload is many small, single-transaction operations, and that denormalization is generally better for DW/BI applications because reporting queries are more batch-oriented... there were other reasons that I don't recall right now.
It sounds like the advice is to create a denormalized model, populate it from the base relational model, and then build your cubes off the denormalized model. Assuming you're using the MOLAP storage type, your cube will store and incrementally update your data in a multidimensional model that it builds behind the scenes.
So now we have essentially the same data stored three times!
Am I reading that right? Why do we even need that intermediate denormalized model? It can't be to optimize report queries, because those are run against the multidimensional SSAS data store. Why not just build your cubes against a DSV whose definition is basically a view of the relational DB?
The multidimensional model needs the relational data to be available as star schemas (that is what you call the "denormalized model") for loading. In many cases there is also some processing involved: combining data from different sources, keeping the data for reporting longer than it is needed in the OLTP world, or keeping historical views (like old regional or department structures) available for analysis when they are no longer needed, and hence overwritten, in the OLTP world. So this intermediate step makes sense in many cases. You might also want clear cut-off times, i.e. always report data for complete days (or, in some cases, months) rather than having some of the data for the last day available and some not. That makes comparing numbers for a day easier than comparing, e.g., today's sales containing only the data up to 10 o'clock with the sales of the whole day yesterday.
In some simple cases, the intermediate relational data structure need not be available physically. A few days ago I prepared a prototype cube where the star schema was just a set of views on the source data. In this case, of course, the data was only physically available in the original source form and in the cube. The structure of the source data did not make the views that inefficient, and thus data loading to the cube was fast enough for the prototype.
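To make that concrete, here is a minimal sketch of such a view-based star schema; the source tables and columns (src.Product, src.OrderHeader and so on) are made up for illustration. The cube's data source view then points at these views instead of at physical dimension and fact tables:

    -- Dimension as a view: denormalizes the category name onto the product.
    CREATE VIEW dbo.DimProduct AS
    SELECT p.ProductID   AS ProductKey,
           p.ProductName,
           c.CategoryName                      -- pulled in from the Category table
    FROM   src.Product  p
    JOIN   src.Category c ON c.CategoryID = p.CategoryID;
    GO

    -- Fact as a view: one row per order line with the measures precomputed.
    CREATE VIEW dbo.FactSales AS
    SELECT CAST(o.OrderDate AS date)  AS OrderDate,
           ol.ProductID               AS ProductKey,
           ol.Quantity,
           ol.Quantity * ol.UnitPrice AS SalesAmount
    FROM   src.OrderHeader o
    JOIN   src.OrderLine   ol ON ol.OrderID = o.OrderID;
    GO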
When building a Data Warehouse I usually see two main approaches for the ETL process:
1. View - View of views - View of views of views - ...
Approach one lives entirely in the database and has the advantage that you don't hold much redundant data, but it can lead to performance issues.
2. Stage table (copy of data) - clear table (copy of data) - dwh table (copy of data) - ...
Approach two can be implemented with stored procedures and jobs, or with an ETL tool like SSIS.
The advantage here is that the process is easy to understand, since you can visualise it quite well. You usually also get very good overall ETL performance and many predefined tasks, etc.
A drawback is, for example, that changing the process is more complex, because persistent tables have to be changed as well.
In the real world you usually see a mix of both, especially when many people have worked on the process.
Of course it also depends on the situation (size of tables, how are similar processes designed in this company, how complex is the ETL-process, ...).
I personally prefer to copy tables, keep the ETL process simple, and if possible do everything in the ETL tool (usually SSIS in my case), which is designed for this purpose.
But what is best practice and why?
Views and views of views would not scale with the volume of data in a DWH. When it comes to a DWH, we are talking about huge volumes of data, and integrating data from multiple sources is a common use case. Stage -> transform -> fact/dim is one of the most common ways a DWH is built to store data. Yes, this changes somewhat when we talk about HDFS and other technologies, but views would not be able to give you the desired performance in a DWH.
I have seen many systems, and all of them have a multi-step ETL process where you first get data into the DWH from the sources and then clean/process/conform/transform this data via ETL (or other tools) into your dimensional or other model.
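As a rough sketch of that stage -> transform -> fact/dim flow, with made-up table names (in practice each step is usually an SSIS package or a stored procedure run on a schedule):

    -- 1. Stage: land a raw copy of the source data.
    TRUNCATE TABLE stg.Sales;
    INSERT INTO stg.Sales (OrderID, CustomerID, OrderDate, Amount)
    SELECT OrderID, CustomerID, OrderDate, Amount
    FROM   SourceDb.dbo.Sales;

    -- 2. Clean/conform: fix types, drop duplicates, apply business rules.
    TRUNCATE TABLE clean.Sales;
    INSERT INTO clean.Sales (OrderID, CustomerID, OrderDate, Amount)
    SELECT DISTINCT OrderID, CustomerID, CAST(OrderDate AS date), Amount
    FROM   stg.Sales
    WHERE  Amount IS NOT NULL;

    -- 3. Load: resolve surrogate keys and append to the fact table.
    INSERT INTO dwh.FactSales (DateKey, CustomerKey, Amount)
    SELECT d.DateKey, c.CustomerKey, s.Amount
    FROM   clean.Sales     s
    JOIN   dwh.DimDate     d ON d.[Date]     = s.OrderDate
    JOIN   dwh.DimCustomer c ON c.CustomerID = s.CustomerID;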
If you want to know point-in-time reference data relationships, implemented in a dimensional DW as type-2 or type-3 slowly changing dimensions, you probably won't find this in a source system.
The scale issue mentioned by garpitmzn is not just about data volumes, but also the joins necessary to restructure and denormalise the data for dimensional analysis. Using views (unless materialised) you'd repeat potentially complex joins for every query. Better to do it once at the time the dimension is loaded.
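For example, a type-2 dimension that preserves old department assignments might look roughly like this (a sketch with made-up names, not a complete load procedure):

    -- Type-2 slowly changing dimension: each change closes the current row
    -- and inserts a new one, so point-in-time relationships are preserved.
    CREATE TABLE dwh.DimEmployee (
        EmployeeKey  int IDENTITY(1,1) PRIMARY KEY,   -- surrogate key
        EmployeeID   int           NOT NULL,          -- business key from the source
        EmployeeName nvarchar(100) NOT NULL,
        Department   nvarchar(100) NOT NULL,          -- tracked attribute
        ValidFrom    date          NOT NULL,
        ValidTo      date          NULL,              -- NULL = current version
        IsCurrent    bit           NOT NULL DEFAULT (1)
    );

    -- Illustrative values for a single change.
    DECLARE @EmployeeID    int           = 42,
            @EmployeeName  nvarchar(100) = N'Jane Doe',
            @NewDepartment nvarchar(100) = N'Finance',
            @LoadDate      date          = CAST(GETDATE() AS date);

    -- Expire the old version ...
    UPDATE dwh.DimEmployee
    SET    ValidTo = @LoadDate, IsCurrent = 0
    WHERE  EmployeeID = @EmployeeID AND IsCurrent = 1;

    -- ... and insert the new one; facts loaded from now on pick up the new key.
    INSERT INTO dwh.DimEmployee (EmployeeID, EmployeeName, Department, ValidFrom, IsCurrent)
    VALUES (@EmployeeID, @EmployeeName, @NewDepartment, @LoadDate, 1);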
I have created a dimensional model which is similar in structure to the financial reporting design in the AdventureWorksDW environment, where the value of each account is held as a single value column in the fact table and the dimensions give the data its semantic meaning.
There are over a thousand columns in this model so it works well for adding or deleting additional columns. Here is a really good blog on this design: http://garrettedmondson.wordpress.com/2011/10/26/dimensional-modeling-financial-data-in-ssas/
Although this design works well for querying the dimensional model, and there are examples supporting it for dimensional analysis, I'm concerned that it is not standard for cube development or data mining, which seem to prefer wider tables.
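For reference, the structure I'm describing is roughly the following (illustrative column names, not the exact AdventureWorksDW ones):

    -- One numeric value column; the dimension keys (especially AccountKey)
    -- give each row its meaning.
    CREATE TABLE dbo.FactFinance (
        DateKey     int   NOT NULL,
        OrgKey      int   NOT NULL,
        ScenarioKey int   NOT NULL,
        AccountKey  int   NOT NULL,   -- which account this value belongs to
        Amount      money NOT NULL    -- the single value column
    );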
Questions:
Is this design categorized as Entity-Attribute-Value (EAV)?
Would a design using multiple fact tables be better, i.e. several wide fact tables (up to 10) with 200-300 columns each, but fewer rows?
Should I expect more performance issues with the much wider tables?
You are right, that specific design is considered an EAV model.
By using such a design you can easily add new accounts, hierarchies, etc. You don't need to update your model.
I would not recommend the one-column-per-measure approach. Most accounts will be null in most of the rows. Also, with such a design you need to read all of your measures even if you only need to retrieve one of them.
We use the account dimension heavily in our cubes. Unfortunately, things like shared members are not as easy to handle in SSAS as they are in Essbase.
You need to create an Account dimension which is parent-child, and you also need to have the key of this account dimension in the fact table, as usual.
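A rough sketch of such a parent-child Account dimension (column names are illustrative): the parent key references the dimension's own key, and the extra columns are the usual hooks for the time balance and unary operator / custom rollup behaviour mentioned below.

    CREATE TABLE dbo.DimAccount (
        AccountKey       int IDENTITY(1,1) PRIMARY KEY,
        ParentAccountKey int NULL REFERENCES dbo.DimAccount (AccountKey),  -- self-reference = parent-child
        AccountName      nvarchar(100) NOT NULL,
        AccountType      nvarchar(50)  NULL,   -- e.g. Asset, Liability; drives time balance behaviour
        UnaryOperator    nchar(1)      NULL    -- '+', '-', '~' etc. for rollup behaviour
    );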
By using an account dimension, you get nice support for the time balance functionality. The built-in time balance functionality of SSAS is supposed to be faster than custom MDX code.
We are converting unary operators and parent-child relationships to formulas at the moment.
So basically we have normal formulas, and parents in hierarchies also work as formulas.
At the end we flatten the hierarchy, so it is not possible to drill down in the account dimension. We are using the account dimension as a calculation engine only.
It is possible to have proper hierarchies as well, but we decided not to mix custom rollup members and unary operators at the same time.
Shared members and all our formulas are implemented as custom rollup members.
I'm new to Analysis Services
My first cube has been deployed and it seems to work.
Dimension tables are ok and fact tables are ok.
My question is very simple: if I add a new record to the related data source table
and then browse the cube, I don't see the new record until I process the cube again.
In my mind, if new records are added, the cube should reflect the changes.
How do I solve this? Do I need to reprocess the cube every time a new record is added? That is impossible, of course.
You understand that essentially your cube represents a bunch of aggregated measures? That means that when the cube is processed, it looks at all the data in your fact tables and processes the measures (according to the dimensions).
The result is that you can access the data in the cube quickly and efficiently. The downside, as you have mentioned, is that when new data is added to the fact table, the cube isn't updated.
Typically there will be a daily batch job that updates the cube with the latest fact data; depending on the amount of data you have and your "real-time" requirements, this could be done more than once per day. A lot of people run it out of hours.
If you look in BIDS, you will notice on the Partitions tab that each partition has a storage mode which you can define.
I would recommend you read this article: http://sqlblog.com/blogs/jorg_klein/archive/2008/03/27/ssas-molap-rolap-and-holap-storage-types.aspx
Basically, there are a few different modes you can use:
MOLAP (Multi dimensional Online Analytical Processing)
MOLAP is the most commonly used storage type. It's designed to offer maximum query performance to the users. Data AND aggregations are stored in an optimized format in the cube. The data inside the cube is refreshed only when the cube is processed, so latency is high.
ROLAP (Relational Online Analytical Processing)
ROLAP does not have the high-latency disadvantage of MOLAP. With ROLAP, the data and aggregations are stored in relational format. This means that there will be zero latency between the relational source database and the cube.
The disadvantage of this mode is performance: it gives the poorest query performance, because no objects benefit from multidimensional storage.
HOLAP (Hybrid Online Analytical Processing)
HOLAP is a storage type between MOLAP and ROLAP. Data is stored in relational format (ROLAP), so there is also zero latency with this storage type.
Aggregations, on the other hand, are stored in multidimensional format (MOLAP) in the cube to give better query performance. SSAS listens to notifications from the source relational database; when changes are made, SSAS gets a notification and processes the aggregations again.
With this mode it's possible to offer zero latency to the users, but with medium query performance compared to MOLAP and ROLAP.
To get real-time reporting without having to reprocess your cube, you will need to try ROLAP, but beware: performance will suffer (depending on the size of your cube and server!).
I'm new to OLAP, so perhaps I don't know the right terminology to use for this question, but bear with me here.
I work with lots of hierarchical, multidimensional data where parent/aggregated cells mostly have data, but child/leaf cells are often missing data (attribute values are unknown but non-zero). I currently use a combination of scripting and SQL to work with it, but that's getting unwieldy. It seems like OLAP cubes and MDX are better suited to the structure of the data, but not necessarily to tasks I need to do with it. For example:
OLAP seems mainly designed for read-only reporting; I do a lot of modifications to the data in batch processes
OLAP seems to like having complete leaf-level data to calculate aggregates; my data has missing values at various levels
Examples of what I want to do:
Load original multi-level data into cube and preserve known parents; don't overwrite or display their values as calculated aggregates of children (which may be incomplete).
Create/update/delete cells in a cube based on results from complicated queries/joins of other cubes. Sometimes a cube needs to be transformed to use a slightly different dimension definition.
Users require estimates for unknown values. I can create decent estimates, but need to adjust them so they conform to known parents/children across all dimensions and levels (this is much harder than it sounds). I am already doing this, but it involves pulling the data out of the RDBMS into a custom executable.
Queries and calculations need to be able to handle the unknowns properly. Ideally be able to easily query how much of an aggregated cell's value is made up of estimated vs. known values, possibly compute confidence/error statistics, or check whether we can derive an exact value for an unknown when it has a known parent and all known siblings, etc.
Data can be large... up to tens of millions of fact table rows. Performance needs to be decent for batch jobs (minutes are ok, hours not so much).
Could an OLAP server and MDX be a good tool for this type of work? Are there any other tools that would work well for manipulating hierarchical/multidimensional/gap-filled data?
Those are quite some requirements for an OLAP system, interesting and challenging :-) :
- Load original multi-level data into cube and preserve known parents; don't overwrite or display their values as calculated aggregates of children (which may be incomplete).
You can change the way cubes aggregate values in a hierarchy. Doing this in one hierarchy is fine; doing it in multiple hierarchies might start to get complicated. It's worth checking twice whether there is a mathematically 'unique' solution to the problem when multiple 'special' hierarchies are involved.
Create/update/delete cells in a cube based on results from complicated queries/joins of other cubes. Sometimes a cube needs to be transformed to use a slightly different dimension definition.
Here you can use writeback (the MDX UPDATE CUBE statement), but I think it's a bit too simple for your needs. The implementation depends on the vendor. Be careful: creating cells can kill your memory, as for large cubes you can quickly have millions of cells in a subcube.
What is the sparsity of your model? That is, the number of cells with data divided by the total number of cells.
Some models have sparsities of 1e-30; there it's easy to explode if you're updating all cells ;-).
Users require estimates for unknown values. I can create decent estimates, but need to adjust them so they conform to known parents/children across all dimensions and levels (this is much harder than it sounds). I am already doing this, but it involves pulling the data out of the RDBMS into a custom executable.
This is looking complicated. The issue here is the complexity of the algorithms, whether a solution is possible in the MDX language, and how well it matches the OLAP engine (i.e. is fast enough). You're taking the risk that it explodes, but have a look at the SCOPE statement.
Data can be large... up to tens of millions of fact table rows. Performance needs to be decent for batch jobs (minutes are ok, hours not so much).
That should not be a real challenge.
To answer your question, I don't think so. We have a similar problem - in the field of genetics - and we are going to solve it by adding a dedicated calculation module to our OLAP solution. It's an interesting ongoing project.
I'm starting to study SQL Server Analysis Services and I'm working my way through the training book, as well as the Developer Training Kit. In both, I find suggestions that the number of tables used in an OLAP database (ideally, star schema) is greatly reduced from the production OLTP database.
From the training kit:
We followed the data dimensional methodology to architect the data mart schema. From some 200 tables in the operational database, the data mart schema contained about 10 dimension tables and 2 fact tables.
From what I understand, the operational databases are usually (somewhat) normalised and the data mart schemas are heavily denormalised. I also believe that denormalising data usually involves adding more tables, not less.
I can't see how you can go from 200 tables to 12, unless you only need to report on a subset of data. And if you do only need to report on a subset of data, why can't you just use the appropriate tables in the operational database (unless there are significant performance gains to be made by using a denormalised star schema)?
Denormalizing is exactly the opposite of normalizing a database. In a normalized database everything is split apart into different tables to support concurrent writes to the data. This also has the side effect of storing any given piece of data exactly once (in an ideal third-normal-form data structure). A drawback of normalizing is that reads take a lot longer, because the data is scattered and we need to join tables to make sense of it again (joins are pretty expensive operations).
When we denormalize, we take the data from multiple tables and merge it into one table. So now we have repeating data in these tables. The repeating data is useful because we no longer have to join to any other table to get it. Writing to this data store is normally a bad idea, because a change would mean a lot of writes to update all of the repeated data in a table, whereas it would take only one write in a normalized database.
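A small illustration with made-up tables: in the normalized source the customer's geography is split across several tables and joined at query time, while the denormalized dimension repeats those attributes on every row so reads need no extra joins.

    -- Normalized (OLTP): Customer -> City -> Country, joined for every query.
    SELECT c.CustomerName, ci.CityName, co.CountryName
    FROM   Customer c
    JOIN   City     ci ON ci.CityID    = c.CityID
    JOIN   Country  co ON co.CountryID = ci.CountryID;

    -- Denormalized (DW): the same attributes merged into one dimension table.
    CREATE TABLE dwh.DimCustomer (
        CustomerKey  int IDENTITY(1,1) PRIMARY KEY,
        CustomerID   int           NOT NULL,  -- business key
        CustomerName nvarchar(100) NOT NULL,
        CityName     nvarchar(100) NULL,      -- repeated from City
        CountryName  nvarchar(100) NULL       -- repeated from Country
    );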
OLTP stands for Online Transaction Processing; notice the word Transaction. Transactions are write operations, and the OLTP model is optimized for them. OLAP stands for Online Analytical Processing, Analysis being the keyword, meaning lots of reads.
Going from 200 tables to 12 in an OLTP-to-OLAP process will, perhaps surprisingly, preserve nearly all of the data in the OLTP database, plus more. The OLTP system is unable to record all of the changes over time, but OLAP specializes in this, so you get all of your historical data as well as the current data.
The star schema is probably the most common for OLAP data stores; the snowflake schema is also pretty common. You should learn about both and how to use them properly. They are just more great tools in your arsenal.
These two books from IBM will answer your questions much more thoroughly, and they are free PDFs:
http://www.redbooks.ibm.com/abstracts/sg247138.html
http://www.redbooks.ibm.com/abstracts/sg242238.html