When building a data warehouse I usually see two main approaches to the ETL process:
1. View - View of views - View of views of views - ...
Approach one lives entirely in the database and has the advantage that you don't hold much redundant data, but it can lead to performance issues.
2. Stage table (copy of data) - clear table (copy of data) - dwh table (copy of data) - ...
Approach two can be implemented with many tools, for example stored procedures and jobs, or an ETL tool like SSIS.
The advantage here is that the process is easy to understand because you can visualise it well. You also usually get very good overall ETL performance and many predefined tasks.
A drawback is that changing the process is more complex, because persistent tables have to be altered.
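A minimal T-SQL sketch of the two approaches, using hypothetical table and view names (the stage/clean tables in the second variant are assumed to exist already):

    -- Approach 1: layered views, no copied data
    CREATE VIEW stage_sales AS
        SELECT order_id, customer_id, amount, order_date
        FROM source_db.dbo.orders;
    GO
    CREATE VIEW clean_sales AS
        SELECT order_id, customer_id, amount, order_date
        FROM stage_sales
        WHERE amount IS NOT NULL;          -- cleansing rules live in the view
    GO

    -- Approach 2: persisted copies at each step
    INSERT INTO stage.sales (order_id, customer_id, amount, order_date)
    SELECT order_id, customer_id, amount, order_date
    FROM source_db.dbo.orders;

    INSERT INTO clean.sales (order_id, customer_id, amount, order_date)
    SELECT order_id, customer_id, amount, order_date
    FROM stage.sales
    WHERE amount IS NOT NULL;              -- same rule, but the result is materialised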
In the real world you usually see a mix of both, especially when many people have worked on the process.
Of course it also depends on the situation (size of tables, how are similar processes designed in this company, how complex is the ETL-process, ...).
I personally prefer to copy tables, keep the ETL process simple, and if possible do everything in the ETL tool (usually SSIS in my case), which is designed for this purpose.
But what is best practice and why?
Views and views of views will not scale with the data volumes in a DWH. When we talk about a DWH, we are talking about huge volumes of data, and integrating data from multiple sources is a common use case. Stage -> transform -> fact/dim is one of the most common ways a DWH is built to store data. Yes, this changes somewhat when we talk about HDFS and other technologies, but views will not give you the desired performance in a DWH.
I have seen many systems, and all of them have a multi-step ETL process where you first load data into the DWH from the sources and then clean/process/conform/transform it via ETL (or other tools) into your dimensional or other model.
If you want to know point-in-time reference data relationships, implemented in a dimensional DW as type-2 or type-3 slowly changing dimensions, you probably won't find this in a source system.
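To make the point-in-time idea concrete, here is a minimal sketch of how a type-2 slowly changing dimension might look, using a hypothetical customer dimension and hard-coded sample values:

    -- Hypothetical type-2 customer dimension: a change closes the old row
    -- and inserts a new one, preserving point-in-time relationships.
    CREATE TABLE dim_customer (
        customer_sk  INT IDENTITY(1,1) PRIMARY KEY,   -- surrogate key
        customer_id  INT           NOT NULL,          -- business key from source
        region       VARCHAR(50)   NOT NULL,
        valid_from   DATE          NOT NULL,
        valid_to     DATE          NULL,              -- NULL = current row
        is_current   BIT           NOT NULL DEFAULT 1
    );

    -- When a customer's region changes:
    DECLARE @customer_id INT = 42,
            @new_region  VARCHAR(50) = 'North',
            @change_date DATE = '2024-01-01';

    UPDATE dim_customer
    SET valid_to = @change_date, is_current = 0
    WHERE customer_id = @customer_id AND is_current = 1;

    INSERT INTO dim_customer (customer_id, region, valid_from, valid_to, is_current)
    VALUES (@customer_id, @new_region, @change_date, NULL, 1);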
The scale issue mentioned by garpitmzn is not just about data volumes, but also the joins necessary to restructure and denormalise the data for dimensional analysis. Using views (unless materialised) you'd repeat potentially complex joins for every query. Better to do it once at the time the dimension is loaded.
I'm new to DW concepts and SSAS. I'm reading a lot that normalized relational databases are optimal for OLTP because the typical workload is many small single-transaction batches, while denormalization is generally better for DW/BI applications because reporting queries are more batch-oriented... there were other reasons that I don't recall right now.
It sounds like the advice is to create a denormalized model, populate it from the base relational model, and then build your cubes off the denormalized model. Assuming you're using MOLAP storage, your cube will store and incrementally update your data in a multidimensional model that it builds behind the scenes.
So now we have essentially the same data stored three times!
Am I reading that right? Why do we even need that intermediate denormalized table? It can't be to optimize report queries, because those are run against the multidimensional SSAS data store. Why not just build your cubes against a data source view (DSV) whose definition is basically a view of the relational database?
The multidimensional model needs the relational model to be available as star schemas (that is what you call the "denormalized model") for loading the data. In many cases there is also some processing involved: combining data from different sources, keeping the data for reporting longer than it is needed in the OLTP world, and keeping historical views (such as old regional or department structures) available for analysis, which are not needed and hence get overwritten in the OLTP world. So this intermediate step makes sense in many cases. You might also want clear cut-off times, i.e. always report data for complete days (or, in some cases, months) rather than having some data for the last day available and some not; that makes comparing numbers for a day easier than comparing, for example, today's sales containing only the data up to 10 o'clock with the sales of the whole of yesterday.
In some simple cases, the intermediate relational data structure need not exist physically. A few days ago I prepared a prototype cube where the star schema was just a set of views on the source data. In that case the data was physically available only in its original source form and in the cube. The structure of the source data did not make the views particularly inefficient, so loading the data into the cube was fast enough for the prototype.
I'm not a trained DBA, but I perform some SQL tasks and have this question:
In SQL databases I've noticed the use of archive tables that mimic another table with exactly the same fields and that are used to receive rows from the original table when that data is deemed ready for archiving. Since I've seen examples where those tables reside in the same database and on the same drive, my assumption is that this was done to increase performance. Such tables didn't have more than about 10 million rows in them...
Why would this be done instead of using a column to designate the status of the row, such as a boolean for an in/active flag?
At what point would this improve performance?
What would be the best pattern to structure this correctly, given that the data may still need to be queried (or unioned with current data)?
What else is there to say about this?
The notion of archiving is a physical, not logical, one. Logically the archive table contains the exact same entity and ought to be the same table.
Physical concerns tend to be pragmatic. The overarching notion is that the "database is getting too big/slow". Archiving records makes it easier to do things like:
Optimize the index structure differently. Archive tables can have more indexes without affecting insert/update performance on the working table. In addition, the indexes can be rebuilt with full pages, while the working table will generally want to have pages that are 50% full and balanced.
Optimize storage media differently. You can put the archive table on slower/less expensive disk drives that maybe have more capacity.
Optimize backup strategies differently. Working tables may require hot backups or log shipping while archive tables can use snapshots.
Optimize replication differently, if you are using it. If an archive table is only updated once per day via nightly batch, you can use snapshot as opposed to transactional replication.
Different levels of access. Perhaps you want different security access levels for the archive table.
Lock contention. If your working table is very hot, you'd rather have your MIS developers query the archive table, where they are less likely to halt your operations when they run something and forget to specify dirty-read semantics.
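For reference, the basic move into an archive table can be done in one atomic statement. A minimal sketch, assuming hypothetical tables dbo.orders and dbo.orders_archive with identical column structures:

    -- Move rows older than a cut-off into the archive table in one statement.
    DELETE FROM dbo.orders
    OUTPUT DELETED.* INTO dbo.orders_archive
    WHERE order_date < DATEADD(YEAR, -2, GETDATE());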
The best practice would be not to use archive tables but to move the data from the OLTP database to an MIS database, data warehouse, or data marts with denormalized data. But some organizations have trouble justifying the cost of an additional DB system (they aren't cheap), and there are far fewer hurdles to adding another table to an existing DB.
I say this frequently, but...
Multiple tables of identical structure almost never make sense.
A status flag is a much better idea. There are proper ways to increase performance (partitioning/indexing) without denormalizing data or otherwise creating redundancies. Ten million records is pretty small in the world of modern RDBMSes, so what you're seeing is the product of poor planning or a misunderstanding of databases.
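A rough sketch of the status-flag alternative, combined with a filtered index so queries on active rows stay fast (hypothetical orders table and columns):

    -- Status flag instead of a separate archive table.
    ALTER TABLE dbo.orders ADD is_active BIT NOT NULL DEFAULT 1;

    -- Filtered index covering only the active rows.
    CREATE NONCLUSTERED INDEX ix_orders_active
        ON dbo.orders (order_date)
        INCLUDE (customer_id, amount)
        WHERE is_active = 1;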
I'm trying to figure out whether one can quickly build an SSAS cube for prototyping from just one huge, wide table without doing any ETL or custom SQL. Is it even possible?
What we are trying to do: we have a bunch of these tables for different subject areas that were denormalized, and a lot of effort went into creating and testing them. We now need a quick way to access this data and run analytical queries, but before we spend time on ETL/dimensional design we want to build a quick cube.
Please do not suggest PowerPivot or any other in-memory tools - these tables are really big and we have very limited RAM at our disposal.
Yes, it's possible. Simply use the same table for creating both the dimensions and the cube (measure groups). It's not ideal for production, but it's fine for prototyping.
Another alternative I always use in situations like this: create SQL views on top of the wide table to mimic the dimensions and facts (a dimensional model), and use those views in the data source view. If you have time to spend on creating the views, this is the best method, because at the end of the prototype you know the model and functionality work, and you only need to create the physical data warehouse and ETL when you're ready to implement in production.
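A minimal sketch of that idea, assuming a hypothetical wide table dbo.wide_sales_table with product and sales columns; the views become the dimension and fact sources in the DSV:

    -- Carve a dimension and a fact out of one wide table for SSAS.
    CREATE VIEW dbo.vw_DimProduct AS
        SELECT DISTINCT product_id, product_name, product_category
        FROM dbo.wide_sales_table;
    GO
    CREATE VIEW dbo.vw_FactSales AS
        SELECT product_id, sale_date, quantity, sales_amount
        FROM dbo.wide_sales_table;
    GO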
I am building a data warehouse. I need to get data from different sources and put it together so that I can generate reports, which will involve a lot of joining of tables. I am talking about maybe 20 tables in total, each anywhere from 100 MB to 5 GB.
I would like to know whether I should create a separate database for each table, since each table might hold an entirely different TYPE of dataset.
For example, I might have one table with 1 GB of data about the design of cars, and another table with 3 GB of sales data on those cars.
Would it be appropriate to separate these into different databases?
Please let me know what additional information is needed to advise me on this situation.
If there's a logical or business separation, by all means put them in different databases; that's just clean data application development. However, if you're going to be joining or merging the different data sets, you can save some overhead and admin cost by keeping a single database. Twenty tables in total isn't a lot (I'm working on a system that has about 3,700 tables, though ~1,600 are audits). Keep in mind that SQL Server is meant to scale up to terabytes of data, provided you have a decent model, indexes, etc.
If you're concerned with the performance of the warehouse, you can jam that server full of RAM and hard drives. To leverage the hard drives properly you'd want to look at multiple files/filegroups and doling the tables out appropriately.
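As a rough sketch of how tables can be spread across filegroups on different drives (hypothetical database name, file path, and table):

    -- Add a filegroup on a separate drive and place a large fact table on it.
    ALTER DATABASE SalesDW ADD FILEGROUP FG_Facts;
    ALTER DATABASE SalesDW ADD FILE (
        NAME = 'SalesDW_Facts1',
        FILENAME = 'E:\SQLData\SalesDW_Facts1.ndf',
        SIZE = 10GB
    ) TO FILEGROUP FG_Facts;

    CREATE TABLE dbo.FactSales (
        sale_id    BIGINT        NOT NULL,
        product_id INT           NOT NULL,
        amount     DECIMAL(18,2) NOT NULL
    ) ON FG_Facts;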
Splitting into different databases is normally done to spread I/O load. In SQL Server you can have different filegroups within the database itself if you want to spread I/O across multiple disk groups/disks. In warehousing scenarios you often deal with SAN storage for the database, and depending on the setup, some SANs won't care either way performance-wise, while others might give you additional performance if planned properly.
You can also look at table partitioning for your growing database, but in my opinion, just make sure you have plenty of good old memory; it will benefit you more than spending time and effort worrying about databases and files.
We are running 100 GB databases in a single database file and the performance is stellar. Much of the frequently accessed data resides in memory, admittedly, but with a decent table structure and sensible indexes you'll have a responsive warehouse in no time.
If you're planning on having foreign-key relationships between these tables (and it sounds like you will), then I would keep it all in one database. Typically I use separate databases only for totally separate bodies of data.
If you do separate them, you will run into some interesting challenges when you try to query both at the same time.
I'm trying to determine how I should store historical transactional data.
Should I store it in a single table where the record just gets reinserted with a new timestamp each time?
Should I break out the historical data into a separate 'history' table and only keep current data in the 'active' table?
If so, how do I best do that? With a trigger that automatically copies the data to the history table? Or with logic in my application?
Update per Welbog's comment:
There will be large amounts of historical data (hundreds of thousands of rows - eventually potentially millions)
Primarily searches and reporting operations will be run on the historical data.
Performance is a concern. The searches shouldn't have to run all night to produce results.
If the requirement is solely for reporting, consider building a separate data warehouse. This lets you use data structures like slowly changing dimensions that are much better for historical reporting but don't work well in a transactional system. The resulting combination also moves the historical reporting off your production database which will be a performance and maintenance win.
If you need this history to be available within the application, then you should implement some sort of versioning or logical deletion feature, or make everything fully contra and restate (i.e. transactions never get deleted, just reversed out and restated). Think very carefully about whether you really need this, as it will add a lot of complexity. Making a transactional application that can reconstruct historical state correctly is considerably harder than it looks. Financial software (e.g. insurance underwriting systems) fails at this far more often than you might think.
If you need the history solely for audit logging, make shadow tables and audit logging triggers. This is much simpler and more robust than trying to correctly and comprehensively implement audit logging within the application. The triggers will also pick up changes to the database from sources outside the application.
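A minimal sketch of the shadow-table-plus-trigger pattern, assuming a hypothetical dbo.orders table with order_id, status, and amount columns:

    -- Shadow table that records every update/delete, even ones made
    -- outside the application.
    CREATE TABLE dbo.orders_history (
        history_id  BIGINT IDENTITY(1,1) PRIMARY KEY,
        order_id    INT           NOT NULL,
        status      VARCHAR(20)   NOT NULL,
        amount      DECIMAL(18,2) NOT NULL,
        changed_at  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
        change_type CHAR(1)       NOT NULL          -- 'U'pdate or 'D'elete
    );
    GO
    CREATE TRIGGER trg_orders_audit ON dbo.orders
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- "deleted" holds the pre-change rows; "inserted" is empty on DELETE.
        INSERT INTO dbo.orders_history (order_id, status, amount, change_type)
        SELECT d.order_id, d.status, d.amount,
               CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END
        FROM deleted AS d;
    END;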
This question comes down to business logic: know your business requirements first and start from there. A data warehouse is a nice solution for this kind of situation, and ETL gives you lots of options for handling the data flows. Your basic concept of 'history' vs. 'active' is quite correct; your history data will be more efficient and flexible to work with if kept in a data warehouse with its dimension and fact tables.