I am working with a system that has 4 databases:
Account (stores bank accounts, transactions, etc.)
Client (client-related info)
Credit (gets rates from a 3rd-party system)
Quality (further internal calculations)
I want to create 4 fact tables, one for each database. For example, I will have an Account fact table with ClientAccount, Transaction, and Provider as its dimension tables, and 3 similar fact tables for the other databases.
My question is: does it make sense to keep each fact table in its corresponding database, i.e. create the Account fact and dimension tables in the Account database? Or is it better to create a new database for the whole star schema and put all the dimension and fact tables in that one database?
Without knowing too much about the system, I would suggest these are dimension tables rather than fact tables.
A dimension table represents an entity or an object that you can use to construct a fact. Accounts and clients seem like a good fit for this. I'm not sure what Credit and Quality are, but they may be dimensions as well.
Your fact table should represent transaction-like records. This could be sales, transactions, phone calls, or whatever your data warehouse is reporting on. This fact table would then have foreign keys to each of the dimension tables.
Regarding a single or multiple databases: I would suggest storing it in a single database. It's easier to use that way, and you don't have to worry about database links when querying your data. Your ETL process for populating these fact and dimension tables can extract the data from these four databases and load it into one database, and from there, you can build the cubes in a single database.
Unless your data volume is very small, your data warehouse should be housed in a separate database from the transactional data. A DW has a different usage pattern (OLTP vs OLAP) and will generally have a different maintenance window.
I would recommend creating all of your Dims and Facts in a single dedicated DW database. I can't think of any benefit to separating them and it would reduce your DBA overhead by not having extra databases to manage/secure/audit/document.
As for Dimensions vs Facts, data from the OLTP Account table would be used to create both a Dim and a Fact. DimAccount would, at the very least, be a degenerate dimension containing just the account number. You'd have to review your data to determine whether any of the other columns are generic attributes of the Account specifically. FactAccount would contain references to the other Dimensions (DimAccountType, DimCustomer, DimLocation, etc.).
Think of the dimensions as the values from lookup tables/dropdown lists, which exist prior to any events happening. For example, a bank can offer Checking & Savings accounts, even if they do not yet have any accounts.
Facts document an event. When an account is created, the fact record will reference all of the dimensions that describe the event, and record the measurable values associated with the event, if any.
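As a minimal sketch of that split (all column names here are invented, since the actual OLTP schema isn't shown), the account-creation event might be modeled like this:

[CODE]
-- Hypothetical sketch: dimensions exist before any event happens.
CREATE TABLE DimAccountType (
    AccountTypeKey  INT IDENTITY PRIMARY KEY,
    AccountTypeName NVARCHAR(50) NOT NULL   -- e.g. 'Checking', 'Savings'
);

CREATE TABLE DimAccount (
    AccountKey    INT IDENTITY PRIMARY KEY,
    AccountNumber NVARCHAR(20) NOT NULL     -- degenerate-style dimension
);

-- The fact documents the "account created" event.
CREATE TABLE FactAccount (
    AccountKey     INT NOT NULL REFERENCES DimAccount (AccountKey),
    AccountTypeKey INT NOT NULL REFERENCES DimAccountType (AccountTypeKey),
    DateKey        INT NOT NULL,            -- would reference a DimDate table
    OpeningBalance DECIMAL(18,2) NULL       -- a measurable value, if any
);
[/CODE]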
I have a table that stores the different software services a company offers. The services are tagged by the industry they serve, the LoB they belong to, and the technology involved in the service. A service can have multiple tags on each of Industry, LoB, and Technology.
For example, the following could be the master data:
And the transaction data could look like this:
I need to create a view that can be used to query data by Industry, LoB, and Technology tags. For the time being, I've left-outer-joined all the tag-to-service relation tables (service-technology, service-LoB, service-industry) to the services transaction table, but this produces a huge number of records, as one service can typically be tagged with up to 10-15 industries and technologies.
I just wanted to know the optimal way to model this data so that I can query for services by all three tags from within one view.
I am not a data modeling expert and this is my first venture into the data modeling side, so please pardon the 'noob'ness of my question :). I use SAP HANA as the database and expose the data via an OData service, for which I want to use this view as a data source.
If you're asking about modeling the data: normally, in your transaction table, you keep the foreign keys, not the text columns that can be obtained via those foreign keys from the master tables. I bet that's what you meant as well, but the example shows text values in the transaction tables.
Other than that, I think what you have is sound and reasonable. These "tag" tables represent different levels of granularity for the "services" table, and it can be counterproductive to combine them into a single table (examples: a single column with comma-separated tags, XML/JSON columns, or multiple columns [LOBTag1, LOBTag2, ...]), because that would make these columns non-indexable and/or hard to query. XML and JSON columns may allow some optimizations, but those should not be considered unless the columns are very many and sparse.
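For illustration, here is a hedged sketch of that design (all table and column names are invented, since the actual schema wasn't posted): one narrow, indexable junction table per tag type, plus the single view the question asks about.

[CODE]
-- Invented names throughout; one narrow junction table per tag type.
CREATE TABLE Services          (ServiceID INT PRIMARY KEY, ServiceName NVARCHAR(100));
CREATE TABLE ServiceIndustry   (ServiceID INT, IndustryID   INT, PRIMARY KEY (ServiceID, IndustryID));
CREATE TABLE ServiceLoB        (ServiceID INT, LoBID        INT, PRIMARY KEY (ServiceID, LoBID));
CREATE TABLE ServiceTechnology (ServiceID INT, TechnologyID INT, PRIMARY KEY (ServiceID, TechnologyID));

-- A single view to query services by all three tags, as described above.
CREATE VIEW ServiceTags AS
SELECT s.ServiceID, i.IndustryID, l.LoBID, t.TechnologyID
FROM Services s
LEFT OUTER JOIN ServiceIndustry   i ON i.ServiceID = s.ServiceID
LEFT OUTER JOIN ServiceLoB        l ON l.ServiceID = s.ServiceID
LEFT OUTER JOIN ServiceTechnology t ON t.ServiceID = s.ServiceID;
[/CODE]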
I'm trying to architect a data warehouse using the star schema model... any ideas would be appreciated.
Any idea what I should do to create a star schema? Some say that I should have a linking table connecting DimProjects to the fact tables. What about project hours? What is the right approach, or do I need other tables to link? Employees can work on multiple projects, and projects require man-hours... etc.
What is the best approach to modeling?
So far I have tables:
[CODE]
Dimension Tables    Measure Tables
----------------    --------------
DimEmployee         FactCRM
DimProjects         FactTargets
DimSalesDetails     FactRevenue
DimAccounts
DimTerritories
DimDate
DimTime
[/CODE]
Dimensions in a data warehouse schema represent independent entities, for example:

Dim_Employee
  EmpID (PK)
  Name
  Address

and likewise for all other dimensions.

Each dimension's key is linked to your fact table. In the case above, FactCRM would include only CRM-related measures and would be linked to its specific dimensions, depending upon the requirements.
Without knowing the columns, no one can tell you exactly what you need. Also remember that linking a dimension to a fact is itself a partial star schema, so that doesn't lead to any issues. The only caveat: if your dimensions are themselves normalized within the schema, it becomes a snowflake schema.
Another thing about facts: if you want to derive further facts from some existing facts, you have to link the fact tables as well via a unique fact ID. This is called a fact constellation. The schema then becomes a star/snowflake schema with a fact constellation.
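A rough sketch of one such star, with invented measure columns (the real ones weren't listed):

[CODE]
CREATE TABLE Dim_Employee (
    EmpID   INT PRIMARY KEY,
    Name    NVARCHAR(100),
    Address NVARCHAR(200)
);

-- FactCRM holds only CRM-related measures, keyed to its dimensions.
CREATE TABLE FactCRM (
    EmpID        INT REFERENCES Dim_Employee (EmpID),
    DateKey      INT,  -- would reference DimDate
    LeadsCreated INT,  -- invented measures for illustration
    DealsClosed  INT
);
[/CODE]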
I have a table that stores general information about a customer (name, address, etc.) that is common to all customers. I have a field called CustomerType (a list of types) that drives what other fields I need to capture. So if they are a government customer, they will see a different set of custom fields than a non-profit customer would see. I need to create forms that each different CustomerType will fill out. On the SQL side, I need to figure out the best way to store the data so that when I do reporting it is simple. I don't know the best way to attack this problem.
There are many possible approaches, each with different strengths and weaknesses; here are some to think about:
Create separate customer detail tables for each of the customer types, each containing the fields specific to that customer type. Each detail table keyed on the customer Id. The customer type does not have to be an attribute of the detail table, only of the parent Customer table.
(+) The correctly normalized solution (although you may find awkward situations where attributes are common to a subset of customer types). The tables will be fairly easy to maintain.
(-) Reports harder to write - you may find yourself using a LOT of unions or outer joins. Development against this schema is more complex, the extra logic to insert/update attributes in the correct tables for particular customer types must be encoded somewhere. This might become unmanageable if you have many customer types, or if you're adding/changing them frequently.
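A minimal sketch of this first approach (the type-specific fields are invented):

[CODE]
CREATE TABLE Customer (
    CustomerId   INT PRIMARY KEY,
    CustomerType VARCHAR(20)   NOT NULL,  -- 'Government', 'NonProfit', ...
    Name         NVARCHAR(200) NOT NULL
);

-- One detail table per customer type, keyed on the customer Id.
CREATE TABLE GovernmentCustomerDetail (
    CustomerId INT PRIMARY KEY REFERENCES Customer (CustomerId),
    AgencyCode NVARCHAR(20)               -- invented type-specific field
);

CREATE TABLE NonProfitCustomerDetail (
    CustomerId      INT PRIMARY KEY REFERENCES Customer (CustomerId),
    TaxExemptNumber NVARCHAR(20)          -- invented type-specific field
);
[/CODE]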
Expand the customer table to contain the super-set of columns required by all customer types, keyed on the customer Id.
(+) Simple, very easy to report on, simple programming logic.
(-) The customer-type specific fields are only partially dependent on the key of the customer table (customer Id) - they are really dependent on the combination of customerId/customerType.
If there are many extra fields, and few fields are common between customer types, this denormalization may result in a very wide table with an unmanageable number of columns. It could be a maintenance nightmare: the table must be modified every time a new customer type is added or changed.
You might find this a good solution if the number of unique fields required by each customer type is small and they don't change often and if ease of programming and reporting is an overriding concern.
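Sketched with the same invented fields as above, the super-set table would simply be:

[CODE]
-- All type-specific columns in one wide table, nullable where not applicable.
CREATE TABLE Customer (
    CustomerId      INT PRIMARY KEY,
    CustomerType    VARCHAR(20)   NOT NULL,
    Name            NVARCHAR(200) NOT NULL,
    AgencyCode      NVARCHAR(20)  NULL,  -- government customers only
    TaxExemptNumber NVARCHAR(20)  NULL   -- non-profit customers only
);
[/CODE]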
Store the customer specific values as name/value pairs in a generic customer Details table, keyed on customerId/customerType/key.
(+) Very simple to maintain - No data model changes are required to add a new customer type.
(-) Non-relational; makes pure SQL reporting near impossible and makes integrity constraints very difficult to add. You might see this in specialized use cases, e.g. where the data will only ever be consumed as JSON and direct reporting will never be a requirement, or in some corporate environments where it may be appealing because database changes are very hard to push through.
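And a sketch of this generic name/value variant (names invented):

[CODE]
-- One row per customer/type/key; every value is stored as text.
CREATE TABLE CustomerDetail (
    CustomerId   INT           NOT NULL,
    CustomerType VARCHAR(20)   NOT NULL,
    DetailKey    VARCHAR(50)   NOT NULL,  -- e.g. 'AgencyCode'
    DetailValue  NVARCHAR(400) NULL,
    PRIMARY KEY (CustomerId, CustomerType, DetailKey)
);
[/CODE]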
First of all, have a look at some good tutorials on database design and object-relational modelling (ORM), such as A beginner's guide to SQL database design.
My personal suggestion for your design would be to create one table to store all customers, together with some kind of unique customer id and the CustomerType. Next, create a separate table for each of the CustomerTypes, and for each customer that belongs to that type, store that customer's unique id in a column together with the CustomerType-specific fields.
Consider a data structure such as the below where the user has a small number of fixed settings.
User
[Id] INT IDENTITY NOT NULL,
[Name] NVARCHAR(MAX) NOT NULL,
[Email] NVARCHAR(2034) NOT NULL
UserSettings
[SettingA],
[SettingB],
[SettingC]
Is it considered correct to move the user's settings into a separate table, thereby creating a one-to-one relationship with the users table? Does this offer any real advantage over storing them in the same row as the user (the obvious disadvantage being performance)?
You would normally split tables into two or more 1:1 related tables when the table gets very wide (i.e. has many columns). It is hard for programmers to have to deal with tables with too many columns. For big companies such tables can easily have more than 100 columns.
So imagine a product table. There is a selling price and maybe another price which was used for calculation and estimation only. Wouldn't it be good to have two tables, one for the real values and one for the planning phase? So a programmer would never confuse the two prices. Or take logistic settings for the product. You want to insert into the products table, but with all these logistic attributes in it, do you need to set some of these? If it were two tables, you would insert into the product table, and another programmer responsible for logistics data would care about the logistic table. No more confusion.
Another thing with many-column tables is that a full table scan is of course slower for a table with 150 columns than for a table with just half of this or less.
A last point is access rights. With separate tables you can grant different rights on the product's main table and the product's logistic table.
So all in all, it is rather rare to see 1:1 relations, but they can give a clearer view of the data and even help with performance issues and data access.
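As a small sketch of the product example above (columns invented for illustration):

[CODE]
CREATE TABLE Product (
    ProductId    INT PRIMARY KEY,
    Name         NVARCHAR(100) NOT NULL,
    SellingPrice DECIMAL(18,2) NOT NULL
);

-- 1:1 related table holding only the logistics attributes.
CREATE TABLE ProductLogistics (
    ProductId  INT PRIMARY KEY REFERENCES Product (ProductId),
    Weight     DECIMAL(10,3),
    PalletSize NVARCHAR(20)
);
[/CODE]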
EDIT: I'm taking Mike Sherrill's advice and will (hopefully) clarify the point about normalization.
Normalization is mainly about avoiding redundancy and the related lack of consistency. The decision whether to hold data in one table or in two or more 1:1 related tables has nothing to do with this. You can decide to split a user table into one table for personal information, like first and last name, and another for school, graduation, and job. Both tables would remain in the same normal form as the original table, because no data is more or less redundant than before. The only column used twice would be the user id, but this is not redundant, because it is needed in both tables to identify a record.
So asking "Is it considered correct to normalize the settings into a separate table?" is not a valid question, because you don't normalize anything by putting data into a 1:1 related separate table.
Creating a new table with 1-1 relationships is not a reasonable solution. You might need to do it sometimes, but there would typically be no reason to have two tables where the user id is the primary key.
On the other hand, splitting the settings into a separate table with one row per user/setting combination might be a very good idea. This would be a three-table solution. One for users, one for all possible settings, and one for the junction table between them.
The junction table can be quite useful. For instance, it might contain the effective and end dates of the setting.
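A sketch of that three-table layout (names and the date columns are illustrative):

[CODE]
CREATE TABLE Users (
    UserId INT PRIMARY KEY,
    Name   NVARCHAR(100) NOT NULL
);

CREATE TABLE Settings (
    SettingId   INT PRIMARY KEY,
    SettingName NVARCHAR(50) NOT NULL
);

-- Junction table: one row per user/setting combination,
-- with effective and end dates as suggested above.
CREATE TABLE UserSettings (
    UserId        INT REFERENCES Users (UserId),
    SettingId     INT REFERENCES Settings (SettingId),
    SettingValue  NVARCHAR(400),
    EffectiveDate DATE NOT NULL,
    EndDate       DATE NULL,
    PRIMARY KEY (UserId, SettingId, EffectiveDate)
);
[/CODE]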
However, this assumes that the settings are "similar" to each other, in a SQL sense. If the settings are different, such as:
Preferred location as latitude/longitude
Preferred time of day to receive an email
Flag to be excluded from certain contacts
Then you have a data-type problem when storing them in a table. So, the answer is "it depends". A lot of the answer depends on what the settings look like, how they will be used, and the type of constraints on them.
You're all wrong :) Just kidding.
On a very high-load, high-volume, heavily updated system, splitting a table into 1:1 related tables helps optimize I/O.
For example, this way you can place heavily read columns onto separate physical hard drives to speed up parallel reads (the 1:1 tables have to be in different "filegroups" for this). Or you can optimize table-level locks. Etc., etc.
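In SQL Server terms, for example, the placement might be sketched like this (filegroup and table names are made up):

[CODE]
-- Assumes filegroups FG_Hot and FG_Cold already exist on different drives.
CREATE TABLE UserCore (
    UserId INT PRIMARY KEY,
    Name   NVARCHAR(100)
) ON FG_Hot;    -- heavily read columns

CREATE TABLE UserExtra (
    UserId    INT PRIMARY KEY,  -- same key, 1:1 with UserCore
    Biography NVARCHAR(4000)
) ON FG_Cold;   -- rarely read columns
[/CODE]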
But this type of optimization usually does not happen until you have millions of rows and huge read/write concurrency.
Splitting tables into distinct tables with 1:1 relationships between them is usually not practiced, because:
If the relationship is really 1:1, then integrity enforcement boils down to "inserts are done in all concerned tables, or none at all". Achieving this on the server side requires systems that support deferred constraint checking, and AFAIK that's a feature of rather high-end systems. So in many cases the 1:1 enforcement is pushed over to the application side, and that approach has its own obvious downsides.
A case where splitting tables is nonetheless advisable is when there are security considerations, i.e. when not all columns can be updated by one user. But note that, by definition, in such cases the relationship between the tables can never be strictly 1:1.
(I also suggest you read carefully the discussion between Thorsten/Mike. You used the word 'normalization' but normalization has very little to do with your scenario - except if you were considering 6NF, which I think is rather unlikely.)
It makes more sense for your settings to be not only in a separate table, but also in a one-to-many relationship between the ID and the settings. This way, you could potentially have as many (or as few) settings as required.
UserSettings
[Settings_ID]
[User_ID]
[Settings]
In fact, one could make the same argument for the [Email] field.
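To illustrate, reading one user's settings from that layout is a single filtered query (using the column names above):

[CODE]
-- Returns as many (or as few) rows as the user has settings.
SELECT Settings
FROM UserSettings
WHERE User_ID = @UserId;
[/CODE]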
I have a question about best practices related to de-normalization or table hierarchy relationships.
For a simple example, let's say I have an app that allows a user to make a payment for an order. I save the order information in the orders table, and I have another table for the payment called payments. Payments has a foreign key to the orders table.
Let's assume that I can pay with a credit card, check, or PayPal, and I want to save the information about the payment.
My question is: what is the best way to handle this relationship between the different payment data and the payments table? The types of payment all have different data associated with them. So do I denormalize the payments table, putting credit card, check, and PayPal information fields in there and then just using the fields as necessary? Alternatively, I could specify a payment type and store the information in separate tables, but then I would have to use logic on the application level to get the data out of the correct credit card, check, or PayPal information tables...
I would choose to keep the database normalized.
but then I would have to use logic on the application level to get the data out of the correct credit card, check, or PayPal information tables...
You have to use logic (or at least a mapping) in either case: whether it's which table to pull the data from, or which fields in the table to access.
What about keeping it denormalized and then creating a view to put the data back together again? You get the best of both worlds. IIRC, MySQL introduced views in version 5.
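For example (with invented column names), one view per payment type could slice the wide table back apart:

[CODE]
-- One view per payment type over the single denormalized table.
CREATE VIEW CreditCardPayments AS
SELECT PaymentId, OrderId, Amount, CardNumber, ExpiryDate
FROM Payments
WHERE PaymentType = 'credit_card';
[/CODE]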
So do I denormalize the payments table, putting credit card, check, and PayPal information fields in there and then just using the fields as necessary?
Yes. But this is not "denormalizing". If you stored order information in the client table, that would be denormalizing. Adding nullable columns to accurately describe a payment in the payments table is not.
You can use the idea of table per subclass as the ORM tools do. This would require a join for each query against the payment table but...
Create tables for each payment type, so you will have a creditcardpayment table and a checkpayment table. The common fields go in the payment table; the specific fields go in the sub tables. The sub tables' primary keys are foreign keys to the payment table's id.
To add a new payment you have to first insert the common fields into the payment table, get the id generated, then insert the specific fields into the specific sub table.
To query you have to join the subtables with the payment table. You could use a view to make that easier.
This way the database is still normalized and you have no null columns.
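A minimal sketch of that table-per-subclass layout (field names invented):

[CODE]
CREATE TABLE Payment (
    PaymentId INT IDENTITY PRIMARY KEY,
    OrderId   INT           NOT NULL,  -- FK to the orders table
    Amount    DECIMAL(18,2) NOT NULL   -- common fields live here
);

-- Sub tables: the primary key is also a foreign key to Payment.
CREATE TABLE CreditCardPayment (
    PaymentId  INT PRIMARY KEY REFERENCES Payment (PaymentId),
    CardNumber VARCHAR(19) NOT NULL
);

CREATE TABLE CheckPayment (
    PaymentId   INT PRIMARY KEY REFERENCES Payment (PaymentId),
    CheckNumber VARCHAR(20) NOT NULL
);
[/CODE]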
It partially depends on the framework (if any) that you are using. For instance: the Ruby on Rails way would generally be to store the type of the payment in the payments table and then have different, separate tables for each payment type (PayPal, Credit Card, etc).
Alternatively, if you notice that you are repeating the same data in many of the tables, Rails has a way to store all of the data in the same table, using only the fields you need, but still allowing you to have separate objects. For instance, you would have an AbstractPayment object with an abstract_payments table, but you would also have PayPalPayment and CreditCardPayment objects that both inherit from AbstractPayment and use the abstract_payments table. All you need to determine the payment type is a column in abstract_payments that tells you which type it is (probably a string, but could be an integer if you so choose). This is called STI.
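In table form, STI boils down to something like this (columns invented for illustration; only the type column is required by Rails):

[CODE]
-- Single table for all payment types; 'type' tells Rails which class to build.
CREATE TABLE abstract_payments (
    id           INT IDENTITY PRIMARY KEY,
    type         VARCHAR(50)   NOT NULL,  -- e.g. 'PayPalPayment'
    amount       DECIMAL(18,2) NOT NULL,
    card_number  VARCHAR(19)   NULL,      -- credit card rows only
    paypal_email VARCHAR(254)  NULL       -- PayPal rows only
);
[/CODE]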
No matter what framework/language you use, the same ideas can definitely apply and I think the right solution will depend on how many different types of payments you have, compared with how simple you want your database to be.
Keep it as normalized as possible. Only denormalize when the performance of a fully normalized schema requires denormalization to improve response time, and do that only on a case-by-case basis to deal with specific performance issues associated with individual queries within your application.
These are complex problems. Database normalization requires intimate domain knowledge and a skilled analysis of how that domain model will be manipulated and utilized within your application. Denormalizing for performance requires that you understand your application's usage patterns well enough to predict performance issues before they occur (waiting until they actually occur in production is too late; by then, making fundamental schema changes in the database is very expensive), and that you know which denormalization techniques to use to address them.
You need to weigh the following factors:
How much space will you waste if you put all data into a single table
How complex the SQL queries will become in either case.
If you use different tables, you'll have to use joins. If you put everything into a single table, you'll need some way to "ignore" the rows which don't matter (say, when you want to find all credit card payments, your query must ignore everything else).
The latter part gets easier when you move the special data into special tables, at the cost of more complex joins.
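For example, reusing the invented payment tables from the sketches above, the two designs compare like this:

[CODE]
-- Single wide table: filter out the rows that are "something else".
SELECT PaymentId, Amount, CardNumber
FROM Payments
WHERE PaymentType = 'credit_card';

-- Split tables: the special data comes in via a join.
SELECT p.PaymentId, p.Amount, c.CardNumber
FROM Payment p
JOIN CreditCardPayment c ON c.PaymentId = p.PaymentId;
[/CODE]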