Data warehousing model approach - SQL

We are in the process of building a health data warehouse and have been discussing its basic structure. I need your suggestions on the pros and cons of the structures below. The DWH will be used for reporting and research purposes. It will be a near-real-time data warehouse with a latency of around 5-10 minutes.
The source database has one encounter/visit table in which everything is saved; it is the central table that links everything. So if I need to trace a patient's journey in the production database, I just go to the encounter/visit table and see how many times the patient has come in for treatment, been admitted, gone home from emergency, been admitted from emergency, and so on.
Model 1 ->
An Encounter/visit table holding the common fields (encounter_id, arrival_date, care_type, etc.),
with further tables built per encounter type for the encounter-specific fields:
Encounter_Emergency (emergency-specific fields such as emergency diagnosis, triage category, etc.)
Encounter_Inpatient
Encounter_outpatient
Model 2 ->
Separate tables as base tables, with a view created on top that combines all the encounter types:
Encounter_Emergency (emergency-specific fields such as emergency diagnosis, triage category, etc.)
Encounter_Inpatient
Encounter_outpatient
Model 3 ->
An Encounter/visit table holding all the fields from the source database,
with views created per encounter type exposing the encounter-specific fields:
view_Encounter_Emergency
view_Encounter_Inpatient
view_Encounter_outpatient
These views can be further combined with the emergency_diagnosis table to get the diagnoses, the emergency_alerts table to access the alerts, etc.
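To make the options concrete, here is a minimal sketch of Model 1 and Model 3 (field names beyond those mentioned above are assumptions):

    CREATE TABLE Encounter (
        encounter_id  INT PRIMARY KEY,
        patient_id    INT,
        arrival_date  DATETIME,
        care_type     VARCHAR(20)   -- 'Emergency', 'Inpatient', 'Outpatient'
    );

    CREATE TABLE Encounter_Emergency (
        encounter_id        INT PRIMARY KEY REFERENCES Encounter(encounter_id),
        emergency_diagnosis VARCHAR(100),
        triage_category     VARCHAR(20)
    );
    -- Encounter_Inpatient and Encounter_outpatient follow the same pattern.
    -- In Model 3 the Encounter table instead carries every field from the source,
    -- and view_Encounter_Emergency etc. are simply SELECTs filtered on care_type.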

A prime consideration would be how often there will be additions, deletions, or alterations to Encounter Types.
Model 2 will require extensive rework in advance of any such change just to make sure the data continues to be captured. Either of the other two models will continue to capture reclassified data, but will require rework to report on it.
As between Models 1 and 3, the question becomes traffic. Views are comparatively easy to spin up and down, but they will put load on that big base table. That might be acceptable if the DWH won't have tons of load on it. But if there will be extensive reporting (pro tip: there's always more extensive reporting than the business tells you there will be), it may be more advantageous to break the data out into stand-alone tables.
There is, of course, ETL overhead to maintaining all of those tables.
For speed of delivery, perhaps build Model 3, but architect for Model 1 in case consumption requires the more robust model. For the record, you could build views that don't have any kind of vw_ prefix, or any other identifier in their names that lets users know they're views. Then, later, you can replace them with tables of the same name, and legacy queries against the old views will continue to work. I've done the same thing in the opposite direction, sneaking in views to replace redundant tables.
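As a hedged illustration of that naming trick (column names beyond those in the question are assumptions), a view built against the wide base table of Model 3 can later be swapped for a real table of the same name without breaking consumers:

    CREATE VIEW Encounter_Emergency AS
    SELECT e.encounter_id,
           e.arrival_date,
           e.triage_category,
           d.diagnosis_code
    FROM   Encounter e
    LEFT JOIN emergency_diagnosis d ON d.encounter_id = e.encounter_id
    WHERE  e.care_type = 'Emergency';

    -- Later, if reporting load demands it:
    -- DROP VIEW Encounter_Emergency;
    -- CREATE TABLE Encounter_Emergency ( ... );  -- populated by the ETL instead
    -- Existing queries against the name Encounter_Emergency keep working.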

Related

Capacity Management database

I am designing a database using Microsoft Access for capacity management. I have different types of equipment and need to track the power usage every month. For instance, I have an HVAC system and a Chiller system. Within each system there are different pieces of equipment, such as AHU_1, AHU_2, AHU_3, MAU_1, MAU_2, etc. in the HVAC system and CHWP_1, CHWP_2, CWP_1, CWP_2, etc. in the Chiller system.
I need to track the power usage every month. For each system I have a separate table containing its respective equipment. What would be a suitable way to track the usage? This is what I'm planning to do; I believe there are three options, described below:
1. Creating a main table called Chiller_usage that holds all the equipment, dates, and usage values. The problem I see is that the equipment names repeat for every date; the pro is that there are not many tables.
2. Creating a table per piece of equipment, each holding dates and usage. The problem is that I have around 60 to 70 pieces of equipment across 5 major systems, which would lead to a mass of tables and make queries and reports very difficult.
3. Creating a table per date with equipment and usage values. This looks promising for now because I would start with few tables, but as time goes on there would be 12 new tables each year, which is a lot in the long run.
What I'm thinking of is the first option, since it would be easy to manage when making custom queries, because I need to perform calculations for costing and usage analysis of each piece of equipment, with graphs and so on. But I believe that will be clumsy due to the repeated equipment names across dates. Are there any other viable options? Thank you.
Assuming you need to store monthly energy usage for each piece of equipment: normalize the tables. Neither the person entering the data nor the manager asking for reports needs to see the complexity of the underlying tables. The person entering the data sees a form for adding systems/equipment and a form for entering energy usage per piece of equipment per month. The manager sees a report with just the information he wants, such as energy costs per system per year. The normalized tables can be recombined into human-readable tables with queries. Access tries to make building the forms and reports as simple as clicking on the appropriate queries and then clicking Create Form/Report; in practice some assembly is required. Behind the scenes, the forms put the correct data in the correct tables and the report shows only the data the manager wants. For instance, here is a normalized table structure based on the data you provided and some assumptions:
The tables are normalized and related to one another, and a query over them shows the total power each system uses between any two dates, such as for a yearly report, turning the raw normalized rows into a readable summary.
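A rough sketch of what that normalized layout and the between-dates query might look like in SQL (table and column names are assumptions; the same structure can be built with Access's table designer and query builder):

    CREATE TABLE Systems (
        SystemID   INT PRIMARY KEY,
        SystemName VARCHAR(50)      -- 'HVAC', 'Chiller', ...
    );

    CREATE TABLE Equipment (
        EquipmentID   INT PRIMARY KEY,
        SystemID      INT REFERENCES Systems(SystemID),
        EquipmentName VARCHAR(50)   -- 'AHU_1', 'CHWP_2', ...
    );

    CREATE TABLE UsageReadings (
        ReadingID   INT PRIMARY KEY,
        EquipmentID INT REFERENCES Equipment(EquipmentID),
        ReadingDate DATE,           -- e.g. the first day of each month
        PowerUsage  FLOAT
    );

    -- Total power per system between any two dates, e.g. for a yearly report:
    SELECT s.SystemName, SUM(u.PowerUsage) AS TotalUsage
    FROM Systems s
    JOIN Equipment e     ON e.SystemID    = s.SystemID
    JOIN UsageReadings u ON u.EquipmentID = e.EquipmentID
    WHERE u.ReadingDate BETWEEN '2024-01-01' AND '2024-12-31'
    GROUP BY s.SystemName;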

How to decide on a data warehouse schema for clinical dataset?

I am trying to design a schema for a clinical dataset, MIMIC-III. I am trying to figure out a way to store the events such that users can query the dataset with ease, possibly using a star schema. Almost all of the entries, such as diagnoses, procedures, notes, charts, etc., are related to a single admission. I had a few things in mind, but I have no experience, so I am finding it difficult to figure out the best way to do this.
Create multiple fact tables; for example, one fact table for diagnoses, one for procedures, one for lab notes. But this just seems like too many fact tables with little to gain. I could have a fact table with an entry for each diagnosis per patient per admission, but would that give me more benefit than the OLTP schema already implemented?
Create one fact table with a row per admission and multiple columns/dimensions such as diagnoses, procedures, etc. The issue with this is that most admissions have multiple diagnoses, so I would have to link to a bridge table for most of these dimensions, and then the problem becomes the joins required for queries.
The third way I read about is using something like an entity-attribute-value (EAV) fact table, where each row is one fact. For example, one row could be (patientid - admissionid - (Attribute) Heart Rate Reading - (Value) 120 bpm). This would create a single fact table with almost everything inside and fewer joins required, but it would require the end user to pivot the table after querying, due to the nature of EAVs, thus making it more complicated for the end user. It would look like the i2b2 star schema.
The last way I thought of was making one entry per event in the fact table, but with many columns in the fact table to store dimensions, like (patientid, admissionid, icustay_id, diagnosis, procedure, labnote, labevent, microbiologyevent, etc.), where patientid and admissionid are populated in every row but the rest depend on the entry, so one entry could have just patientid, admissionid, and a single procedure. Due to my lack of experience I don't know how this would turn out in terms of querying, and I don't know whether rows in which almost every column is irrelevant are the way to go.
Any help would be greatly appreciated; I'm trying to have this implemented in BigQuery.
Some tests have concluded that reducing the number of joins can enhance BigQuery's performance. In other words, if you denormalize most of your data into a single table, you will probably get better results, since you will not need to perform many joins.
There are some points that you should think about when deciding on the data modelling:
Is it easy to write queries in your model?
If your model needs many joins, it will probably be difficult to build queries against. If you use nested fields to reduce the number of joins, you can simplify the queries you write. However, if you create very complex nested structures, you will again start having problems querying the model.
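For example, a hedged sketch in BigQuery Standard SQL (the dataset, table, and field names are assumptions, not the actual MIMIC-III columns) of one admissions table with nested, repeated diagnoses that can be queried without a join:

    CREATE TABLE clinical_dw.admissions (
        patient_id   STRING,
        admission_id STRING,
        admit_time   TIMESTAMP,
        diagnoses    ARRAY<STRUCT<icd9_code STRING, seq_num INT64>>,
        lab_events   ARRAY<STRUCT<item_id STRING, value FLOAT64, charted_at TIMESTAMP>>
    );

    -- Flattening a nested field with UNNEST instead of joining a separate table:
    SELECT a.admission_id, d.icd9_code
    FROM clinical_dw.admissions AS a, UNNEST(a.diagnoses) AS d
    WHERE d.icd9_code = '4280';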
Is it easy to join data in your model?
If your model spreads data over many tables, joining them correctly becomes harder. The more denormalized the data is, the easier it is to join.
Easy to update data
If you need to update your data, consider that denormalization can help: if you reduce the number of tables, you will need to update fewer of them. It's important to say that if you create very complex nested fields, they will also be difficult to update.
Make data understandable
This is the most important point. Will your data be understandable in a given model? All the previous points are technical and not exactly related to your data. You should design your model considering these points, but your data must make sense to you.
Finally, I would like to summarize some advice:
You can get better performance by denormalizing your data, as long as the data stays understandable.
Use nested fields to denormalize the model, but try not to create very complex structures (more than two levels of nesting).
Keep in mind that your model will need more disk space when you denormalize it, so your billing for storage in BigQuery will probably be higher.
I hope this helps.
Initial Reading
Try reading The Data Warehouse Toolkit; chapter 14 has a section on healthcare data modelling.
Modelling vs Storage
What you should try to figure out is which data is important and high value, and which is less important and low value. Only model and load the high-value data into the database. If you try to build the perfect data model, you are never going to deliver any useful reporting to your customers/clients.
What will be used on a daily or hourly basis? This information needs to be in your data model and loaded into the database for aggregation and slicing.
What will be used only occasionally? Once the slicing and dicing is complete, there will be further questions about the minutiae of a small cohort. This is when you go to the bulk storage and retrieve this data from your Data Lake.
Data Lake
Instrument measurements are a great example of low-value information. Most individual measurements are not useful, but your ETL could review them and make an overall determination of some kind, e.g. Blood Pressure Normal, Heart Rate High, etc.
Prescription drugs are another example of low-value information; in the database you might just set a flag if multiple prescriptions are present, or something similar. Free-form notes are another. Once a cohort has been decided on based on many other factors, you can let a data scientist process the notes with some machine learning, but doing this for all notes is not very useful.
Group/Junk Dimensions
A number of these measurement determinations could be lumped together in a group/junk dimension as a way of maintaining the grain of the fact and keeping the fact table from exploding with low-value rows. You could even hold off on modelling a junk dimension until your customers/clients start telling you about long execution times for specific types of data extracts. You can then design a junk dimension to serve these higher-value measurement determinations, enabling additional aggregation or slicing before the insightful data is extracted from the data lake.
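A minimal sketch of such a junk dimension, assuming BigQuery-style types (all names here are illustrative, not from the dataset):

    CREATE TABLE clinical_dw.dim_measurement_profile (
        measurement_profile_key INT64,    -- surrogate key
        blood_pressure_status   STRING,   -- 'Normal', 'High', ...
        heart_rate_status       STRING,   -- 'Normal', 'High', ...
        multiple_prescriptions  BOOL
    );
    -- The admission fact then carries a single measurement_profile_key instead of
    -- one low-value row per individual instrument reading.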
File Structure
In your data lake I would have many file schemas for the low-value data. These could be JSON, Parquet, CSV, or whatever you prefer. You would include the data needed to connect the records back to the fact, along with the data specific to the file type:
PatientId
AdmissionId
MeasureType
MeasureValue
Date
Time
The point is that most of this data will never be looked at, but occasionally a few records are high value. You just don't know which they will be, so you store them as cheaply as possible until they are needed.
The data lake also allows you to change your file schema as new information becomes available, and executing that change is trivial compared with changing a database star schema.
Use your favourite scripting language to create these files: Python, C#, Azure Function Apps, AWS Lambda, whatever. This will depend on your own skills and the resources available.
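Since the asker is targeting BigQuery, one way to reach these files on the rare occasions they are needed is an external table over the lake. A hedged sketch, where the bucket, dataset, and column names are assumptions based on the field list above:

    CREATE EXTERNAL TABLE clinical_dw.raw_measurements (
        PatientId    STRING,
        AdmissionId  STRING,
        MeasureType  STRING,
        MeasureValue STRING,
        MeasureDate  DATE,
        MeasureTime  TIME
    )
    OPTIONS (
        format = 'CSV',
        uris   = ['gs://my-data-lake/measurements/*.csv']
    );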

Why would all tables not be temporal tables by default?

I'm creating a new database and plan to use temporal tables to log all changes. The data stored will be updated daily but will not exceed 5,000 records per table.
Is there any reason I shouldn't just make all tables temporal?
PS: I am aware of the space usage of temporal tables; as far as I understand, this is not a problem.
I am aware of the space usage of temporal tables; as far as I understand, this is not a problem.
On the contrary - it's a pretty big problem - and there are many other downsides too.
When you use Temporal Tables (at least in SQL Server), every UPDATE operation (even if the data is unchanged) results in a copy being made in the History table (granted, under-the-hood this may be a COW-optimized copy, but it's still another conceptual entity instance).
Secondly, from my personal experience working with LoB applications: most changes to databases are not important enough to justify creating an entire copy of a row. For example, imagine a table with 4 columns (CREATE TABLE People ( FirstName nvarchar(50), LastName nvarchar(50), Address nvarchar(200), Biography nvarchar(max) )): whenever a typo in FirstName is fixed, all of the data in the other columns is copied over, even if Biography contains 4 GB worth of text data. Even if this is COW-optimized, it's still creating copies for every user action that results in a change.
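For reference, a minimal T-SQL sketch of making that example table system-versioned (the key column is an assumption; system-versioned tables require a primary key):

    CREATE TABLE dbo.People (
        PersonId  int IDENTITY PRIMARY KEY,
        FirstName nvarchar(50),
        LastName  nvarchar(50),
        Address   nvarchar(200),
        Biography nvarchar(max),
        ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PeopleHistory));

    -- Fixing a typo copies the entire old row, Biography included, into dbo.PeopleHistory:
    UPDATE dbo.People SET FirstName = N'Jane' WHERE PersonId = 1;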
Is there any reason I shouldn't just make all tables temporal?
The main reason, in my experience, is that it makes changing your table schema much harder, because the schemas (aka "table design") of the active and history tables must be identical. So if you have a NULL column that you want to change to a NOT NULL column and you have NULL values in your history table, you're stuck, at least until you write a data-transformation step that supplies the history table with valid data. It's basically creating more work for yourself with little to gain.
Also, don't confuse temporal tables with immutable, append-only data stores (like the Bitcoin blockchain). While they share similar design objectives (except true immutability), they exist to solve different problems, and if you consider the size requirements and scaling issues of the Ethereum blockchain (over a terabyte by now), that should give you another idea of why it's probably not a good idea.
Finally, even if temporal tables didn't have these issues, you would still need to go through the effort of writing your main software so that it can natively handle temporal data, and things like Entity Framework still don't have built-in support for querying temporal data.
...and even with all the historical records you've managed to save in the history table, what do you want them for? Do you really need to track every corrected typo and small, inconsequential change? How will your users react to needing to manually audit the changes to determine what's meaningful or not?
In short:
If your table design probably won't change much in the future...
AND small updates happen infrequently...
OR large updates happen regularly AND you need an audit record
...then go ahead and use Temporal Tables wherever you can.
If not, then you're just creating more future work for yourself with little to gain.
"log all changes" is not a good use case for the temporal features in SQL.
The use case for the SYSTEM TIME temporal feature is when there is a pressing requirement obligating you/the user to be able to easily and quickly "reconstruct" the state your database was in at a given moment in time. That is difficult, error-prone, and expensive if all you have is a true log of past changes. But if you can make do with keeping just a log of the changes, then do that (it will be difficult, error-prone, and expensive to recreate past database states from the current state and your log, but that's not a pressing problem if there's no pressing need).
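As an illustration of that reconstruction (SQL Server syntax, reusing the hypothetical People table from the earlier answer):

    -- The table exactly as it looked at a past moment:
    SELECT *
    FROM dbo.People
    FOR SYSTEM_TIME AS OF '2023-06-01T00:00:00';

    -- Every version of one row over an interval:
    SELECT *
    FROM dbo.People
    FOR SYSTEM_TIME BETWEEN '2023-01-01' AND '2023-12-31'
    WHERE PersonId = 1;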
Also note that the SQL temporal features also encompass the notion of BUSINESS TIME, which is a different time dimension from SYSTEM TIME. Business time is targeted at keeping a history of the world situation; system time is targeted at keeping a history of your database itself, that is, a history of your records of the world situation.

Best Practices: Redundant data or multiple joins?

Suppose you have the following tables: Orders, Customers, Events, Lines, and LineAssignments. The only table that I can modify is the LineAssignments table.
Event 1 <---> * Orders
Customer 1 <---> * Orders
Order 1 <---> * LineAssignments
Line 1 <---> * LineAssignments
Different pages display different combinations of info with the line assignments. For example, on some pages I only display the event info with the line assignments, while on other pages I display the order info with them, etc.
Basically, whenever I add a new line assignment, should I also store the EventID, CustomerID, and OrderID, or should I store only the OrderID and then do multiple joins to get the other data? Would it be better to create a view that joins these tables?
I tend to follow the school of thought that data should only be represented once in a database. This means, in your place, I would attempt to get what I need from multiple joins and only store OrderID.
The reason is that if there's any chance the data stored in the other tables (the data you would copy over to the LineAssignments table) gets updated, the copied data would become wrong. I don't see it being very likely that the data in the other tables will change, but in the off chance that it does, you'd be better off with the joins than with potentially incorrect data.
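If a view helps, here is a hedged sketch (column names are assumptions) that keeps only OrderID on LineAssignments and assembles the rest with joins:

    CREATE VIEW LineAssignmentDetails AS
    SELECT la.LineAssignmentID,
           la.LineID,
           o.OrderID,
           c.CustomerID,
           c.CustomerName,
           e.EventID,
           e.EventName
    FROM   LineAssignments la
    JOIN   Orders    o ON o.OrderID    = la.OrderID
    JOIN   Customers c ON c.CustomerID = o.CustomerID
    JOIN   Events    e ON e.EventID    = o.EventID;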
It is simply a question of performance. Generally, you should stick to 3NF, i.e. no redundancy. While this gives very tight and elegant data structures, it can also lead to heavy performance issues.
This is usually the case if your database holds both productive and historical data, i.e. grows over time.
When issuing the joined queries, your RDBMS will load as much information as possible into memory, usually index information, to speed up your query. Now, if your indexes are so big that they don't fit into memory, your RDBMS (no, the OS in fact) will have to swap, which is a performance killer.
The real deal (in my eyes) is to completely separate productive data (open/unpaid orders, for example) from historical data. The historical data can and should be optimized for fast retrieval, as nothing changes anymore and hard disks are cheap.
Productive data should be nice and tight (3NF). Whenever a piece of information is no longer productive (the order is paid, the parts are delivered, etc.), it is removed from the productive database and transferred to the historical data.
Read up on the topic of 'data warehouse' if you're not yet familiar with it and read about the concepts. It's quite easy to understand.

How to build a proper DB schema to have "periodic snapshots" of a table for a selected day? [closed]

Problem to be solved:
I'm new to databases and I'm trying to find the best way to store changes in a table that is a daily snapshot of some statuses: e.g. a "hotel_room_rentals" table (with 20 columns, every one of which can change).
I'd like to be able to regenerate that table for a selected day (the data changes in production, so I have to store it somewhere else), or do some other transformations on it (e.g. the average number of days rented in a period).
My theoretical example - detailed:
Let's say that I'm creating a DB for a hotel.
In the production system I have a table that shows info for all 10,000 rooms in the hotel.
This is a daily snapshot; let's assume that the table is updated once per day.
Some attributes of a room change often: e.g. is_rented, customer_number, rate_usd.
Some attributes don't change often: e.g. disabled_room, room_color, type_of_furniture.
Room_number obviously does not change (primary key).
Now I want to find the best way to track changes in this table, the best way to create statistics based on it (e.g. the average number of days rented in a period), and to be able to regenerate the table for a selected date (e.g. 2013-01-01).
MY IDEA:
Since I have no clue about databases, my idea is to copy the whole table every day with one more column, called "DB_dump_date" (holding the date). This is a pretty straightforward approach, but it will probably require a lot of space, since my 10k-rooms table will have to be copied 365 times a year.
OTHER SOLUTIONS:
On another website, I was recommended to create two tables:
A "Reservation" table with these columns: Startdate, Enddate, Room, Rate, Occupant_name
Then to transform this table into a FactReservations table: Date, Room, Is_occupied, Rate, Occupant_name
I do not understand how this helps me... in fact I assume I would have to make 20 intermediary tables and then 20 fact tables (since I have 20 columns in my table).
QUESTIONS:
What are the recommended ways to deal with such problems?
Is there any DB schema that is prepared to deal with it, without the user making magic ETLs? (e.g. a DB that can optimize the problem by itself)
What are the alternatives?
How would you, smart people, do this? (preferably in MS Access... or some freeware technology)
edit:
One more thing: everything in the table can change, not only room reservations - everything - and I want to be able to track the changes.
Stop - slow down - and take a breath.
Do not - repeat, do not - make copies of tables each day. This approach is way off base.
Your problem is a normalization problem. As you indicate, you have other suggestions on how to normalize; this is the direction you want to go.
Your goal will be to find a structure that accommodates the SQL statements that can answer your questions (and hopefully many more that you haven't thought up yet). This will be one static model where the tables do not change or get copied; only the data inside the tables changes. (Ideally, to me, there will also be few to no updates, only inserts.)
You will certainly need a ROOM table and a CUSTOMER table, and then a relation between them, possibly RESERVATION.
These can then fill up, and you can get all the answers to the questions you posed without any copying or materialization or anything - just SQL.
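A rough sketch of what that static structure and a couple of the queries might look like (column names are assumptions; date functions vary by product, the ones below are SQL Server flavoured):

    CREATE TABLE Room (
        RoomNumber   INT PRIMARY KEY,
        RoomColor    VARCHAR(30),
        DisabledRoom BIT
    );

    CREATE TABLE Customer (
        CustomerNumber INT PRIMARY KEY,
        CustomerName   VARCHAR(100)
    );

    CREATE TABLE Reservation (
        ReservationID  INT PRIMARY KEY,
        RoomNumber     INT REFERENCES Room(RoomNumber),
        CustomerNumber INT REFERENCES Customer(CustomerNumber),
        StartDate      DATE,
        EndDate        DATE,
        RateUSD        DECIMAL(10,2)
    );

    -- Which rooms were rented on a selected day?
    SELECT RoomNumber
    FROM Reservation
    WHERE '2013-01-01' BETWEEN StartDate AND EndDate;

    -- Average number of days rented per room in a period (simplified: counts only
    -- reservations falling entirely inside the period):
    SELECT RoomNumber, AVG(DATEDIFF(day, StartDate, EndDate)) AS AvgDaysRented
    FROM Reservation
    WHERE StartDate >= '2013-01-01' AND EndDate < '2014-01-01'
    GROUP BY RoomNumber;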
You need to focus on the requirements and start there. So far the requirements I see are:
- Generate that table for a selected day
- Average number of days rented in a period
If we consider two extremes of design, at the more complex end would be a data mart with SCD tables tracking changes to rooms, and at the simple end would be some kind of log table, along the lines of what you have already mentioned.
Reading between the lines, I don't really see any requirement for knowing the attributes of a room on a given day, but I do see a requirement for analysis of historical transactions.
So my suggestion is have a good hard think about your requirements before you start designing the database.
There is no magic design to cover this automatically. Dimensional design is a standard way of modelling business data to allow for easy analysis, but it might be over the top for your requirement.
Welcome to the world of databases! With that in mind, take almost everything that you know about Excel and throw it out the window. Whereas in Excel it's much more difficult to define relationships between two sheets of a workbook and report off of those two sheets (so the majority of the time it's easier to simply copy the same data down a single sheet), it's trivially easy to do in Access or any other relational database.
Typically what you'd want to do is create several normalized tables and define relationships between them. Then, when querying, you can easily join the tables to get the data that you need.
So, working off of the assumption that you're building this for simple reporting and not to create a property management system (if you are looking at that, I'd recommend you look at some of the players in the industry, like Micros or Agilysys), and based on my experience working in the industry, I'd recommend the following table layout:
Reservations – holds the reservation information (guest name, arrival date, departure date, check-in date, check-out date, rate if you use a blended rate, etc.)
Rooms – holds information on your rack (number, wing code, max guests, # beds, smoking/non, view, type, etc.)
Room Status – only if you need to track whether a room is on reserve/hold/OOO/OTM (status type, date start, date end)
Room Status Types – types of room status holds and how they affect inventory (type, out-of-inventory flag)
Rates (if you don't use a blended rate) – one entry per reservation per night (guest, rate)
Personally, I'm a huge fan of using surrogate keys for the unique identifiers, because all too often I've been burned when something changed in the business process and a natural key that was previously unique could suddenly be duplicated. In that vein, each table would have a surrogate key, and the joins would be as follows:
Reservations – Rooms (many to one)
Rooms – Room Status (one to many)
Room Status – Room Status Types (many to one)
Reservations – Rates (one to many)
If you define the relationships properly in Access (i.e. foreign-key relationships in other DBMSs), it should automatically use them to build your joins when creating your queries (called views in just about every other DBMS) or reports.
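A hedged sketch of part of that layout as DDL with surrogate keys (column names are assumptions; in Access these would be AutoNumber fields with relationships drawn in the Relationships window):

    CREATE TABLE Rooms (
        RoomID     INT PRIMARY KEY,
        RoomNumber VARCHAR(10),
        WingCode   VARCHAR(10),
        MaxGuests  INT,
        Smoking    BIT
    );

    CREATE TABLE RoomStatusTypes (
        RoomStatusTypeID   INT PRIMARY KEY,
        StatusType         VARCHAR(30),
        OutOfInventoryFlag BIT
    );

    CREATE TABLE RoomStatus (
        RoomStatusID     INT PRIMARY KEY,
        RoomID           INT REFERENCES Rooms(RoomID),
        RoomStatusTypeID INT REFERENCES RoomStatusTypes(RoomStatusTypeID),
        DateStart        DATE,
        DateEnd          DATE
    );

    -- Reservations and Rates follow the same pattern, with foreign keys
    -- matching the one-to-many relationships listed above.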
For learning about databases I’d recommend that you review:
Wikipedia on Join types
Wikipedia on Slowly Changing Dimensions (you could use some of these techniques to record changes in room information over time; see the sketch after this list)
Wikipedia on Relational Databases
Office documentation on Access
Kimball Group Design Tips (great for data warehouse/data mart design)
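Since the question is specifically about tracking how room attributes change over time, here is a hedged sketch of a Type 2 slowly changing dimension for rooms (column names and types are assumptions based on the question):

    CREATE TABLE DimRoom (
        RoomKey         INT PRIMARY KEY,   -- surrogate key
        RoomNumber      INT,               -- natural key from production
        RoomColor       VARCHAR(30),
        TypeOfFurniture VARCHAR(50),
        DisabledRoom    BIT,
        EffectiveFrom   DATE,
        EffectiveTo     DATE,              -- e.g. '9999-12-31' for the current row
        IsCurrent       BIT
    );

    -- A new row is inserted only when an attribute actually changes, so the
    -- state of a room on any selected day can still be reconstructed:
    SELECT *
    FROM DimRoom
    WHERE RoomNumber = 101
      AND EffectiveFrom <= '2013-01-01'
      AND EffectiveTo   >  '2013-01-01';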
If you need to use your existing table, then the following is not applicable; if the data can be migrated to a new schema, then this will readily address the challenge. TRE is an approach that uses the current-view paradigm for development but fully supports the time dimensions of data (system time = when the data goes into the DB, and valid time = the business time that applies to the data). Working in the current-view approach of TRE, this sort of problem is straightforward. Take a look at: http://youtu.be/V1EcsuJxUno