Handling properties as a dimension - SSAS
Samples are collected from patients. Depending on the type of sample, samples can have different properties/attributes. There are 5 properties/attributes that are standard across all samples, but each sample can also have 2 dynamic properties, and these vary by sample type. I am trying to model this scenario.
The following is what I was thinking from a design perspective:
Sample Dimension
Sample Fact (related to Sample Dimension table)
I have also created a table called Sample Properties, which has a single row per property of a sample.
Sample Fact and Sample Dimension are 1-to-1, whereas Sample Dimension to Sample Properties is 1-to-many.
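A minimal sketch of the layout described above, purely for illustration - all table names, column names and the placeholder measure are my own assumptions, not taken from an existing schema:

    -- Sample dimension: one row per sample, carrying the 5 standard properties.
    CREATE TABLE DimSample (
        SampleKey     INT PRIMARY KEY,
        SampleType    VARCHAR(50),
        StandardProp1 VARCHAR(100),
        StandardProp2 VARCHAR(100),
        StandardProp3 VARCHAR(100),
        StandardProp4 VARCHAR(100),
        StandardProp5 VARCHAR(100)
    );

    -- Sample fact, 1-to-1 with the dimension (enforced by the unique key).
    CREATE TABLE FactSample (
        SampleKey   INT NOT NULL UNIQUE REFERENCES DimSample (SampleKey),
        SampleCount INT   -- placeholder measure; the post doesn't name one
    );

    -- One row per dynamic property (up to 2 per sample), 1-to-many from DimSample.
    CREATE TABLE SampleProperties (
        SampleKey     INT NOT NULL REFERENCES DimSample (SampleKey),
        PropertyName  VARCHAR(100),
        PropertyValue VARCHAR(255)
    );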
I am really confused about how to model this scenario so that anyone who wants to look at the sample properties can do so. The sample properties won't be analyzed; they will just be looked at.
I would really appreciate it if someone could guide me on the design of the cube. If any more detail is needed, please let me know.
Related
How to populate all possible combinations of values in columns, using Spark/normal SQL
I have a scenario where my original dataset looks like the one below.

Data:

    Country,Commodity,Year,Type,Amount
    US,Vegetable,2010,Harvested,2.44
    US,Vegetable,2010,Yield,15.8
    US,Vegetable,2010,Production,6.48
    US,Vegetable,2011,Harvested,6
    US,Vegetable,2011,Yield,18
    US,Vegetable,2011,Production,3
    Argentina,Vegetable,2010,Harvested,15.2
    Argentina,Vegetable,2010,Yield,40.5
    Argentina,Vegetable,2010,Production,2.66
    Argentina,Vegetable,2011,Harvested,15.2
    Argentina,Vegetable,2011,Yield,40.5
    Argentina,Vegetable,2011,Production,2.66
    Bhutan,Vegetable,2010,Harvested,7
    Bhutan,Vegetable,2010,Yield,35
    Bhutan,Vegetable,2010,Production,5
    Bhutan,Vegetable,2011,Harvested,2
    Bhutan,Vegetable,2011,Yield,6
    Bhutan,Vegetable,2011,Production,3

Now, there is a very small country lookup table which lists all the possible countries the source data can come with (it was shown as an image in the original post; judging by the expected output below, it contains US, Argentina, Bhutan, India, Nepal and Bangladesh).

I want the output data's number of columns to always be fixed (this is to ensure the reporting/visualization tool doesn't receive a dynamic number of columns with each day's new source data ingestion, depending on the varying number of distinct countries present). So I have to somehow join the source data with the country_lookup csv and populate all those country columns with a default value of F. Every country column would be binary, with T or F as the possible values.

The original dataset above has to be converted into the one below. (I've kept the Amount field for rows whose Type is Derived Yield as unevaluated formulae rather than calculating them, for better understanding and so you can match them against the formula further down.)

Data:

    Country,Commodity,Year,Type,Amount,US,Argentina,Bhutan,India,Nepal,Bangladesh
    US,Vegetable,2010,Harvested,2.44,T,F,F,F,F,F
    US,Vegetable,2010,Yield,15.8,T,F,F,F,F,F
    US,Vegetable,2010,Production,6.48,T,F,F,F,F,F
    US,Vegetable,2010,Derived Yield,(2.44+15.2)/(6.48+2.66),T,T,F,F,F,F
    US,Vegetable,2010,Derived Yield,(2.44+7)/(6.48+5),T,F,T,F,F,F
    US,Vegetable,2010,Derived Yield,(2.44+15.2+7)/(6.48+2.66+5),T,T,T,F,F,F
    US,Vegetable,2011,Harvested,6,T,F,F,F,F,F
    US,Vegetable,2011,Yield,18,T,F,F,F,F,F
    US,Vegetable,2011,Production,3,T,F,F,F,F,F
    US,Vegetable,2011,Derived Yield,(6+10)/(3+9),T,T,F,F,F,F
    US,Vegetable,2011,Derived Yield,(6+2)/(3+3),T,F,T,F,F,F
    US,Vegetable,2011,Derived Yield,(6+10+2)/(3+9+3),T,T,T,F,F,F
    Argentina,Vegetable,2010,Harvested,15.2,F,T,F,F,F,F
    Argentina,Vegetable,2010,Yield,40.5,F,T,F,F,F,F
    Argentina,Vegetable,2010,Production,2.66,F,T,F,F,F,F
    Argentina,Vegetable,2010,Derived Yield,(2.44+15.2)/(6.48+2.66),T,T,F,F,F,F
    Argentina,Vegetable,2010,Derived Yield,(15.2+7)/(2.66+5),F,T,T,F,F,F
    Argentina,Vegetable,2010,Derived Yield,(2.44+15.2+7)/(6.48+2.66+5),T,T,T,F,F,F
    Argentina,Vegetable,2011,Harvested,10,F,T,F,F,F,F
    Argentina,Vegetable,2011,Yield,90,F,T,F,F,F,F
    Argentina,Vegetable,2011,Production,9,F,T,F,F,F,F
    Argentina,Vegetable,2011,Derived Yield,(6+10)/(3+9),T,T,F,F,F,F
    Argentina,Vegetable,2011,Derived Yield,(10+2)/(9+3),F,T,T,F,F,F
    Argentina,Vegetable,2011,Derived Yield,(6+10+2)/(3+9+3),T,T,T,F,F,F
    Bhutan,Vegetable,2010,Harvested,7,F,F,T,F,F,F
    Bhutan,Vegetable,2010,Yield,35,F,F,T,F,F,F
    Bhutan,Vegetable,2010,Production,5,F,F,T,F,F,F
    Bhutan,Vegetable,2010,Derived Yield,(2.44+7)/(6.48+5),T,F,T,F,F,F
    Bhutan,Vegetable,2010,Derived Yield,(15.2+7)/(2.66+5),F,T,T,F,F,F
    Bhutan,Vegetable,2010,Derived Yield,(2.44+15.2+7)/(6.48+2.66+5),T,T,T,F,F,F
    Bhutan,Vegetable,2011,Harvested,2,F,F,T,F,F,F
    Bhutan,Vegetable,2011,Yield,6,F,F,T,F,F,F
    Bhutan,Vegetable,2011,Production,3,F,F,T,F,F,F
    Bhutan,Vegetable,2011,Derived Yield,(2.44+7)/(6.48+5),T,F,T,F,F,F
    Bhutan,Vegetable,2011,Derived Yield,(10+2)/(9+3),F,T,T,F,F,F
    Bhutan,Vegetable,2011,Derived Yield,(6+10+2)/(3+9+3),T,T,T,F,F,F

Formula for populating the Amount field for the Derived Yield rows:

Derived Amount = (sum of Harvested amounts of all countries flagged T) / (sum of Production amounts of all countries flagged T), grouped by the Year and Commodity columns.

So the target is to take every combination of the countries from the source, calculate the sums of the respective Harvested and Production values, and then divide the two. There can be more than one commodity for any given country in the actual scenario, but that should not matter, since the summation of Amount happens grouped by Commodity and Year.

Note: the users in the frontend can select any combination of countries. The sole purpose of doing this in the backend rather than dynamically in the frontend is that AWS QuickSight (our visualisation tool), even though it can sum over selected column filters, doesn't yet support calculations on those derived summed fields. Hence the entire calculation for every combination of countries has to be pre-populated (a very naive approach) in order to make it available in the report when users dynamically select countries.

If you have a better approach than the naive one described in the note, you are most welcome to guide me. I've also posted a question on the same problem without describing my expected approach, for experts to show how this kind of problem can be solved better than with this naive approach; if you want to help solve it with some other technique you're most welcome, here is the link to that question. Any help will be greatly appreciated.
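For what it's worth, a minimal Spark SQL sketch of the naive approach described above, just to make its shape concrete - the table name source_data and the hard-coded country list are assumptions (in practice the CASE list and the country combinations would be generated from the country_lookup table so that the column set stays fixed):

    -- Fixed country flag columns for the ordinary (non-derived) rows.
    SELECT
        Country, Commodity, Year, Type, Amount,
        CASE WHEN Country = 'US'         THEN 'T' ELSE 'F' END AS US,
        CASE WHEN Country = 'Argentina'  THEN 'T' ELSE 'F' END AS Argentina,
        CASE WHEN Country = 'Bhutan'     THEN 'T' ELSE 'F' END AS Bhutan,
        CASE WHEN Country = 'India'      THEN 'T' ELSE 'F' END AS India,
        CASE WHEN Country = 'Nepal'      THEN 'T' ELSE 'F' END AS Nepal,
        CASE WHEN Country = 'Bangladesh' THEN 'T' ELSE 'F' END AS Bangladesh
    FROM source_data;

    -- One Derived Yield row for one country combination ({US, Argentina} here):
    -- sum(Harvested) / sum(Production) over the flagged countries, grouped by
    -- Commodity and Year. The naive approach repeats this per combination.
    SELECT
        s.Commodity, s.Year, 'Derived Yield' AS Type,
        SUM(CASE WHEN s.Type = 'Harvested'  THEN s.Amount END)
          / SUM(CASE WHEN s.Type = 'Production' THEN s.Amount END) AS Amount
    FROM source_data AS s
    WHERE s.Country IN ('US', 'Argentina')
    GROUP BY s.Commodity, s.Year;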
Binary Sankey Diagram in Tableau - Not All Activities Match The Corresponding Number of KPIs
How do I link my Activities variable to only the corresponding KPIs variable?

Using guidance from a number of sources, but primarily the genius of Jeffery Shafer articulated through the SuperDataScience video, I built a Sankey diagram for my work. For the most part it works; however, I have been trying to figure out how to adjust my Sankey diagram model to line up each activity with ONLY the corresponding KPIs, and am having no luck.

The data structure looks like this: You'll note I changed the binary values to "", 2 instead of 0, 1, as it makes visual calculations easier. For the "Viz" variable, I have "Activity" for the raw data set, then I copy/paste/replicate the data to mirror it (required for the model), but with "KPI" for the mirrored data.

In the following image, you'll see that my main issue is that the smallest represented activity still shows as corresponding to all KPIs, when in fact it does not. I want each activity to line up only with its corresponding KPIs, as some activities don't correspond with all, or even any, KPIs.

Finally, here is the model, very similar to what the above video link shows: Can someone provide insight into how I can adjust the model so that activities link only to their corresponding KPIs? I appreciate any insight. Thanks!
I have a solution to the issue, thanks to a helpful Tableau support member named Anthony. It was in the data structure. The data was not structured to associate each "Activities" value only with its own "KPI" values, as Tableau requires, but rather every "Activities" value with every "KPI" value. To achieve the desired result, the data needs to be restructured so that it only contains a row for each valid "Activities" and "KPI" combination (the visual in the original answer shows the invalid rows being removed). Once the table is restructured, the desired visual result comes together with the model. It works like a charm! Good luck out there!
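Roughly, the restructuring amounts to deleting the rows that pair an activity with a KPI it isn't actually linked to. As a sketch only - the table and column names (sankey_rows, LinkValue) are assumptions, and the same filtering can be done in whatever tool holds the source data:

    -- The question recodes "linked" as 2 (and "not linked" as blank), so keeping
    -- only rows whose flag is 2 leaves one row per valid Activities/KPI pair.
    DELETE FROM sankey_rows
    WHERE LinkValue IS NULL OR LinkValue <> 2;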
Do I need a database for this application?
I have a very large amount of data that would most naturally be represented as a tree:

    Category 1
        Sub-category 1
            data point 1
                attribute 1
        Sub-category 2
            data point 1
                attribute 1
                attribute 2
            data point 2
    Category 2
        Sub-category 1
            Sub-category 1
                data point 1
            Sub-category 2
                data point 1
                data point 2
        Sub-category 2
            data point 1
            data point 2
            data point 3
    ...

The individual data points have text and numerical attributes, but the data isn't really suited to representation as a set of related tables. I would like to be able to perform SQL-like queries, but I would also like to be able to browse through the data in a way that makes its tree structure obvious, like with a file manager. There's probably some class of application that is ideal for such a thing, but it isn't occurring to me at the moment. Some kind of combination of a database and a tree viewer control? Does anyone know what it is I'm looking for?

As always, I'm terrified of asking a question in the wrong forum, but I see some related questions here at Stack Overflow, so hopefully it's OK. Thanks!
You could make a table like this:

    id
    name
    parent_id

This structure would allow for nested categories. You could then make another table that relates categories to data points.
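A minimal sketch of that layout; the data_point table and its columns are illustrative assumptions:

    -- Adjacency-list category table; a NULL parent_id marks a top-level category.
    CREATE TABLE category (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        parent_id INTEGER REFERENCES category (id)
    );

    -- Data points attached to a category; if the number of attributes varies,
    -- they could live in a third table with one row per attribute.
    CREATE TABLE data_point (
        id            INTEGER PRIMARY KEY,
        category_id   INTEGER NOT NULL REFERENCES category (id),
        name          TEXT,
        text_value    TEXT,
        numeric_value REAL
    );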
The javax.swing package contains several table and tree components, such as the JTable and JTree classes. A JTree can easily be constructed to produce the tree structure you are looking for (it looks like a file directory). The JTable class can be used to create sortable and searchable tables, although you would have to borrow or write your own sort and search methods. Although these are from Java, other languages offer similar components that may serve your needs without using a database. That being said, MySQL is a very easy-to-use database, and you can download the community edition for free.
Arranging dimensions for clustering with SSAS
I am having some trouble with SSAS and data mining - specifically the Microsoft Clustering package. I intend to ultimately do my work in AMO and MDX, but for now I'm just happy to understand how it works in BIDS via Visual Studio. One step at a time!

The whole problem is around clustering both "vertically" and "horizontally" (separately) from a table that is organized vertically. My main source data table in my OLTP database looks like this:

    ID_NUM {numbers 1 - 20,000}
    TECK_ID {numbers 1 - 500, for each ID_NUM} (though I just grabbed a few of these for playing around with the data in the screencaps)
    TECK_VALUE {a double, the 'fact' bit}

So: 10 million rows of two ints and a double. Which looks like this - http://i.imgur.com/KG1LhaJ.jpg

So I create a new Analysis Services project in Visual Studio, set up a Data Source, and bring the above table, as well as two "dimension tables" (the identity of what each id_num is, and the names of what each teck_id is), into a Data Source View and link it up, matching the appropriate keys. Which looks like this - http://i.imgur.com/Q0vgwIc.jpg

Next I want to manipulate how my data is represented, so I go to set up a cube from this Data Source View. I create dimensions based on my two "dimension" tables (the "id_num" primary key one and the "teck_id" primary key one) and create a single measure (as a sum) from the teck_value column of my main table. This all seems to compile successfully. Which looks like this - http://i.imgur.com/y5pUSjh.jpg

The reason I think everything has worked well is that I can arrange my data how I want by browsing the cube. I am able to define my "rows" as either the id_num or the teck_id, with the other one filling up the columns. The measure teck_value always makes up the dataset of the table. This is exactly how I want it - the flexibility to arrange my data both ways. Which looks like this - http://i.imgur.com/ugLUkgg.jpg and this - http://i.imgur.com/RwQgj58.jpg

Beautiful! Now I wish to do some mining on this basis! I wish to, quite simply, use Microsoft Clustering to (separately):

Assign each TECK_ID a cluster number based on how it varies across each ID_NUM
Assign each ID_NUM a cluster based on how it varies across each TECK_ID

Seemingly a simple requirement - just changing what is represented as "rows" and what as "columns" - which I already appear to be able to do through the cube browser. From my uneducated perspective, this seems to be one of the main points of OLAP rather than OLTP! Yet when I try to set this up I fail utterly. The Clustering Wizard leaves me confounded and I come up with nonsense results. I am given the option of selecting a key (for which I can choose either of the above), but no option to parse by the other dimension. Indeed, the only thing I can choose to mine on is TECK_VALUE, which isn't any good, as that doesn't separate out the different fields! My wizard looks like this - http://i.imgur.com/lHfasv0.jpg

So I am left in a pickle. I really don't want to go back and lay out my OLTP database horizontally because 1) this would mean having 20k columns when I try to categorize my TECK_IDs, and 2) I was hoping SSAS and OLAP could give me the flexibility I need to mine the fields that I want - isn't "chop up the data how you like" part of the reason you set up a cube?

Bonus points for helping me with the AMO / MDX side as well! :)
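Not an answer from the thread, but for context, a sketch of the case-per-row shape the Clustering wizard generally expects when no nested table is used - the tall table is called teck_values here purely as an assumption:

    -- Pivot the tall table into one row per ID_NUM with one column per TECK_ID,
    -- so that each ID_NUM becomes a case that can be clustered on its TECK values.
    SELECT
        ID_NUM,
        SUM(CASE WHEN TECK_ID = 1 THEN TECK_VALUE END) AS TECK_1,
        SUM(CASE WHEN TECK_ID = 2 THEN TECK_VALUE END) AS TECK_2,
        SUM(CASE WHEN TECK_ID = 3 THEN TECK_VALUE END) AS TECK_3
        -- ...one column per TECK_ID of interest
    FROM teck_values
    GROUP BY ID_NUM;
    -- Clustering the TECK_IDs instead would use the transposed pivot:
    -- one row per TECK_ID with one column per ID_NUM.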
Accommodating Dynamic Hierarchies in a Data Warehouse Model
I am building a data warehouse for the core ERP application of the company I work for, for a particular client. Most of the data in the source database that relates to hierarchies in the data warehouse is stored in columns, as shown below:

But traditionally, to my knowledge, the model for storing dimension data is as follows:

I could pivot the data and fit it into the model shown above. But the issue comes when a user introduces a new hierarchy level. Say, for instance, the user in the future decides to define a new level called Product Sub Category. Then my entire data warehouse model would collapse, with no way to accommodate the newly defined hierarchy level.

Do let me know a way to overcome this situation. I hope my question is clear enough. Just let me know if further details are needed.
Well, nothing should collapse -- the ETL should extract and load the data as always. Here are a few options to consider:

Simply add one more column for the new level to dimProduct.
Try using a hierarchy helper table.
Consider adding a path string attribute to dimProduct.
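A rough sketch of the last two options, assuming the product dimension is called dimProduct as in the answer; the column names are illustrative only:

    -- Hierarchy helper (bridge) table: one row per ancestor/descendant pair,
    -- so a new level such as Product Sub Category adds rows, not columns.
    CREATE TABLE dimProductHierarchy (
        AncestorKey   INT NOT NULL,
        DescendantKey INT NOT NULL,
        LevelsApart   INT NOT NULL,   -- 0 = self, 1 = direct child, ...
        PRIMARY KEY (AncestorKey, DescendantKey)
    );

    -- Path string attribute on dimProduct: a new level only lengthens the path.
    ALTER TABLE dimProduct
        ADD HierarchyPath VARCHAR(400);   -- e.g. 'Food|Vegetables|Carrots'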