SSAS Row Count Aggregation - sql

Hi, I have a table like this:
idCustomer | idTime | idStatus
---------------------------------
1 | 20010101 | 2
1 | 20010102 | 2
1 | 20010103 | 3
2 | 20010101 | 1
...
I have now added this table as a factless fact table in my cube, with a measure that aggregates the row count per customer, so that for each day I can see how many customers are at each status and can drill down to see which customers they are.
This is all well and good, but when I roll it up to the month or year level it starts summing up the values for each day, whereas I want to see the last non-empty value instead.
I'm not sure if this is possible, but I can't think of another way of getting this information without creating a fact table with the counts for each status on each day and losing the ability to drill down.
Can anyone help?

An easy way to get what you want would be to convert your factless fact table to one having a fact: the count. Just add a named calculation to the table object in the data source view. Name the calculation as you want your measure to be named, and use 1 as the expression. Then you can define a measure based on this calculation using the aggregate function "LastNonEmpty", and use it instead of your current count measure.
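If you would rather keep the change in the relational layer than in the DSV, a view that adds the constant column gives the same result. This is only a sketch; the table and column names (FactCustomerStatus, CustomerCount) are assumptions:

-- Hypothetical view equivalent to the DSV named calculation: a constant 1 per fact row.
CREATE VIEW dbo.vFactCustomerStatus AS
SELECT idCustomer,
       idTime,
       idStatus,
       CAST(1 AS int) AS CustomerCount   -- expression "1", same as the named calculation
FROM dbo.FactCustomerStatus;

Either way, the measure built on the new column uses the LastNonEmpty aggregate function, so month and year levels show the last day's value instead of a sum.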

Related

SQL REDSHIFT - Unpivot fields and group by a split function without joining

I have a data set structure similar to below.
Product Group  | Item   | Sunday_Capacity | Monday_Capacity | Tuesday_Capacity | etc.
--------------------------------------------------------------------------------------
Product GroupA | Item A | 10              | 8               | 5                | ...
I would like to get a resulting data set of the form Product Group | Item | Day of Week | Capacity.
I realize I could do this with a join for each day, but would like to unpivot this information instead. I imagine I will have to dynamically split the day-of-week value out of the column name with a split function.
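One way to get that shape without a separate join per day is a plain UNION ALL unpivot, which Redshift handles. This is only a sketch; the source table name (weekly_capacity) and column names are assumptions:

-- One branch per day column; each branch emits the day name and its capacity value.
SELECT product_group, item, 'Sunday'  AS day_of_week, sunday_capacity  AS capacity FROM weekly_capacity
UNION ALL
SELECT product_group, item, 'Monday'  AS day_of_week, monday_capacity  AS capacity FROM weekly_capacity
UNION ALL
SELECT product_group, item, 'Tuesday' AS day_of_week, tuesday_capacity AS capacity FROM weekly_capacity;
-- ...and so on for the remaining day columns.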

Best practice for saving a series of dates in SQL

I'm reworking some old programs, and in one of them I need to save a repeating series of dates in the database. The user picks days ranging from 1-31 and months ranging from 1-12 in a PHP form. Multiple choices are possible, and at least one of each must be provided.
I'll then use a daily scheduled task to check whether the value (day and month) is present and, if yes, do something.
In the old system I saved it like this:
| Days        | Months                     |
| 1,2,5,13,15 | 1,2,3,4,5,6,7,8,9,10,11,12 |
Then I exploded every row in the PHP file fired by the scheduled task and iterated over the array. If one of the dates is valid - do something.
What is best practice for this use case? I thought about solutions like saving all possible outcomes of days and months as single rows in a mapping table, but I don't think that's an elegant solution... and it needs to remain editable after being implemented.
Any suggestions?
I think you're looking at three tables.
Table one records the groups: give it a sequential group id and whatever other properties you need to record about the group of dates as a whole (e.g. the requesting user id).
The second table is just the group id from table one and the chosen days in rows, so each group has multiple rows.
The third table is the same as the second, but for months.
When you need the final result, join the second and third tables to the first on the group id. You'll automatically get a cross join between the two, giving the combinations you need.
If you're expecting a large volume of data and/or a lot of repeats of the same groups, then you may want to consider re-using the groups of days and months. It will be a similar table design, but tables 2 and 3 will have their own group ids, and table one will have two extra columns: one for the day group and one for the month group.
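A minimal sketch of that three-table layout; the table and column names (date_groups, group_days, group_months, requesting_user_id) are assumptions:

CREATE TABLE date_groups (
    group_id int identity(1, 1) PRIMARY KEY,
    requesting_user_id int        -- whatever describes the group of dates as a whole
);

CREATE TABLE group_days (
    group_id int REFERENCES date_groups(group_id),
    day int                       -- one row per chosen day
);

CREATE TABLE group_months (
    group_id int REFERENCES date_groups(group_id),
    month int                     -- one row per chosen month
);

-- Joining both detail tables to date_groups on group_id yields the day x month combinations.
SELECT g.group_id, d.day, m.month
FROM date_groups g
JOIN group_days d ON d.group_id = g.group_id
JOIN group_months m ON m.group_id = g.group_id;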
It seems you can use a dimension-like scheme and attach day-month pairs to different entities. Suppose the entity is called "task".
| tasks   |            | days     |            | months   |
| ------- |            | -------- |            | -------- |
| id_task |            | id_day   |            | id_month |
| ...     | >---M:1--- | id_month | >---M:1--- | month    |
| id_day  |            | day      |            |          |
Don't forget to add check constraints for day (1-31) and month (1-12) columns.
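A rough sketch of those tables with the check constraints, using the column names from the diagram (everything else is an assumption):

CREATE TABLE months (
    id_month int PRIMARY KEY,
    month int NOT NULL CHECK (month BETWEEN 1 AND 12)
);

CREATE TABLE days (
    id_day int PRIMARY KEY,
    id_month int NOT NULL REFERENCES months(id_month),
    day int NOT NULL CHECK (day BETWEEN 1 AND 31)
);

CREATE TABLE tasks (
    id_task int PRIMARY KEY,
    id_day int NOT NULL REFERENCES days(id_day)
    -- ...other task columns
);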
I think you should expand the data in the database. Clearly, you need a table groups (or something like that) with one row per group:
create table groups (
    group_id int identity(1, 1) primary key,
    . . . -- additional columns
);
Then, expand the dates for each group for the schedule:
create table groups_schedule (
    group_schedule_id int identity(1, 1) primary key,
    group_id int references groups(group_id),
    month int,
    day int
);
This requires multiplying out the data in the database. However, I think it is a more accurate representation. In addition, it will give you more flexibility in the future so you are not tied specifically to lists of months/days. For instance, you might have day "25" in most months, but not December.

HR Cube in SSAS

I have to design a cube for student attendance. We have four statuses (Present, Absent, Late, On vacation). The cube has to let me know the number of students who are not present over a given time span (day, month, year, etc.) and the percentage of the total that this represents.
I built a fact table like this:
City ID | Class ID | Student ID | Attendance Date | Attendance State | Total Students number
--------------------------------------------------------------------------------------------
1 | 1 | 1 | 2016-01-01 | ABSENT | 20
But in my SSRS project I couldn't use this to get the correct numbers. I have to filter by date, city and attendance status.
For example, I must be able to see that on date X there are 12 students not present, which corresponds to 11% of the total.
Any suggestions for a good structure to achieve this?
I assume this is homework.
Your fact table is wrong.
Don't store aggregated data (Total Students) in the fact as it can make calculations difficult.
Don't store text values like 'Absent' in the fact table. Attributes belong in the dimension.
Reading homework for you:
Difference between a Fact and Dimension and how they work together
What is the grain of a Fact and how does that affect aggregations and calculations.
There is a wealth of information at the Kimball Group's pages. Start with the lower-numbered tips, as they get more advanced as you move on.
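For illustration only, a hedged sketch of a fact at student/date grain with the status moved into a dimension; all names here are assumptions, not part of the original answer:

CREATE TABLE DimAttendanceStatus (
    AttendanceStatusKey int PRIMARY KEY,
    AttendanceStatusName varchar(20) NOT NULL   -- 'Present', 'Absent', 'Late', 'On vacation'
);

CREATE TABLE FactAttendance (
    DateKey int NOT NULL,
    CityKey int NOT NULL,
    ClassKey int NOT NULL,
    StudentKey int NOT NULL,
    AttendanceStatusKey int NOT NULL REFERENCES DimAttendanceStatus(AttendanceStatusKey)
    -- No stored totals: the "not present" count and its percentage of all students
    -- are calculated at query time, e.g. as measures in the cube.
);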

Calculation for month number in time series data

The data I am working with is oil and gas production data. The production table uniquely identifies each well and contains a time series of production values. I want to be able to calculate a column that contains the month number occurrence of production for every well in the production table. This needs to be a calculation, so I can graph the production for various wells based on the production month, not the calendar month. (I want to compare well performance across wells over the life of wells.) Also note that there could be gaps in the production data so you can't depend on having twelve months of sequential production for each well.
I tried using the answer in this post (RankValues), but the calculation would never finish. I have over 4 million rows of production data.
In the table shown below, the values shown in ProdMonth are what I need to calculate, based on their time occurrence shown in ProdDate. This needs to be performed as a row calculation for each unique WellID.
Thanks.
WellID   ProdDate    ProdMonth
1        12/1/2011   1
1        1/1/2012    2
1        2/1/2012    3
1        3/1/2012    4
…        …           …
1        11/1/2012   12
2        3/1/2014    1
2        4/1/2014    2
2        5/1/2014    3
2        6/1/2014    4
2        7/1/2014    5
…        …           …
2        2/1/2015    12
I would create a new date table that has a row for each day (the granularity of your data) and add the ProdMonth column to it. This ensures you have dates for all days, even if there are gaps in the well reporting data. Then you can create a relationship between the well production data and the date table on the ProdDate field. If you pull in ProdMonth from the date table, you'll have a list of all of the ProdMonths (hint: you may need to select 'show values with no data' on the field's right-click menu in the fields well). If you then add WellID to the same visualization, you should be able to see which wells were active in which ProdMonth. If WellID is a number, you might need to use the 'do not summarize' option on WellID to get the result you desire.
I posted this question on PowerPivotPro and Tom Allan provided the DAX formula I needed. The first step was to calculate a field that concatenates year and month (YearMonth). Then I used the RANKX function as such:
= RANKX ( FILTER ( Data, [WellID] = EARLIER ( [WellID] ) ), [YearMonth], , 1, DENSE )
That did the trick and performed fairly quickly on 12mm rows.
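If the data also lives in SQL Server, the same ProdMonth can be computed set-based with a window function. This is only a sketch, with the table name (Production) assumed:

SELECT WellID,
       ProdDate,
       -- Numbers each well's distinct production dates 1, 2, 3, ... in date order.
       DENSE_RANK() OVER (PARTITION BY WellID ORDER BY ProdDate) AS ProdMonth
FROM Production;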

Designing a scalable points leaderboard system using SQL Server

I'm looking for suggestions for scaling a points leaderboard system. I already have a working version using a very normalized strategy. This first version was essentially a table which looked something like this.
UserPoints - PK: (UserId,Date)
+------------+--------+---------------------+
| UserId | Points | Date |
+------------+--------+---------------------+
| 1 | 10 | 2011-03-17 07:16:36 |
| 2 | 35 | 2011-03-17 08:09:26 |
| 3 | 40 | 2011-03-17 08:05:36 |
| 1 | 65 | 2011-03-17 09:01:37 |
| 2 | 16 | 2011-03-17 10:12:35 |
| 3 | 64 | 2011-03-17 12:51:33 |
| 1 | 300 | 2011-03-17 12:19:21 |
| 2 | 1200 | 2011-03-17 13:24:13 |
| 3 | 510 | 2011-03-17 17:29:32 |
+------------+--------+---------------------+
I then have a stored procedure which basically does a GROUP BY UserId and sums the Points. I can also pass @StartDate and @EndDate parameters to create a leaderboard for a specific time period, for example, time windows for Top Users for the Day / Week / Month / Lifetime.
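A minimal sketch of what that procedure might look like; the procedure name and exact column types are assumptions:

CREATE PROCEDURE dbo.GetLeaderboard
    @StartDate datetime,
    @EndDate   datetime
AS
BEGIN
    -- Sum each user's points within the requested window, highest first.
    SELECT UserId, SUM(Points) AS TotalPoints
    FROM dbo.UserPoints
    WHERE [Date] >= @StartDate
      AND [Date] <  @EndDate
    GROUP BY UserId
    ORDER BY TotalPoints DESC;
END;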
This seemed to work well with a moderate amount of data, but things became noticeably slower as the number of points records passed a million or so. The test data I'm working with is just over a million point records created by about 500 users distributed over a timespan of 3 months.
Is there a different way to approach this? I have experimented with denormalizing the data by pre-grouping the points into hour datetime buckets to reduce the number of rows. But I'm starting to think the real problem I need to worry about is the increasing number of users that need to be accounted for in the leaderboard. The time window sizes will generally be small but more and more users will start generating points within any given window.
Unfortunately I don't have access to 'Jobs' since I'm using SQL Azure and the Agent is not available (yet). But, I am open to the idea of scaling this using a different storage system if you are convincing enough.
My past work experience tells me I should look into data warehousing since this is almost a reporting problem. But at the same time I need it to be as real-time as possible.
Update
Ultimately, I would like to support custom leaderboards that could span from Monday 8am - Friday 6pm every week. But that's down the road and why I'm trying to not get too fancy with the aggregation. I'm willing to settle with basic Day/Week/Month/Year/AllTime windows for now.
The tricky part is that I really can't store them denormalized, because I need these windows to be time-zone convertible. The system is multi-tenant, so all data is stored as UTC. The problem is that a week starts at different hours for different customers, so aggregating the sums together would cause some points to fall into the wrong buckets.
Here are a few thoughts:
Sticking with SQL Azure: you can have another table, PointsTotals. Every time you add a row to your UserPoints table, also increment the TotalPoints value for the given UserId in PointsTotals, or insert a new row if they don't have one yet (see the sketch after these suggestions). Now you always have totals computed for each UserId.
Going with Azure Table Storage: Create a UserPoints table, with Partition Key being userId. This keeps all of a user's points rows together, where you'd easily be able to sum them. And... you can borrow the idea from suggestion #1, creating a separate PointsTotals table, with PartitionKey being UserId and RowKey probably being the total points.
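For the first suggestion, a hedged sketch of the increment-or-insert step on SQL Azure; the PointsTotals column names are assumptions:

DECLARE @UserId int = 1, @Points int = 10;   -- example values from the new UserPoints row

-- Hypothetical upsert run whenever a UserPoints row is added.
MERGE dbo.PointsTotals AS t
USING (SELECT @UserId AS UserId, @Points AS Points) AS s
    ON t.UserId = s.UserId
WHEN MATCHED THEN
    UPDATE SET t.TotalPoints = t.TotalPoints + s.Points
WHEN NOT MATCHED THEN
    INSERT (UserId, TotalPoints) VALUES (s.UserId, s.Points);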
If it were my problem, I'd ignore the timestamps and store the user and points totals by day.
I decided to go with the idea of storing points along with a timespan (StartDate and EndDate columns) localized to the customer's current time zone setting. An extra benefit is that I can 'purge' old leaderboard round data after a few months without affecting the lifetime total of points.