I am stuck with this report pattern. Can anyone help me out with how to deal with this situation? Here is what I want to accomplish in SSRS.
I have a table:

        Units   Distribution
High     10     (10/30) = 33%
Low      20     (20/30) = 66%
Total    30
How can we use the Total value of the High and Low rows to calculate the distribution?
[Sample picture: a formulas table and a data table.]
I want to achieve the data table on the right of the picture, i.e. implement the formulas-table pattern on the data table using SSRS. I have already pulled the data.
Thank you
Create a dataset that returns those rows as columns, then display the table vertically (columns to rows), since the structure is fixed.
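For example, a minimal sketch of such a dataset query (the source table UnitCounts and its Level/Units columns are assumptions; adjust to your schema):

SELECT
    SUM(CASE WHEN [Level] = 'High' THEN Units ELSE 0 END) AS HighUnits,
    SUM(CASE WHEN [Level] = 'Low'  THEN Units ELSE 0 END) AS LowUnits,
    SUM(Units)                                            AS TotalUnits
FROM dbo.UnitCounts

With the fixed structure returned as a single row, each distribution cell in the report becomes a simple expression such as =Fields!HighUnits.Value / Fields!TotalUnits.Value.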
I am editing my question to state exactly what I want.
I have two columns, Actual Units and Future Units, from Fact A and Fact B respectively, but at the same granularity. I also have Demand Units from Fact B.
My requirements are:
1. Projected Units = Coalesce(Actual Units, Future Units)
2. Stock Units = IF(Projected Units > Demand Units, Demand Units, Projected Units)
3. Stock Rate = Stock Units / Demand Units
I cannot join the two facts at the data source view level and do the calculation there because they are very large tables, so I think the performance would be very slow. If doing the calculations at the data source view level is the only way, please let me know.
When calculating the grand total, MDX sums up A, sums up B, and then compares the sums.
If you want the calculation to occur at the row level (checking whether B > A), edit the Data Source View and add a new calculated column to the table your measure group is based upon. The calculated column should be:
CASE WHEN B>A THEN A ELSE B END
Then create a Sum measure based upon that new column.
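Applied to the requirements above, the DSV named calculations could look like this (a sketch only; ActualUnits, FutureUnits, and DemandUnits are assumed column names on the measure-group table):

-- ProjectedUnits = Coalesce(Actual Units, Future Units)
COALESCE(ActualUnits, FutureUnits)

-- StockUnits = the lesser of ProjectedUnits and DemandUnits
CASE WHEN COALESCE(ActualUnits, FutureUnits) > DemandUnits
     THEN DemandUnits
     ELSE COALESCE(ActualUnits, FutureUnits)
END

Stock Rate should then be a calculated measure dividing the Sum of StockUnits by the Sum of DemandUnits, so it is re-evaluated at whatever grain each query requests.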
This approach will perform much better than any purely MDX approach to calculating this at a very detailed grain. If your fact tables had 500,000 rows or fewer and you had a degenerate dimension at the same grain you need to calculate at, we could possibly do it in MDX. But since you are concerned about SQL query performance, I am assuming the tables are big. Just remember that SQL runs once, at processing time; MDX is calculated in every query, at query time. So do expensive things in SQL when you can.
How to write this expression in Power BI
select distinct([date]),Temperature from Device47A8F where Temperature>25
I am totally new to Power BI. Is there any tool that can convert a query from SQL to a Power BI expression?
I have tried many different kinds of expressions but keep getting errors. Most of the time I get this:
The expression refers to multiple columns. Multiple columns cannot be converted to a scalar value.
Need help, Thanks.
After I posted my answer, I wondered if your expected result is to get only one date per temperature; in other words, without repeated dates in your result set.
A side note: select distinct([date]),Temperature from Device47A8F where Temperature>25 returns repeated dates, since the DISTINCT keyword evaluates the distinct values of all columns specified in the SELECT statement; it doesn't return distinct values of a specific column even if you surround that column with parentheses.
Now to what brings us here: from your error I can see you are trying to use a table-valued expression (one that produces a table with multiple columns) in a measure, which only accepts scalar values (a single calculated value).
Supposing you have a table like this:
Running your SQL query, you will get the rows highlighted in yellow:
You can see the date 01/09/2016 is repeated. If you want to create a measure, you have to define what calculation you want to show for temperature, e.g. average, max, or min.
The expression below calculates, per date, the maximum temperature greater than 25:
MaxTempGreaterThan25 =
CALCULATE ( MAX ( Device47A8F[Temperature] ), Device47A8F[Temperature] > 25 )
In this case the measure MaxTempGreaterThan25 is calculated per date.
If you don't want to produce a measure but a table, then in the Power BI toolbar select the Modeling tab and click the New Table icon.
Use this expression:
MyTemperatureTable =
FILTER ( Device47A8F, Device47A8F[Temperature] > 25 )
It should produce a new table named MyTemperatureTable like this:
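Since your original SQL used DISTINCT, note that you can also de-duplicate in the calculated table. A sketch (the [Date] column name is assumed to match your model):

DistinctTempOver25 =
SUMMARIZE (
    FILTER ( Device47A8F, Device47A8F[Temperature] > 25 ),
    Device47A8F[Date],
    Device47A8F[Temperature]
)

SUMMARIZE groups by the listed columns, so the result contains each Date/Temperature combination only once, which is the closest DAX equivalent of your SELECT DISTINCT over both columns.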
I recommend you learn some DAX basics; it is quite different from SQL / T-SQL, and there are things you can't do depending on your model and data.
Let me know if this helps.
You probably don't need to write any code if your objective is to show the result in a Power BI visual e.g. a table. Power BI naturally aggregates data if the datatype is numeric (e.g. Temperature).
I would just add a Table visual on a Report page and add the Date and Temperature columns to it. Then in Visualizations / Fields / Values I would click the little down-arrow on the Temperature field and set the Aggregation e.g. Maximum. Then in Visualizations / Fields / Filters I would click the little down-arrow on the Temperature field and set the Filter e.g. is greater than: 25
Hard-coded solutions are unlikely to survive the next question from your users e.g. "but what if I want to see Temperature > 24? Or 20? Or 30?"
I know PowerPivot is not programming, but I wanted to see if I could get help or a recommendation on how to get the total rows to correctly calculate the sum of absolute values at a higher aggregation level than the detailed data in a PowerPivot data model.
Here is my example. The data is in a detailed format, and there are two hierarchies:
1. Product Group\Product Family\**Material** '3 columns'
2. Region\**Plant**\SalesMgr\Employee\Customer '5 columns'
Together the two hierarchies represent 8 columns in my data model.
All data in the data model is at the lowest level (i.e. Material/Customer). However, I want to calculate Forecast Accuracy at the Material/Plant level (a higher aggregation level).
DataModel Column9: Sales 'Sales at the detailed Material/Customer'
DataModel Column10: Forecast 'Forecast at the detailed Material/Customer'
Calculated Column11: AbsoluteError = ABS(Forecast - Sales) 'calced at Mat/Cust'
PowerPivot DAX formulas/measures, shortened for simplicity
TotalSales:=SUM(Sales)
TotalAbsError:=SUM(AbsoluteError)
MAPE:= TotalAbsError/TotalSales
ForecastAccuracy:= 1 - MAPE (Mean Absolute Percentage Error)
This calculates the data correctly at the Material/Plant level when I create my Pivot Table. However, the Total values in the Pivot Table are incorrect.
For example, the Total of AbsoluteError is the sum over all 20 detailed Material/Customer rows. I need the total over only the 5 Material/Plant combinations; the 20 rows of detailed Material/Customer data aggregate up to only 5 rows of Material/Plant combinations.
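A common pattern for this kind of "aggregate at a different grain" problem is to compute the absolute error per Material/Plant combination and then sum those 5 results, so the total no longer iterates the 20 detail rows. A sketch (the table name SalesData is hypothetical; adjust to your model):

TotalAbsErrorMatPlant :=
SUMX (
    SUMMARIZE ( SalesData, SalesData[Material], SalesData[Plant] ),
    ABS (
        CALCULATE ( SUM ( SalesData[Forecast] ) )
            - CALCULATE ( SUM ( SalesData[Sales] ) )
    )
)

MAPE and ForecastAccuracy could then be defined on top of this measure exactly as before.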
I have a sample spreadsheet I can share. Thanks Dimitry for updating the question!
I would like to post the spreadsheet / powerpivot so that the StackOverflow community can see it. I assume I have to post a link. I will start trying to figure out how.
Thanks
David
I have a query to pull clickthrough for a funnel, where if a user hit a page it is recorded as "1", else NULL:
SELECT datestamp
      ,COUNT(visits) AS Visits
      ,COUNT([QE001]) AS firstcount
      ,COUNT([QE002]) AS secondcount
      ,COUNT([QE004]) AS thirdcount
      ,COUNT([QE006]) AS finalcount
      ,user_type
      ,user_loc
FROM dbname.dbo.loggingtable
GROUP BY datestamp, user_type, user_loc  -- datestamp must appear in the GROUP BY since it is selected
I want to have a column for each ratio, e.g. firstcount/Visits, secondcount/firstcount, etc. as well as a total (finalcount/Visits).
I know this can be done:
- in an Excel PivotTable, by adding a "calculated field"
- in SQL, by grouping
- in PowerPivot, by adding a calculated column, e.g.
  =IFERROR(QueryName[finalcount]/QueryName[Visits],0)

BUT I need to give the report consumer the option of slicing by just user_type or just user_loc, etc., and Excel will tend to ADD the proportions, which won't work because
SUM(A/B) != SUM(A)/SUM(B) (e.g. 1/2 + 1/3 = 5/6, whereas (1+1)/(2+3) = 2/5)
Is there a way in DAX/MDX/PowerPivot to add a calculated column/measure, so that it will be calculated as SUM(finalcount)/SUM(Visits), for any user-defined subset of the data (daterange, user type, location, etc.)?
Yes, via calculated measures. Calculated columns are for creating values that you want to see on rows/columns/report headers; calculated measures are for creating values that you want to see in the values section of a pivot table and slice/dice by the columns in the model.
The easiest way would be to create 3 calculated measures in the calculation area of the PowerPivot sheet:
TotalVisits:=SUM(QueryName[visits])
TotalFinalCount:=SUM(QueryName[finalcount])
TotalFinalCount2VisitsRatio:=[TotalFinalCount]/[TotalVisits]
You can then slice the calculated measure [TotalFinalCount2VisitsRatio] by user_type or just user_loc (or whatever) and the value will be calculated correctly. The difference here is that you are explicitly telling the xVelocity engine to SUM-then-DIVIDE. If you create the calculated column, then the engine thinks you want to DIVIDE-then-SUM.
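The same sum-then-divide pattern extends to the other ratios you listed. For example, a sketch reusing the question's IFERROR guard and the QueryName table:

TotalFirstCount:=SUM(QueryName[firstcount])
First2VisitsRatio:=IFERROR([TotalFirstCount]/[TotalVisits],0)
Second2FirstRatio:=IFERROR(SUM(QueryName[secondcount])/[TotalFirstCount],0)

Each ratio measure divides two pre-aggregated sums, so it stays correct for any user-defined slice of daterange, user_type, or user_loc.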
Also, you don't have to break the calculation down into 3 separate measures; it's just good practice. If you're interested in learning more, I'd recommend this book; the author is the PowerPivot/DAX guru and the book is very straightforward.
Suppose I would like to store a table with 440 rows and 138,672 columns. As the SQL limit is 1,024 columns, I would like to transpose rows and columns, i.e. convert the 440 rows and 138,672 columns into 138,672 rows and 440 columns.
Is this possible?
The SQL Server limit is actually 30,000 columns; see Sparse Columns.
But creating a query that returns 30k columns (not to mention 138k+) would be basically unmanageable; the sheer size of the metadata on each query result would slow the client to a crawl. One simply does not design databases like that. Go back to the drawing board: when you reach 10 columns, stop and think; when you reach 100 columns, erase the board and start anew.
And read this: Best Practices for Semantic Data Modeling for Performance and Scalability.
The description of the data is as follows:
Each attribute describes the measurement of the occupancy rate (between 0 and 1) of a captor location as recorded by a measuring station, at a given timestamp in time during the day.
The ID of each station is given in the stations_list text file.
For more information on the location (GPS, Highway, Direction) of each station please refer to the PEMS website.
There are 963 (stations) x 144 (timestamps) = 138,672 attributes for each record.
This is perfect for normalisation.
You can have a stations table and a measurements table. Two nice long thin tables.
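A sketch of what that could look like (table names, column names, and types are assumptions, not taken from the PEMS description):

-- One row per station (963 rows), keyed by the ID from stations_list.
CREATE TABLE dbo.Stations (
    StationID int NOT NULL PRIMARY KEY
);

-- One row per record/station/timestamp instead of 138,672 columns:
-- 440 records x 963 stations x 144 timestamps.
CREATE TABLE dbo.Measurements (
    RecordID      int           NOT NULL,
    StationID     int           NOT NULL REFERENCES dbo.Stations (StationID),
    TimeSlot      smallint      NOT NULL,  -- 1..144 timestamps within the day
    OccupancyRate decimal(5, 4) NOT NULL,  -- between 0 and 1
    PRIMARY KEY (RecordID, StationID, TimeSlot)
);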