So I am a beginner at MDX and I have an MDX query that works the way I want it to as long as I put the set on either the columns or rows. If I put the same set on the filter axis, it doesn't work. I'd like to make this calculated measure independent of where that set lives. I'm guaranteed to always have some form of the set included, but I'm not guaranteed which axis the user will place it on (e.g. rows, columns, filter).
Here is the query that works:
WITH MEMBER Measures.avgApplicants as
Avg([applicationDate].[yearMonth].[month].Members, [Measures].[applicants])
SELECT
{[Measures].[applicants],[Measures].[avgApplicants]} ON 0,
{[applicationDate].[yearMonth].[year].[2015]:[applicationDate].[yearMonth].[year].[2016]} ON 1
FROM [applicants]
And the results:
|      | applicants | avgApplicants |
+------+------------+---------------+
| 2015 |        367 |            33 |
| 2016 |        160 |            33 |
However, if I shift this query around to move the set onto the filter axis I get nothing:
WITH MEMBER Measures.avgApplicants as
Avg([applicationDate].[yearMonth].[month].Members, [Measures].[applicants])
SELECT
{[Measures].[applicants],[Measures].[avgApplicants]} ON 0,
{[Gender].Members} ON 1
FROM [applicants]
WHERE ([applicationDate].[yearMonth].[year].[2015]:[applicationDate].[yearMonth].[year].[2016])
I get this:
|             |             | applicants | avgApplicants |
+-------------+-------------+------------+---------------+
| All Genders |             |        478 |               |
|             | Female      |        172 |               |
|             | Male        |        183 |               |
|             | Not Known   |         61 |               |
|             | Unspecified |         62 |               |
So how do I create this calculated measure so that it isn't dependent on which axis the set is placed on?
I'm currently trying to query up a list of the top 15 occurring faults on a PLC in the warehouse. I've gotten that part down:
Select top 15 fault_number, fault_message, count(*) FaultCount
from Faults_Stator
where T_stamp > dateadd(hour, -18, getdate())
Group by Fault_number, Fault_Message
Order by Faultcount desc
However, I now need to find out the accumulated downtime of the faults in the top 15 list; that information lives in another column, "Fault_duration". How would I go about doing this? Thanks in advance, you've all helped me so much already.
+--------------+---------------------------------------------+------------+
| Fault Number | Fault Message | FaultCount |
+--------------+---------------------------------------------+------------+
| 122 | ST10: Part A&B Failed | 23 |
| 4 | ST16: Part on Table B | 18 |
| 5 | ST7: No Spring Present on Part A | 15 |
| 6 | ST7: No Spring Present on Part B | 12 |
| 8 | ST3: No Pin Present B | 8 |
| 1 | ST5: No A Housing | 5 |
| 71 | ST4: Shuttle Right Not Loaded | 4 |
| 144 | ST15: Vertical Cylinder did not Retract | 3 |
| 98 | ST8: Plate Loader Can not Retract | 3 |
| 72 | ST4: Shuttle Left Not Loaded | 2 |
| 94 | ST8: Spring Gripper Cylinder did not Extend | 2 |
| 60 | ST8: Plate Loader Can not Retract | 1 |
| 83 | ST6: No A Spring Present | 1 |
| 2 | ST5: No B Housing | 1 |
| 51 | ST4: Vertical Cylinder did not Extend | 1 |
+--------------+---------------------------------------------+------------+
I know I wouldn't be using the same query, but I'm at a loss as to how to do this next step.
Fault_duration is a column that records how long the fault lasted, in milliseconds. I'm trying to have those durations accumulated next to the corresponding fault, so the first offender would have its 23 individual fault occurrences summed next to it in another column.
You should be able to use the SUM aggregate:
Select top 15 fault_number, fault_message, count(*) FaultCount, SUM(Fault_duration) as FaultDuration
from Faults_Stator
where T_stamp > dateadd(hour, -18, getdate())
Group by Fault_number, Fault_Message
Order by FaultCount desc
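Since Fault_duration is in milliseconds, the accumulated downtime may be easier to read in minutes. A small variation on the query above, assuming Fault_duration is an integer millisecond count:
-- Same top-15 query, adding total downtime in milliseconds and in minutes.
-- Assumes Fault_duration is an integer number of milliseconds.
Select top 15
    fault_number,
    fault_message,
    count(*) as FaultCount,
    SUM(Fault_duration) as FaultDurationMs,
    SUM(Fault_duration) / 60000.0 as FaultDurationMinutes
from Faults_Stator
where T_stamp > dateadd(hour, -18, getdate())
Group by Fault_number, Fault_Message
Order by FaultCount desc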
I want to line up multiple series so that all milestone dates are set to month zero, allowing me to measure the before-and-after effect of the milestone. I'm hoping to be able to do this using SQL server.
You can see an approximation of what I'm starting with at this data.stackexchange.com query. This sample query returns a table that basically looks like this:
+------------+-------------+---------+---------+---------+---------+---------+
| UserID | BadgeDate | 2014-01 | 2014-02 | 2014-03 | 2014-04 | 2014-05 |
+------------+-------------+---------+---------+---------+---------+---------+
| 7 | 2014-01-02 | 232 | 22 | 19 | 77 | 11 |
+------------+-------------+---------+---------+---------+---------+---------+
| 89 | 2014-04-02 | 345 | 45 | 564 | 13 | 122 |
+------------+-------------+---------+---------+---------+---------+---------+
| 678 | 2014-03-11 | 55 | 14 | 17 | 222 | 109 |
+------------+-------------+---------+---------+---------+---------+---------+
| 897 | 2014-03-07 | 234 | 56 | 201 | 19 | 55 |
+------------+-------------+---------+---------+---------+---------+---------+
| 789 | 2014-02-22 | 331 | 33 | 67 | 108 | 111 |
+------------+-------------+---------+---------+---------+---------+---------+
| 989 | 2014-01-09 | 12 | 89 | 97 | 125 | 323 |
+------------+-------------+---------+---------+---------+---------+---------+
This is not what I'm ultimately looking for. Values in month columns are counts of answers per month. What I want is a table with counts under relative month numbers as defined by BadgeDate (with BadgeDate month set to month 0 for each user, earlier months set to negative relative month #s, and later months set to positive relative month #s).
Is this possible in SQL? Or is there a way to do it in Excel with the above table?
After generating this table I plan on averaging relative month totals to plot a line graph that will hopefully show a noticeable inflection point at relative month zero. If there's no apparent bend, I can probably assume the milestone has a negligible effect on the Y-axis metric. (I'm not even quite sure what this kind of chart is called. I think Google might have been more helpful if I knew the proper terms for what I'm talking about.)
Any ideas?
This is precisely what the aggregate functions and case when ... then ... else ... end construct are for:
select
     UserID
    ,BadgeDate
    ,sum(case when convert(char(7), AnswerDate, 120) = '2014-01' then 1 else 0 end) as '2014-01'
     -- etc. for each month column; compare AnswerDate directly if it is already a 'yyyy-MM' string
from Answers            -- placeholder: whatever per-answer table/view feeds the sample query
group by
     UserID
    ,BadgeDate
The PIVOT clause is also available in some flavours and versions of SQL, but it is less flexible in general, so the traditional mechanism is worth understanding.
Likewise, the PivotTable feature in Excel can produce the same report, but there is value in maximally aggregating the data on the server in bandwidth-competitive environments.
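For the relative-month layout the question ultimately asks for, the same aggregation idea can be keyed on the month difference between each answer and the user's BadgeDate instead of on calendar months, and then averaged per relative month for the chart. A minimal sketch, assuming SQL Server; Answers and AnswerDate are placeholder names for whatever per-answer source feeds the sample query:
-- Relative month: 0 = badge month, negative = before, positive = after.
select
     RelativeMonth
    ,avg(1.0 * AnswerCount) as AvgAnswers         -- series to plot against relative month
from (
    select
         UserID
        ,datediff(month, BadgeDate, AnswerDate) as RelativeMonth
        ,count(*)                               as AnswerCount
    from Answers                                  -- placeholder source table
    group by
         UserID
        ,datediff(month, BadgeDate, AnswerDate)
) per_user
group by RelativeMonth
order by RelativeMonth
Plotting AvgAnswers against RelativeMonth should then make any inflection (or lack of one) at relative month zero visible.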
I have seen many similar questions but none that meet my needs exactly, and I cannot seem to deduce a solution on my own from inspecting the other questions.
I have the following (mock) table below. My actual table has many more columns.
TableA:
ID | color | feel | size | alive | age
------------------------------------------
1 | blue | soft | large | true | 36
2 | red | soft | large | true | 36
2 | blue | hard | small | false | 37
2 | blue | soft | large | true | 36
2 | blue | soft | small | false | 39
15 | blue | soft | medium | true | 04
15 | blue | soft | large | true | 04
15 | green | soft | large | true | 15
40 | pink | sticky | large | true | 83
51 | brown | rough | tiny | false | 01
51 | gray | soft | tiny | true | 59
34 | blue | soft | large | true | 02
I want the result to look like:
Result of query on TableA:
ID | color | feel | size | alive | age
-------------------------------------------
1 | blue | soft | large | true | 36
2 | red | soft | large | true | 36
15 | blue | soft | medium | true | 04
40 | pink | sticky | large | true | 83
51 | brown | rough | tiny | false | 01
34 | blue | soft | large | true | 02
I want one row for every unique value in the ID column. I need the other columns returned in my result set, but I do not want to filter on them; I just need one row for every unique ID, and I do not care which row.
In my example, I selected the first row of every unique ID.
I have tried variations of
select *
from TableA
group by ID having ID = max(ID)
Most examples I have seen with group by and max and/or min functions involve only 2 columns. I have many more columns, however.
I have also seen examples using CTE, but I am not using SQL Server (I am using Sybase).
How can I achieve the result set described?
EDIT
We are using Sybase version 15.1.
Your solution with MIN has a drawback: it doesn't return a specific row, but the MIN values from each group of rows, so the result can contain rows that don't exist in the database. Is that OK for you?
ROW_NUMBER is supported in Sybase 15.2:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc38151.1520/html/iqrefbb/iqrefbb262.htm
Unfortunately, it may not be supported in 15.1. In that case you can use an identity column and a temporary table to achieve what you want, as sketched below.
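A rough sketch of that temp-table approach, in case ROW_NUMBER really isn't available; the exact identity() syntax varies between Sybase versions, so treat this as an outline rather than tested code:
-- 1. Copy the rows into a temp table, numbering them with an identity column.
select TableA.*, rn = identity(10)
into #TempA
from TableA

-- 2. Keep only the lowest-numbered row for each ID.
select a.ID, a.color, a.feel, a.size, a.alive, a.age
from #TempA a
join (select ID, min(rn) as first_rn
      from #TempA
      group by ID) f
  on f.ID = a.ID
 and f.first_rn = a.rn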
There are a variety of ways to do this. If you have a more recent version of Sybase, you can use row_number():
select t.*
from (select t.*, row_number() over (partition by id order by id) as seqnum
      from TableA t
     ) t
where seqnum = 1;
The solution I have come up with is below.
It "feels" like a poor solution - I am still open to new answers:
SELECT
ID,
min(color),
min(feel),
min(size),
min(alive),
min(age)
FROM TableA
group by ID
I do not like how verbose I am with the application of the min function to every column, but this returns the desired result set.
Is it possible to have a sum in the detail band in iReport?
It is important that cells are merged vertically after export to Excel, like this:
-----------------------------
| id | year | value | sum |
-----------------------------
| | 2010 | 55 | |
| 1 | 2011 | 65 | 180 |
| | 2012 | 60 | |
-----------------------------
| 2 | 2010 | 70 | 70 |
-----------------------------
My idea was to give the main query a GROUP BY clause and, for "year" and "value", use a table component with another query. The problem is that my query is long-running and I need to have only one query in the whole report.
First, have a look here; it's about grouping rows.
You will see that you should create a group in your report, not in the query, based on your id field.
To calculate the sum field, drag the value field into the column footer; a pop-up menu will appear. Select the "result of an aggregation function" radio button, then choose the Sum function. This creates a variable that calculates the sum of the value field. Change this variable's reset type to Group (the id_group). Use this variable in your sum field.
To group rows by id, click on the sum field and set its "print when group changes" property to id_group.
This should help :)
When you group your fields, your table will look like this (the grouped fields are at the top):
-----------------------------
| id | year | value | sum |
-----------------------------
| 1 | 2010 | 55 | 180 |
| | 2011 | 65 | |
| | 2012 | 60 | |
-----------------------------
| 2 | 2010 | 70 | 70 |
-----------------------------
I have a report in Reporting Services. In this report I am displaying the Top N values, but my Grand Total displays the sum of all the values.
Right now I am getting something like this (here N = 2):
+-------+------+-------------+
| Area  | ID   |       Count |
+-------+------+-------------+
| - A   |      |           4 |
|       | a1   |           1 |
|       | b1   |           1 |
|       | c1   |           1 |
|       | d1   |           1 |
|       |      |             |
| - B   |      |           3 |
|       | a2   |           1 |
|       | b2   |           1 |
|       | c2   |           1 |
|       |      |             |
| Grand |      |          10 |
| Total |      |             |
+-------+------+-------------+
The correct Grand Total should be 7 instead of 10. A and B are toggle items (you can expand and collapse them).
How can I display the correct Grand Total using Top N filter?
I also want to use the filter in the report and not in the SQL query.
You should apply the filter on the dataset. Filtering the report object itself only turns off the items' (for example, rows') visibility; the item/row itself will still be part of the group and will be used in calculations.
I found a way to solve my question. As Ido said, I worked on the dataset. I am using an Analysis Services cube, so in this cube I created a named set calculation.
In this set I used the TopCount() function, which filters down to the top N values, where N is an integer of your choice.
So the final named set in this case is:
TopCount([Dim Area].[Area].[Area], 2, ([Measures].[Count]))
This gives you the grand total of only the Top N filtered values.