First off, I've got a table like this:
vID  bID  date        type  value
1    100  22.01.2021  o     250.00
1    110  25.01.2021  c     100.00
2    120  13.02.2021  o     400.00
3    130  20.02.2021  o     475.00
3    140  11.03.2022  c     75.00
1    150  15.03.2022  o     560.00
To show which values were ordered (o) and charged (c) per month, I have to 'generate' columns for each month, both ordered and charged, in an MSSQL SELECT query.
Here is an example table of what I want to get:
vID  JAN2021O  JAN2021C  FEB2021O  FEB2021C  …  MAR2022O  MAR2022C
1    250.00    100.00                           560.00
2                        400.00
3                        475.00                 75.00
I need a way to join this in an SQL SELECT alongside some other columns I already have.
Does anyone have an idea and could help me, please?
The SQL language has a very strict requirement to know the number of columns in the results and the type of each column at query compile time, before looking at any data in the tables. This applies even to SELECT * and PIVOT queries, where the columns are still determined at query compile time via the table definition (not data) or SQL statement.
Therefore, what you want to do is only possible in a single query if you want to show a specific, known number of months from a base date. In that case, you can accomplish this by specifying each column in the SQL and using date math with conditional aggregation to figure the value for each of the months from your starting point. The PIVOT keyword can help reduce the code, but you're still specifying every column by hand, and the query will still be far from trivial.
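For illustration, here is a minimal sketch of that conditional-aggregation approach for a fixed range of months; the table name t is an assumption (the question doesn't give one), and [date] is assumed to be a date column:

SELECT vID,
       SUM(CASE WHEN [date] >= '20210101' AND [date] < '20210201'
                AND [type] = 'o' THEN [value] END) AS JAN2021O,
       SUM(CASE WHEN [date] >= '20210101' AND [date] < '20210201'
                AND [type] = 'c' THEN [value] END) AS JAN2021C,
       -- ...one 'o' and one 'c' expression per month, written out by hand...
       SUM(CASE WHEN [date] >= '20220301' AND [date] < '20220401'
                AND [type] = 'o' THEN [value] END) AS MAR2022O,
       SUM(CASE WHEN [date] >= '20220301' AND [date] < '20220401'
                AND [type] = 'c' THEN [value] END) AS MAR2022C
FROM t
GROUP BY vID;

Each vID collapses to one row, and month/type combinations with no data are left NULL, matching the example output above.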
If you do not have a specific, known number of months to evaluate, you must do this over several steps:
1. Run a query to find out how many months you have.
2. Use the result from step 1 to dynamically construct a new statement.
3. Run the statement constructed in step 2.
There is no other way.
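Here is a sketch of those three steps in T-SQL. It is not a drop-in solution: it assumes the same hypothetical table t, and STRING_AGG requires SQL Server 2017 or later:

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Step 1: read the distinct months from the data and build one
-- ordered/charged column pair per month.
SELECT @cols = STRING_AGG(CONVERT(nvarchar(max),
           'SUM(CASE WHEN [type] = ''o'' AND EOMONTH([date]) = '''
           + CONVERT(char(10), mo, 120)
           + ''' THEN [value] END) AS '
           + QUOTENAME(UPPER(FORMAT(mo, 'MMMyyyy', 'en-US')) + 'O')
           + ', SUM(CASE WHEN [type] = ''c'' AND EOMONTH([date]) = '''
           + CONVERT(char(10), mo, 120)
           + ''' THEN [value] END) AS '
           + QUOTENAME(UPPER(FORMAT(mo, 'MMMyyyy', 'en-US')) + 'C')
       ), ', ') WITHIN GROUP (ORDER BY mo)
FROM (SELECT DISTINCT EOMONTH([date]) AS mo FROM t) AS months;

-- Step 2: splice the generated column list into a new statement.
SET @sql = N'SELECT vID, ' + @cols + N' FROM t GROUP BY vID;';

-- Step 3: run it.
EXEC sp_executesql @sql;

The EXEC runs a statement whose column list was derived from the data in step 1, which is exactly why this cannot be a single static query.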
Even then, this kind of pivot is usually better handled in the client code or reporting tool (at the presentation level) than via SQL itself.
It's not as likely to come up for this specific query, but you should also be aware that this kind of dynamic SQL can raise security issues: some of the normal mechanisms that protect against injection aren't available as you build the new query in step 2 (you can't parameterize the names of the source columns, which depend on data that might be user-generated).
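One partial mitigation, as an aside of my own: pass every data-derived identifier through QUOTENAME() while assembling the statement (as the sketch above does for the generated column aliases), so a value containing a bracket or quote cannot break out of the identifier and into the statement.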
I am stuck on something that I have never needed in my 10 years of SQL. I thought it would be useful if there were some way of doing this. Firstly, I am running SQL Server Express (latest free version) on Windows. To talk to the database I am using SSMS.
There are three tables/queries.
One table (A) has a single data value I want to pull through.
Two tables, (B) and (C), have multiple values.
The column common to all tables is CAMPAIGN NAME.
The column common to (B) and (C) is PRODUCT NAME.
This is an example of the data, and the OUTPUT GOAL; both were shown as images in the original post.
I have tried the following:
UNION ALL (but this does not help when I want to calculate AMOUNT - MARKETING - TOTAL INVESTMENT).
I tried PARTITION (but I simply could not get it to work).
If I use joins, they bring through a head count / total investment and a marketing cost per product, and when using SUM this produces incorrect values for head count / total investment and marketing cost versus total amount and quantity.
I tried splitting the costs based on Quantity / Total Quantity or Amount / Total Amount, but the cost associated with the product is not correct, or not directly related to the product, this way.
Am I trying to do something impossible, or is there a way to do this in SQL?
The following comes pretty close to what you want:
select ...  -- select the columns you want here
from a
join b
  on b.campaign_name = a.campaign_name
join c
  on c.campaign_name = b.campaign_name
 and c.product_name = b.product_name;
This produces a result set with a separate row for each campaign/product.
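Building on that, one hedged way to avoid the double counting described in the question (campaign-level figures repeated onto every product row and then inflated by SUM) is to aggregate the product-level tables up to campaign level before joining. The column names here (amount, quantity, marketing, total_investment) are assumptions, since the post shows the data only as an image:

select a.campaign_name,
       a.total_investment,
       b.total_amount,
       b.total_quantity,
       c.total_marketing,
       b.total_amount - c.total_marketing - a.total_investment as net
from a
join (select campaign_name,
             sum(amount) as total_amount,
             sum(quantity) as total_quantity
      from b
      group by campaign_name) as b
  on b.campaign_name = a.campaign_name
join (select campaign_name,
             sum(marketing) as total_marketing
      from c
      group by campaign_name) as c
  on c.campaign_name = a.campaign_name;

Pre-aggregating keeps each campaign-level figure counted exactly once, instead of once per product row.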
I have a fact/dim combination in OBIEE that looks something like this:
Order_number, Order_Quantity
1234, 150
2345, 80
3456, 20
4567, 50
What I would like to do is create a report that aggregates the total number of orders with quantities in the defined 'bins'. For example, there are 3 orders with less than 100 quantity, and one with greater than 100:
Quantity_Bin, # Orders
>100, 1
<100, 3
I can do this quite easily using a 'CASE WHEN' statement and a pivot table; however, that requires me to include the 'order_number' field on the report. The problem is that the table has 1 million+ rows, which are all returned to the presentation server even though they aren't displayed on the report. Can I tell OBIEE to do this calculation/aggregation without returning a row for every order_number?
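In plain SQL terms, the aggregation I'm hoping gets pushed down to the database is just this (a sketch against a hypothetical orders_fact table):

select case when Order_Quantity > 100 then '>100' else '<100' end as Quantity_Bin,
       count(*) as Orders
from orders_fact
group by case when Order_Quantity > 100 then '>100' else '<100' end;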
Already responded on the OTN forums...
What I am trying to do is fairly simple: I just want to add a row number to a query. Since this is Access, it is a bit more difficult than in other SQL dialects, but under normal circumstances it is still doable using solutions such as DCount or SELECT COUNT(*); examples here: How to show row number in Access query like ROW_NUMBER in SQL or Access SQL how to make an increment in SELECT query
My Issue
My issue is I'm trying to add this counter to a multi-join query that orders by fields from numerous tables.
Troubleshooting
My code is a bit ridiculous (19 fields, seven of which are long expressions, from 9 different joined tables, and ordered by fields from 5 of those tables). To keep things simple, I have a simplified example query below:
Example Query
SELECT DCount("*", "Requests_T", "[Requests_T].[RequestID]<=" & [Requests_T].[RequestID]) AS Counter,
       Requests_T.RequestHardDeadline AS Deadline,
       Requests_T.RequestOverridePriority AS Priority,
       Requests_T.RequestUserGroup AS [User Group],
       Requests_T.RequestNbrUsers AS [Nbr of Users],
       Requests_T.RequestSubmissionDate AS [Submitted on],
       Requests_T.RequestID
FROM ((Requests_T
INNER JOIN ENUM_UserGroups_T ON ENUM_UserGroups_T.UserGroups = Requests_T.RequestUserGroup)
INNER JOIN ENUM_RequestNbrUsers_T ON ENUM_RequestNbrUsers_T.NbrUsers = Requests_T.RequestNbrUsers)
INNER JOIN ENUM_RequestPriority_T ON ENUM_RequestPriority_T.Priority = Requests_T.RequestOverridePriority
ORDER BY Requests_T.RequestHardDeadline, ENUM_RequestPriority_T.DisplayOrder DESC, ENUM_UserGroups_T.DisplayOrder, ENUM_RequestNbrUsers_T.DisplayOrder DESC, Requests_T.RequestSubmissionDate;
If the code above is trying to select a field from a table not included, I apologize; just trust that the field comes from somewhere (lol, i.e. one of the other joins I excluded to simplify the query). A great example of this is the .DisplayOrder fields used in the ORDER BY expression. These are fields from a table that simply determines the "priority" of an enum. Example: Requests_T.RequestOverridePriority displays to the user as a combobox option of "Low", "Med", or "High". So in a table, I assign numerical priorities of "1", "2", and "3" to these options, respectively. Thus, when ENUM_RequestPriority_T.DisplayOrder DESC is used in the ORDER BY, all "High" priority requests display above "Medium" and "Low". The same holds true for ENUM_UserGroups_T.DisplayOrder and ENUM_RequestNbrUsers_T.DisplayOrder.
I'd also prefer NOT to use DCount, for efficiency, and rather do something like:
(select count(*) from Requests_T t2 where t2.RequestID <= Requests_T.RequestID) as Counter
Due to the ORDER BY expression, however, my Counter doesn't actually number my resulting rows sequentially, since both of my examples are tied to the RequestID.
Example Results
Based on my actual query results, I've made an example result of the query above.
Counter  Deadline    Priority  User_Group  Nbr_of_Users  Submitted_on  RequestID
5        12/01/2016  High      IT          2-4           01/01/2016    5
7        01/01/2017  Low       IT          2-4           05/06/2016    8
10                   Med       IT          2-4           07/13/2016    11
15                   Low       IT          10+           01/01/2016    16
8                    Low       IT          2-4           01/01/2016    9
2                    Low       IT          2-4           05/05/2016    2
The query is displaying my results in the proper order (those with the nearest deadline at the top, then those with the highest priority, then user group, then number of users, and finally, if all else is equal, sorted by submission date). However, my Counter values are completely wrong! The Counter field should simply increment by 1 for each new row. Thus, if displaying a single request on a form for a user, I could say
"You are number: Counter [associated to RequestID] in the
development queue."
Meanwhile my results:
Aren't sequential (notice the first four display sequentially, but the final two rows don't)! Even though the final two rows are lower in priority than the records above them, they ended up with lower Counter values simply because they had lower RequestIDs.
They don't start at "1" and increment +1 for each new record.
Ideal Results
Thus my ideal result from above would be:
Counter  Deadline    Priority  User_Group  Nbr_of_Users  Submitted_on  RequestID
1        12/01/2016  High      IT          2-4           01/01/2016    5
2        01/01/2017  Low       IT          2-4           05/06/2016    8
3                    Med       IT          2-4           07/13/2016    11
4                    Low       IT          10+           01/01/2016    16
5                    Low       IT          2-4           01/01/2016    9
6                    Low       IT          2-4           05/05/2016    2
I'm spoiled by PL/SQL and other software where this would be automatic, lol. This is driving me crazy! Any help would be greatly appreciated.
FYI: I'd prefer an SQL option over VBA if possible. VBA is very much welcome and will definitely get an upvote and my huge thanks if it works, but I'd like to mark an SQL option as the answer.
Unfortunately, MS Access doesn't have the very useful ROW_NUMBER() function that other database engines do. So we are left to improvise.
Because your query is so complicated and MS Access does not support common table expressions, I recommend a two-step process. First, save the query you already wrote as IntermediateQuery. Then write a second query called FinalQuery that does the following:
SELECT i1.field_primarykey, i1.field2, ... , i1.field_x,
       (SELECT COUNT(*) FROM IntermediateQuery i2
        WHERE i2.field_primarykey <= i1.field_primarykey) AS Counter
FROM IntermediateQuery i1
ORDER BY Counter;
The unfortunate side effect is that the more data your query returns, the longer the inline subquery will take to calculate. However, this is the only way you'll get your row numbers. It does depend on having a primary key; in this particular case it doesn't have to be an explicitly defined primary key, just a field or combination of fields that is completely unique for each record.
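A hedged variation, in case the counter must follow the report's sort order rather than the primary key: have IntermediateQuery expose one sortable expression (call it SortKey, a hypothetical field built by concatenating the ORDER BY fields in sortable form) and compare on that, using the primary key only as a tie-breaker:

SELECT i1.field_primarykey, i1.field2, ... , i1.field_x,
       (SELECT COUNT(*) FROM IntermediateQuery i2
        WHERE i2.SortKey < i1.SortKey
           OR (i2.SortKey = i1.SortKey
               AND i2.field_primarykey <= i1.field_primarykey)) AS Counter
FROM IntermediateQuery i1
ORDER BY Counter;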
I have a report with a static choice on the prompt page. The user can choose 'Full Detail', or 'Summarised'.
For a simplified example, say my report has these columns: Customer, Product, Date, Quantity, Value.
I would like to be able to show/hide the Date column based on the detail level choice, and have the Quantity and Value columns aggregate into a single Customer/Product line. I know how to show/hide the column (tying the choice variable to the column's Render Variable), but this does not do the aggregation, only makes the column invisible.
I have thought about doing a separate report page for Full Detail and Summary, but in my actual report I have a second choice box with which the user can choose a field to summarise by (e.g. Customer or Product), and the report will section-group by that field. At the moment I am doing that one per page (5 of them). Doing the detail choice the same way would mean I would need 10 pages. There is surely a better way.
Full detail:
Customer  Product  Date        Qty  Value
ABCD      Things   22/10/2014  10   1.00
                   21/10/2014  40   4.00
                   23/10/2014  50   5.00
Summarised (How it looks at the moment, after hiding the Date column):
Customer  Product  Qty  Value
ABCD      Things   10   1.00
                   40   4.00
                   50   5.00
Summarised (How I would like it to look):
Customer  Product  Qty  Value
ABCD      Things   100  10.00
I am using Cognos Report Studio 10.1.1
You should not just hide the column; you should also set the same value for this column in all rows.
Instead of just [Date], set the column's expression to
if (?HideDate? = 1) then ('') else ([Date])
or, if you prefer CASE,
case ?HideDate? when 1 then '' else [Date] end
Replace ?HideDate? = 1 with your own condition.
Alexey's answer is great if you are using the standard Cognos 'auto-group and summarize' functionality.
If you have a custom aggregate that includes the [Date] column in its definition, you might squeeze a bit of performance gain out of modifying the aggregate function itself to disregard the [Date] column when a summarized total is desired.
If your aggregate function was:
total([Value] for [Customer],[Product],[Date])
...you might change this to a CASE statement like so:
CASE ?HideDate?
WHEN 1 then total([Value] for [Customer],[Product])
ELSE total([Value] for [Customer],[Product],[Date])
END
The data items after a 'for' clause usually end up in a GROUP BY clause in the resultant SQL. Limiting the items grouped, when possible, can help performance. In this case the performance improvement would likely be slight since there will only be one distinct value in Alexey's solution, but it's something to consider.
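To make that concrete, the generated SQL for the two branches would differ roughly like this; the table and column names are hypothetical, since Cognos writes its own SQL:

-- Summarised branch: [Date] dropped from the grouping
SELECT Customer, Product, SUM(Value) AS Value
FROM sales_fact
GROUP BY Customer, Product;

-- Full-detail branch: [Date] kept in the grouping
SELECT Customer, Product, [Date], SUM(Value) AS Value
FROM sales_fact
GROUP BY Customer, Product, [Date];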
Anyone have advice on how to build an average measure that is dynamic -- it doesn't specify a particular slice but instead uses your current view? I'm working within a front-end OLAP viewer (Strategy Companion) and I need a "dynamic" implementation based on the dimensions that are currently filtered in the data view.
My fact table looks something like this:
Key  AmountA  IndicatorA  AmountB  Other Data
1    5        1           null     25
2    6        1           null     52
3    7        1           2        106
4    null     0           4        108
Now I can specify a simple average for "[Measures].[AmountA]" with "[Measures].[AmountA] / [Measures].[IndicatorA]" which works great - "[IndicatorA]" sums up to the number of non-null values of "[AmountA]". And this also works great no matter what dimensions are selected in the view - it always divides by the count of rows that have been filtered in.
But what about [AmountB]? I don't have a null-indicator column for it. I want to get an average value of [AmountB] for whatever rows have been filtered in for my current view. If I try to use the count of rows as a simple formula (pseudo-code: "[Measures].[AmountB] / Count([Measures].[Key])") I get the wrong result, because it is counting all the null rows in the average.
So, I need a way to use the AVG function to specify the average of [AmountB] over the set of "whatever rows I'm currently filtering in, based on whatever dimensions I'm currently using". How do I specify this dynamic set?
I've tried several different uses of the AVG function and they have either returned null or summed up to huge numbers, clearly not the average I'm looking for.
Thanks-
Matt
Sorry, my first suggestion was wrong. If you don't have access to the OLAP cube itself, you can't write an MDX query for this purpose (IMHO), because at that access level you don't have the detailed data from your fact table; you can only use the aggregated data and dimensions the cube exposes.
Otherwise (if you do have access to the OLAP database), you can create this metric (a count of non-NULL rows) in your measure group and then use it for the AVG calculation, either as a calculated member in your cube or in the WITH section of your MDX query.
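For example, assuming a new measure [Measures].[IndicatorB] has been added to the measure group as the count of non-NULL AmountB rows (mirroring the existing IndicatorA pattern), the calculated member might look like this sketch; the cube name is hypothetical:

WITH MEMBER [Measures].[Avg AmountB] AS
    IIF([Measures].[IndicatorB] = 0, NULL,
        [Measures].[AmountB] / [Measures].[IndicatorB])
SELECT { [Measures].[Avg AmountB] } ON COLUMNS
FROM [YourCube]  -- hypothetical cube name

The IIF guards against dividing by zero when every AmountB in the current slice is NULL, and the ratio respects whatever dimension filters are in effect, just like the IndicatorA trick in the question.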