I am using Visual Studio 2010 and SQL Server 2012.
I have searched the net and the stackoverflow archives and found a few others who have had this problem but I could not get their solutions to work for me. The problem involves filtering based on the aggregate value of a dataset using a user definable report parameter.
I have the following report.
The ultimate goal of this report is to filter portfolios that have a user-definable percentage of cash as a percent of total assets. The default will be 30% or greater. So, for example, if a portfolio's total market value was $100,000 and its cash asset class was $40,000, this would be a cash percent of 40% and the portfolio would appear on the report.
I have been able to select just the cash asset class using a filter on the dataset itself so that is not an issue. I easily added a cash percent parameter to the dataset but soon realized this is filtering on the row detail not the aggregated sum, and sometimes portfolios have multiple cash accounts. I need the sum of all cash accounts so I can truly know if cash is 30% or greater of total market value.
At first I thought the report was working correctly.
But after cross-referencing this against another report I realized this portfolio only has 2.66% total cash, because it has a large negative balance in a second cash account as well. It's the sum of all cash accounts I care about.
I want to filter portfolios that have >= the cash threshold based on the total line, not the detail lines. I suspect I may need to alter the dataset using the aggregate function SUM() and then build a parameter off that, but I have not had success writing a query to do it. I would also be very interested to know if this can somehow be done in the .rdl layer rather than at the SQL dataset level. The SQL for this is a little complicated because the program being reported on requires the use of SQL functions, stored procedures, and parameters.
If the solution involves altering the query to include SUM() in the dataset itself, I suspect it is line 20 that needs to be summed:
a.PercentAssets,
Here is the data set for the report.
https://www.dropbox.com/s/bafdo2i6pfvdkk4/CashPercentDataSet.sql
and here is the .rdl file.
https://www.dropbox.com/s/htg09ypyh7f1a98/cashpercent2.rdl
Thank you
I would do this in SQL personally. You can use a SUM() aggregate to get the value you're after for each account. You won't want to sum the percents, though; you'll want to SUM the values and then compute the percent afterwards, once you have the totals. A sum of row-level percentages isn't the same as a percentage computed from the sums (except in some circumstances, but we can't assume that here).
I would do something like this, though you'll probably need to tweak it to get it working correctly for you (I also may not fully understand the meaning of all the columns you're using).
SELECT PortfolioBaseCode,totalValue,cashValue,
(CAST(cashValue AS FLOAT)/totalValue) * 100 AS pcntCash
FROM (
SELECT b.PortfolioBaseCode,
SUM(a.MarketValue) AS totalValue,
SUM(CASE s.SecuritySymbol WHEN 'CASH' THEN a.MarketValue ELSE 0 END) AS cashValue
FROM APXUser.fAppraisal (@ReportData) a
LEFT JOIN APXUser.vPortfolioBaseSettingEx b ON b.PortfolioBaseID = a.PortfolioBaseID
LEFT JOIN APXUser.vSecurityVariant s ON s.SecurityID = a.SecurityID
AND s.SecTypeCode = a.SecTypeCode
AND s.IsShort = a.IsShortPosition
AND a.PercentAssets >= @PercentAssets
GROUP BY b.PortfolioBaseCode
) AS t
WHERE (CAST(cashValue AS FLOAT)/totalValue) * 100 >= @pcntParameter
Alternatively, you can use the HAVING clause to filter on aggregate functions the way a WHERE clause filters rows (though I find this a little less readable):
SELECT b.PortfolioBaseCode,
SUM(a.MarketValue) AS totalValue,
SUM(CASE s.SecuritySymbol WHEN 'CASH' THEN a.MarketValue ELSE 0 END) AS cashValue,
(CAST(SUM(CASE s.SecuritySymbol WHEN 'CASH' THEN a.MarketValue ELSE 0 END) AS FLOAT)/SUM(a.MarketValue))*100 AS cashPcnt
FROM APXUser.fAppraisal (@ReportData) a
LEFT JOIN APXUser.vPortfolioBaseSettingEx b ON b.PortfolioBaseID = a.PortfolioBaseID
LEFT JOIN APXUser.vSecurityVariant s ON s.SecurityID = a.SecurityID
AND s.SecTypeCode = a.SecTypeCode
AND s.IsShort = a.IsShortPosition
AND a.PercentAssets >= @PercentAssets
GROUP BY b.PortfolioBaseCode
HAVING (CAST(SUM(CASE s.SecuritySymbol WHEN 'CASH' THEN a.MarketValue ELSE 0 END) AS FLOAT)/SUM(a.MarketValue))*100 >= @pcntParameter
Basically, you can use grouping and aggregate functions to get the total value and the cash value for each account and then calculate the proper percentage from those. This will take into account any negatives, etc., in other holdings.
Some references for aggregate function and grouping:
MSDN - GROUP BY (TSQL)
MSDN - Aggregate Functions
MSDN - HAVING Clause
If you find you need to do some kind of aggregation but you cannot group your results, look into the T-SQL window functions, which are very handy (there is a sketch after these references):
Working with Window Functions in SQL Server
MSDN - OVER Clause
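For instance, here is a minimal sketch of the window-function approach; the Holdings table and its columns are invented stand-ins for the appraisal dataset, not your actual schema:

SELECT PortfolioBaseCode,
       SecuritySymbol,
       MarketValue,
       -- portfolio total repeated on every detail row, no GROUP BY needed
       SUM(MarketValue) OVER (PARTITION BY PortfolioBaseCode) AS totalValue,
       -- portfolio cash total, also available on every detail row
       SUM(CASE SecuritySymbol WHEN 'CASH' THEN MarketValue ELSE 0 END)
           OVER (PARTITION BY PortfolioBaseCode) AS cashValue
FROM Holdings; -- hypothetical table name

You could then wrap this in an outer query and filter on CAST(cashValue AS FLOAT)/totalValue while still returning the individual detail lines.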
Related
Good evening. Could someone please help me with the following? I am trying to join two tables. The first is wbr_global.gl_ap_details; this stores historic GL information. The second table, sandbox.utr_fixed_mapping, is where account mapping is stored. For example, an account number 60820 is mapped as Employee Relation. The first table needs the mapping from the second table, linked on the account number. The output I am getting is not right and way too big. Any help would be appreciated!
select sandbox.utr_fixed_mapping_na.new_mapping_1,sum(wbr_global.gl_ap_details.amount)
from wbr_global.gl_ap_details
LEFT JOIN sandbox.utr_fixed_mapping_na ON wbr_global.gl_ap_details.account_number = sandbox.utr_fixed_mapping_na.account_number
Where gl_ap_details.cost_center = '1172'
and gl_ap_details.period_name = 'JUL-21'
and gl_ap_details.ledger_name = 'Amazon.com, Inc.'
Group by 1;
I tried adding the CAST function, but after 5000 seconds of the query running I cancelled it.
The query itself appears OK but needs minor changes. Learn to use table aliases; this way you don't have to keep typing the long database.table.column references all over, and the SQL is easier to read that way anyhow.
Notice the aliases "gl" and "fm" declared after the tables below; these aliases are then used to qualify the columns. Easier to read, wouldn't you agree?
I have also added the GL account number, for the reason described below the query.
select
gl.account_number,
fm.new_mapping_1,
sum(gl.amount)
from
wbr_global.gl_ap_details gl
LEFT JOIN sandbox.utr_fixed_mapping_na fm
ON gl.account_number = fm.account_number
Where
gl.cost_center = '1172'
and gl.period_name = 'JUL-21'
and gl.ledger_name = 'Amazon.com, Inc.'
Group by
gl.account_number,
fm.new_mapping_1
Now, as for your query returning null: this just means there are records within the gl_ap_details table with an account number that is not found in the utr_fixed_mapping_na table. So, to see WHICH GL account numbers do NOT exist there, I have added the account number to the query. It's possible there are MULTIPLE records in gl_ap_details that are not found in the mapping table, so you may get:
GLAccount Description SumOfAmount
glaccount1 null $someAmount
glaccount37 null $someAmount
glaccount49 null $someAmount
glaccount72 Depreciation $someAmount
glaccount87 Real Estate $someAmount
glaccount92 Building $someAmount
glaccount99 Salaries $someAmount
I obviously made up the glaccount values just to show the purpose. You may have multiple missing ones, where the nulls' total amount is actually masking how many different GL account numbers were NOT found.
Once you find which are missing, you can check / confirm they SHOULD be in the mapping table.
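As a quick check, an anti-join (a sketch reusing the filters from your own query) lists only the unmapped account numbers:

select distinct
gl.account_number
from
wbr_global.gl_ap_details gl
LEFT JOIN sandbox.utr_fixed_mapping_na fm
ON gl.account_number = fm.account_number
Where
gl.cost_center = '1172'
and gl.period_name = 'JUL-21'
and gl.ledger_name = 'Amazon.com, Inc.'
and fm.account_number is null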
FEEDBACK.
Since you are now aware of the missing numbers, let's consider a Cartesian result. If there are multiple entries in the mapping table for the same G/L account number, you will get a Cartesian result, thus bloating your numbers. To clarify, let's say your mapping table has:
Mapping file.
GL Descr1 NewMapping
1 test Salaries
1 testView Buildings
1 Another Depreciation
And your GL_AP_Details has
GL Amount
1 $100
Your total for the query would result in $300, because the query is trying to join the AP Details GL #1 to EACH of the entries in the mapping file, thus bloating the amount. You could also add a COUNT(*) AS NumberOfEntries to the query to see how many transactions it THINKS it is processing. Is there some "unique ID" in the GL_AP_Details table? If so, you could also do a count of DISTINCT ID values. If they are different (distinct is lower than the number of entries), I think THAT is your culprit.
select
fm.new_mapping_1,
sum(gl.amount),
count(*) as NumberOfEntries,
count( distinct gl.UniqueIdField ) as DistinctTransactions -- substitute your table's actual unique ID column
from
wbr_global.gl_ap_details gl
LEFT JOIN sandbox.utr_fixed_mapping_na fm
ON gl.account_number = fm.account_number
Where
gl.cost_center = '1172'
and gl.period_name = 'JUL-21'
and gl.ledger_name = 'Amazon.com, Inc.'
Group by
fm.new_mapping_1
Might you also need to limit the mapping table for a specific prophecy or mec view?
If you "think" that the result of an aggregate is wrong, then the easiest way to verify this is to select the individual rows that correlate to 1 record in the aggregate output and inspect the records, looking for duplications.
For instance, pick 'Building Management':
SELECT fixed.new_mapping_1,details.amount,*
FROM wbr_global.gl_ap_details details
LEFT JOIN sandbox.utr_fixed_mapping_na fixed ON details.account_number = fixed.account_number
WHERE details.cost_center = '1172'
AND details.period_name = 'JUL-21'
AND details.ledger_name = 'Amazon.com, Inc.'
AND fixed.new_mapping_1 = 'Building Management'
Notice that we tack a ,* onto the end of the projection; this will show you everything the query has access to. You should look for repeating sections of data that you were not expecting; then, depending on which table they originate from, you might add additional criteria to the JOIN or to the WHERE, or you might need to group by additional columns.
This type of issue is really hard to comment on in a forum like this because it is highly specific to your schema, and the data contained within it, making solutions highly subjective to criteria you are not likely to publish online.
Generally, if you think a calculation is wrong, you need to manually compute it to verify it. The advice above helps you inspect the data your query is using; you should either construct your own query or use other tools to build the data set that helps you manually compute the correct values, then work them back into, or replace, your original query.
The speed issues are out of scope here; we could comment on the poor schema design, but I suspect you don't have a choice. In the utr_fixed_mapping_na table you should make account_number the same column type as the source data, or add a new column that holds the data in the original type; then you can set up indexes on the columns to improve the speed of the join.
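A sketch of that last suggestion, assuming the source column is an integer and your engine supports indexes (exact syntax varies slightly by database):

-- add a copy of the mapping key in the source data's type (INT assumed here)
ALTER TABLE sandbox.utr_fixed_mapping_na ADD COLUMN account_number_int INT;

UPDATE sandbox.utr_fixed_mapping_na
SET account_number_int = CAST(account_number AS INT);

-- index the typed column so the join avoids per-row conversions
CREATE INDEX ix_utr_fixed_mapping_na_account
ON sandbox.utr_fixed_mapping_na (account_number_int);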
DAX, Excel 2013 standalone Power Pivot.
I have a sales table with Product and Brand columns, and a Sales measure which explicitly sums up the sales column.
Task in hand: I need to create one measure, RANK, which would:
if Product is filtered explicitly, return the count of products that have a higher-or-equal sales amount, divided by the total count of products;
if it's a subtotal at brand level, show the same but for brands.
My current approach uses RANK and then MAXX of the rank, which seems to work but is a no-go: a slow nightmare, and Excel runs out of memory.
Research: it's been a week. The most relevant post I found anywhere is this question here, but it's in MDX.
In my example picture, I'm showing Excel formulas with which I can get to the result. Ideally there shouldn't be any helpers: one formula for all, i.e.:
RANK:=IF( HASONEFILTER(PRODUCTS[PRODUCT]), HELPER_PROD, HELPER_BRAND )
where the HELPER_PROD part would be something like this; I need to find a way to refer to the "current" result in the pivot table, the way Excel does using [#[...]]:
HELPER_PROD:=COUNTX(ALL(PRODUCTS), [SALES]>=[#[SALES]]) / COUNTX(ALL(PRODUCTS))
HELPER_BRAND:=COUNTX(
DISTINCT(ALL(PRODUCTS[BRAND])),
[SALES]>=[#[SALES]]) /
COUNT(DISTINCT(ALL(PRODUCTS[BRAND])))
You can use the EARLIER function to compare with the current record.
ProductsWithHigherSales:=CALCULATE( COUNTROWS(Sales),
    FILTER( ALL(Sales),
        COUNTROWS( FILTER(Sales, Sales[Sales] <= EARLIER(Sales[Sales])) ) > 0
    ))
Using Earlier function in measures: can-earlier-be-used-in-dax-measures
Used workbook: Excel File
First post; go easy on me.
Relatively new to SQL (anything beyond simple queries really), but attempting to learn more complex functions in an effort to take advantage of superior server resources. My issue:
I would like to use a SUM function to aggregate cash flows across a very large variety of sources. I would like to see these cash flows along a monthly time period. Because the cash flows start at different times, I would like to season them so that they are all aligned. My current code:
select
months_between(A.reporting_date, B.start_date) as season,
sum(case when A.current_balance is null then B.original_balance
else A.current_balance end) as cashflow
from dataset1 A, dataset2 B
group by season
order by season
Now, executing the code like this generates an error message stating that A.reporting_date and B.start_date must be GROUPED or part of an AGGREGATE function.
The problem is, if I add them to the GROUP BY statement, it generates output without error, but I get cash flow sums that are essentially Cartesian crosses with all the grouped variables.
So long story short, is there any way for me to get cash flow sums grouped by only the season? If so, any ideas how to do it?
Thank you.
Most databases don't allow column aliases defined in the SELECT list to be referenced in the WHERE or GROUP BY clauses (ORDER BY is usually the exception, but not every engine accepts aliases there either).
For your query you should use months_between(A.reporting_date, B.start_date) instead of the alias season in the GROUP BY and ORDER BY.
Also, your query will return a cross product, as no join condition is specified:
select
months_between(A.reporting_date, B.start_date) as season,
sum(case when A.current_balance is null then B.original_balance
else A.current_balance end) as cashflow
from dataset1 A
JOIN dataset2 B ON --add a join condition
group by months_between(A.reporting_date, B.start_date)
order by months_between(A.reporting_date, B.start_date)
I have a database in which there are 13 different products, sold in 6 different countries.
Prices increase once a year.
Prices need to be calculated using a linear interpolation method. I have 21 different price and quantity increments for each product for each country for each year.
The user needs to be able to see how much an order would cost for any given value (as you would expect).
What the database needs to do (in English!) is to:
If there is a matching quantity from TblOrderDetail in the TblPrices,
use the price for the current product, country and year
if there isn't a matching quantity but the quantity required is greater than 1000 for one product (GT) and greater than 100 for every other product:
Find the highest quantity for the product, country and year (so, 1000 or 100, depending on the product), and calculate a pro-rated price. eg. If someone wanted 1500 of product GT for the UK for 2015, we'd look at the price for 1000 GT in the UK for 2015 and multiply it by 1.5. If 1800 were required, we'd multiply it by 1.8. I haven't been able to get this working yet as I'm looking at it alongside the formula for the next possibility...
If there isn't a matching quantity and the quantity required is less than 1000 for the product GT but 100 for the other products (this is the norm)...
Find the quantity and price for the increment directly below the quantity required by the user for the required product, country and year (let's call these quantitybelow and pricebelow)
Find the quantity and price for the increment directly above the quantity required by the user for the required product, country and year (let's call these quantityabove and priceabove)
Calculate the price for the required number of products for an account holder in a particular country for a given year using this formula.
ActualPrice = PriceBelow + ((PriceAbove - PriceBelow) * (QuantityRequired - QuantityBelow) / (QuantityAbove - QuantityBelow)), where QuantityRequired is the quantity required in the order detail. For example, if 150 units are required and the increments below and above are 100 units at 500 and 200 units at 900, the price would be 500 + ((900 - 500) * (150 - 100) / (200 - 100)) = 700.
I have spent days on this and have sought advice about this before but I am still getting very stuck.
The tables I've been working with to try and make this work are as follows:
TblAccount (primary key is AccountID; it also has a Country field which joins to TblCountry.Code, the primary key of TblCountry)
TblOrders (primary key is OrderID), which joins to TblAccount via the AccountID field and to TblOrderDetail via the OrderID. This table also holds the OrderDate and RecipientID, which links to a person in TblContact - I don't need that here but will need it later to generate an invoice
TblOrderDetail (primary key is DetailID) which joins to TblOrders via OrderID field; TblProducts via ProductID field, and holds the Quantity required as well as the product
TblProducts (primary key is ProductCode) which as well as joining to TblOrderDetail, also joins to TblPrice via the Product field
TblPrices links to TblProducts (as you have just read). I've also created an alias for TblCountry (CountryAliasForProductCode) so I can link it to TblPrices to show the country link. I'm not sure if I needed to do this - it doesn't work whether I do or don't - so I seek guidance again here.
This is the code I've been trying to use (and failing) to get my price and quantity steps above and I hope to replicate it, making a couple of tweaks to get the steps below:
SELECT MIN(TblPrices.stepquantity) AS QuantityAbove, MIN(TblPrices.StepPrice) AS PriceAbove, TblOrders.OrderID, TblOrders.OldOrderID, TblOrders.AccountID, TblOrders.OrderDate, TblOrders.RecipientID, TblOrders.OrderStatus, TblOrderDetail.DetailID, TblOrderDetail.Product, TblOrderDetail.Quantity
FROM (TblCountry INNER JOIN ((TblAccount INNER JOIN TblOrders ON TblAccount.AccountID = TblOrders.AccountID) INNER JOIN (TblOrderDetail INNER JOIN TblProducts ON TblOrderDetail.Product = TblProducts.ProductCode) ON TblOrders.OrderID = TblOrderDetail.OrderID) ON TblCountry.Code = TblAccount.Country) INNER JOIN (TblCountry AS CountryAliasForProduct INNER JOIN TblPrices ON CountryAliasForProduct.Code = TblPrices.CountryCode) ON TblProducts.ProductCode = TblPrices.Product
WHERE (TblPrices.StepQuantity >= TblOrderDetail.Quantity)
AND (TblPrices.CountryCode = TblAccount.Country)
AND (TblOrderDetail.Product = TblPrices.Product)
AND (DATEPART('yyyy', TblPrices.DateEffective) = DATEPART('yyyy', TblOrders.OrderDate));
I've also tried...
I've even tried going back to basics and trying again to generate the steps below in 1 query, then try the steps above in another and finally, create the final calculation in another query.
This is what I have been trying to get my prices and quantities below:
SELECT Max(TblPrices.StepQuantity) AS quantity_below, Max(TblPrices.StepPrice) AS price_below, TblOrderDetail.Quantity, TblAccount.Country
FROM (TblAccount INNER JOIN TblOrders ON TblAccount.AccountID = TblOrders.AccountID) INNER JOIN ((TblProducts INNER JOIN TblPrices ON TblProducts.ProductCode = TblPrices.Product) INNER JOIN TblOrderDetail ON TblOrderDetail.Product = TblProducts.ProductCode) ON TblOrders.OrderID = TblOrderDetail.OrderID
WHERE (((TblPrices.StepQuantity)<=(TblOrderDetail.Quantity)) AND ((TblPrices.CountryCode)=([TblAccount].[Country])) AND ((TblPrices.Product)=([TblOrderDetail].[Product])) AND ((DatePart('yyyy',[TblPrices].[DateApplicable]))=(DatePart('yyyy',[TblOrders].[OrderDate]))))
GROUP BY TblOrderDetail.Quantity, TblAccount.Country;
You may be able to see glaring errors in this but I'm afraid I can't. I've tried re-jigging it and I'm getting nowhere.
I need to be able to tie the information in to the OrderDetail records as the price generated will need to be added to a financial transactions table as a debit amount and will show as an amount owing on statements.
I'm really not very good at SQL. I've read and worked through several self-study books, and I have asked part of this question before, but I really am struggling with it. If anyone has any ideas on how to proceed, or even where I've gone wrong with my code, I'd be delighted, even if you tell me I shouldn't be using SQL. For the record, I originally posted this question on a different forum under Visual Basic. Responses from that forum brought me to SQL - however, anything that works would be good!
I've even tried, using Excel, concatenating Year&Product&Country&Quantity to get a unique product code, interpolating the prices for every quantity between 1 and 1000 for each product, country and year, and bringing them into a TblProductsAndPrices table. In Access, I created a query to concatenate the Year (of the order date from TblOrders) & Product (of TblOrderDetail) & Country (of TblAccount) in order to get the required product code for the order. Another query would then find a price for me. However, any product code that doesn't appear on the list (such as where a quantity isn't listed in TblProductsAndPrices because it is larger than the highest price increment) doesn't get a price.
If there was a workable solution to what I've just described that would generate a price for everything, then I'd be so pleased.
I'd really like to be able to generate an order for any quantity of any product for any account based in any country on any date and retrieve a price, which will be used to "debit" a financial account in the database, show in a transaction history for the account, and appear on statements. I'd also like to be able to do an ad-hoc price check on the spot.
Thank you very much for taking the time to read this. I really appreciate it. If you could offer any help or words of encouragement, I'd be very grateful.
Many thanks
Karen
Maybe no one has suggested an easy solution to the problem, since not all minds work in database thinking.
Easy solution: create one view that returns all the calculated values, not only the final one you need, each as its own column. Then you can use that view inside another view and take one of the values for some rows, another value for other rows, etc.
How to think about it is simple: think in reverse order. Instead of thinking "if that, then I need to calculate such-and-such, else I need this other", think "I need 'such-and-such' and I need 'this other'"; both are columns of an intermediate view. The top-level "if" then becomes another view that selects the correct value and ignores the rest.
Never ever try to solve it all in one step; that can be a really big headache.
Pros: you can isolate the calculated values (needed or not), and the SQL is much easier to write and maintain.
Cons: resource use is bigger than the minimum, but most of the time those extra calculated values do not represent a really big impact.
In terms of the tutorials out there: instead of a top-down method, use a bottom-up method.
Sometimes it is better (as with your example) to calculate all three values (the sentences you wrote in bold), ignoring the "if" part, so you have all three possible values for your order and discard the unwanted ones afterwards, than to try to calculate only one.
Trying to calculate only one is procedural-programming thinking; when working with databases you must usually get rid of such thinking and work in reverse: first do the innermost part of the procedural logic, so all the data is collected, then do the outer selection.
Note: if one of the values cannot be calculated, just generate a Null.
I know it is hard to think in a first-in, last-out (bottom-up) way, but it is great for things like the one you want.
Step 1 (one specific view, or a join of one view per calculation):
Calculate column 1 as the price for the current product, country and year.
Calculate column 2 as the pro-rated price as if the quantity were 1000.
Calculate column 3 as the pro-rated price as if the quantity were 100.
Calculate column 4 as etc.
Calculate column N as etc.
Step 2 (another view, the one you want):
Calculate the "if" part, so you can choose the adequate column from the previous view (you can use an immediate IF or a calculated auxiliary field). A sketch follows.
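Here is a minimal sketch of the two-view idea, with hypothetical view names (VwPriceParts, VwOrderPrice), the country and year conditions omitted for brevity, and CREATE VIEW standing in for an Access saved query:

-- Step 1: an intermediate view with every candidate value as its own column
CREATE VIEW VwPriceParts AS
SELECT od.DetailID, od.Product, od.Quantity,
    (SELECT MAX(p.StepQuantity) FROM TblPrices p
     WHERE p.Product = od.Product AND p.StepQuantity <= od.Quantity) AS QtyBelow,
    (SELECT MIN(p.StepQuantity) FROM TblPrices p
     WHERE p.Product = od.Product AND p.StepQuantity >= od.Quantity) AS QtyAbove,
    (SELECT p.StepPrice FROM TblPrices p
     WHERE p.Product = od.Product AND p.StepQuantity =
        (SELECT MAX(p2.StepQuantity) FROM TblPrices p2
         WHERE p2.Product = od.Product AND p2.StepQuantity <= od.Quantity)) AS PriceBelow,
    (SELECT p.StepPrice FROM TblPrices p
     WHERE p.Product = od.Product AND p.StepQuantity =
        (SELECT MIN(p2.StepQuantity) FROM TblPrices p2
         WHERE p2.Product = od.Product AND p2.StepQuantity >= od.Quantity)) AS PriceAbove
FROM TblOrderDetail AS od;

-- Step 2: the top-level view just picks the adequate column per row
CREATE VIEW VwOrderPrice AS
SELECT DetailID, Product, Quantity,
    IIf(QtyBelow = Quantity, PriceBelow,
        IIf(QtyAbove IS NULL, PriceBelow * Quantity / QtyBelow,
            PriceBelow + (PriceAbove - PriceBelow) * (Quantity - QtyBelow) / (QtyAbove - QtyBelow)
        )) AS ActualPrice
FROM VwPriceParts;

The first IIf branch handles an exact increment match, the second handles quantities above the top step (pro-rating, as in your 1500-of-GT example), and the last one does the linear interpolation from your formula.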
I hope you can follow this way of thinking. I have solved a lot of things like this one (and more complex) by thinking that way, but it is not easy to think like that at first; it needs an extra effort.
I have a database which currently records the number of times someone performs a certain procedure and the scores they have received. The scoring is done by selecting a value of either N, B or C.
I have currently written a query which will count the total number of times a procedure is done and the number of times each score is received.
Here is the result of the query (original: http://www.flickr.com/photos/mattcripps/6673555339/)
and here is the code
TRANSFORM Count(ed.[Entry ID]) AS [CountOfEntry ID]
SELECT ap.AdultProcedureName, ap.Target, Count(ed.[Entry ID]) AS [Total Of Entry ID]
FROM tblAdultProcedures AS ap LEFT JOIN tblEntryData AS ed ON ap.AdultProcedureName = ed.[Adult Procedure]
GROUP BY ap.AdultProcedureName, ap.Target
PIVOT ed.Grade;
If a score of N or B is given, that is deemed below standard; C is deemed at standard. Is there a way I can add something to my query which will show me, as percentages, how many of the procedures were at standard and how many were below?
I really can't get my head round this, so any help would be great.
Thanks in advance
I did something very similar to this on a pretty large scale.
My issue was the need to be able to run queries over specific (but user-variable) timeframes and output similar percentage-of-total results in a report.
I won't get into the date issue, but my solution was to run the SUM function on my specific reject criteria to get totals of the rejects, then use a divide expression to create a new column (a defined expression) in the same query, pulling from the joined "total net production" table, joined by a common reference, the job ID.
For your case it sounds like you want the two failure types, so you would simply add defined expressions dividing each failure mode by your total instances and format them as percents in your output report. To finish the data portion of your report you then need a third expression defining your "non-fail percent", which would be 1.0 - N/total - B/total, both of which you will have previously defined in the query to determine the N and B failure rates. A sketch follows.
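As a hedged sketch against the tables from your question (assuming the Grade column holds the values 'N', 'B' and 'C'), the defined expressions could look like this in Access SQL:

SELECT ap.AdultProcedureName, ap.Target,
    Count(ed.[Entry ID]) AS TotalEntries,
    Sum(IIf(ed.Grade IN ('N','B'), 1, 0)) / Count(ed.[Entry ID]) AS PctBelowStandard,
    Sum(IIf(ed.Grade = 'C', 1, 0)) / Count(ed.[Entry ID]) AS PctAtStandard
FROM tblAdultProcedures AS ap LEFT JOIN tblEntryData AS ed
    ON ap.AdultProcedureName = ed.[Adult Procedure]
GROUP BY ap.AdultProcedureName, ap.Target;

Note that a procedure with no entries would divide by zero, so you may want to guard the ratios with something like IIf(Count(ed.[Entry ID]) = 0, Null, ...); the percent formatting itself is best done in the report.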
Then it's a matter of pulling that information into your report and formatting it. It definitely CAN be done.
Hope this helps.