I am stuck on a problem that I want to solve with the REDUCE syntax.
I have this internal table.
For each purchase order, I need to sum the values of the "quantity" column into a column called "Total Quantity", and likewise sum the "price" column into a column called "Total Price".
This is the code I have right now:
LOOP AT it_numpos INTO DATA(ls_numpos).
  lv_valort = lv_valort + ls_numpos-netpr. " purchase order total price
  lv_cantt  = lv_cantt  + ls_numpos-menge. " purchase order total quantity
  AT END OF ebeln.
    ls_numpos-zmenge3 = lv_cantt.
    ls_numpos-znetpr6 = lv_valort.
    MODIFY it_numpos FROM ls_numpos TRANSPORTING zmenge3 znetpr6
           WHERE ebeln = ls_numpos-ebeln.
    CLEAR: lv_cantt, ls_numpos, lv_valort.
  ENDAT.
ENDLOOP.
Is it possible to transform this code to the new ABAP syntax?
I don't think REDUCE is the right tool for the job here, as it is meant to reduce the values of a table to one single value. In your case there is not one single value, since you're calculating new totals for each purchase order. You would need to somehow loop over the table grouping the items together, then use REDUCE, then loop again to assign the values back into the table. That would rather complicate the code, and doing so just for the sake of using new syntax is probably not worth the trouble. I think a LOOP AT is the better choice here, though I'd use a LOOP AT ... GROUP BY and then two LOOP AT GROUP loops, which makes the whole processing quite readable:
LOOP AT order_items ASSIGNING FIELD-SYMBOL(<order_item>)
     GROUP BY <order_item>-id INTO DATA(order).
  DATA(total_price) = 0.
  LOOP AT GROUP order ASSIGNING <order_item>.
    total_price = total_price + <order_item>-price.
  ENDLOOP.
  LOOP AT GROUP order ASSIGNING <order_item>.
    <order_item>-total_price = total_price.
  ENDLOOP.
ENDLOOP.
However whether that is better than group level processing is up to you.
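For completeness, if you do want REDUCE in the picture, it can at least replace the first inner loop: inside a LOOP AT ... GROUP BY you can iterate over the current group with FOR ... IN GROUP. A minimal sketch, assuming a packed price type (adjust t_price to your actual line type):
TYPES t_price TYPE p LENGTH 15 DECIMALS 2.

LOOP AT order_items ASSIGNING FIELD-SYMBOL(<order_item>)
     GROUP BY <order_item>-id INTO DATA(order).
  " Sum the prices of the current group in a single expression
  DATA(total_price) = REDUCE t_price( INIT sum TYPE t_price
                                      FOR item IN GROUP order
                                      NEXT sum = sum + item-price ).
  LOOP AT GROUP order ASSIGNING <order_item>.
    <order_item>-total_price = total_price.
  ENDLOOP.
ENDLOOP.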
In this query, I want to add a new column that gives the SUM of a.VolumetricCharge, but only for rows where PremiseProviderBillings.BillingCategory = 'Water'. I don't want to add that condition in the obvious place (the WHERE clause), since that would limit the rows returned; I only want it to affect the new column's value.
SELECT b.customerbillid,
       -- Here I need SUM(a.VolumetricCharge), but only where a.BillingCategory = 'Water'
Sum(a.volumetriccharge) AS Volumetric,
Sum(a.fixedcharge) AS Fixed,
Sum(a.vat) AS VAT,
Sum(a.discount) + Sum(deferral) AS Discount,
Sum(Isnull(a.estimatedconsumption, 0)) AS Consumption,
Count_big(*) AS Records
FROM dbo.premiseproviderbillings AS a WITH (nolock)
LEFT JOIN dbo.premiseproviderbills AS b WITH (nolock)
ON a.premiseproviderbillid = b.premiseproviderbillid
-- Cannot add a where here since that would limit the results and change the output
GROUP BY b.customerbillid;
Bit of a tricky one, as what you're asking for will definitely affect performance (you're asking SQL Server to do more work, after all!).
However, we can add a column to your results that performs a conditional sum, so that it does not affect the results of the other columns.
The answer lies in using a CASE expression:
Sum(
    CASE
        WHEN a.billingcategory = 'Water' THEN a.volumetriccharge
        ELSE 0
    END
) AS WaterVolumetric
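Slotted into your original query (same aliases, other columns unchanged), the extra column would look like this:
SELECT b.customerbillid,
       Sum(CASE
               WHEN a.billingcategory = 'Water' THEN a.volumetriccharge
               ELSE 0
           END) AS WaterVolumetric,
       Sum(a.volumetriccharge) AS Volumetric,
       Sum(a.fixedcharge) AS Fixed,
       Sum(a.vat) AS VAT,
       Sum(a.discount) + Sum(deferral) AS Discount,
       Sum(Isnull(a.estimatedconsumption, 0)) AS Consumption,
       Count_big(*) AS Records
FROM dbo.premiseproviderbillings AS a WITH (nolock)
LEFT JOIN dbo.premiseproviderbills AS b WITH (nolock)
    ON a.premiseproviderbillid = b.premiseproviderbillid
GROUP BY b.customerbillid;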
I am looking for a function module that executes the calculation schema for an arbitrary material.
When you open ME23N and look at the item details, the Conditions tab shows a table with the base price, the various conditions, and below them the end price. The price determination calculates (base price + conditions) * quantity as the net value and then divides it by the quantity, which can lead to rounding issues: a calculated value of 4.738 gets rounded to 4.74, which is stored as the net price. When you then calculate net price * quantity, the result can differ from the original value printed on the purchase document.
Since the purchase document value is not stored in EKPO, my goal is to re-evaluate it by simply calling a function module with the material number, the calculation schema, and any other necessary parameters, so that it gives me the actual value that is printed on the document.
Is there any function module that can do this, or do I have to code the logic myself?
As I wrote in my comment, the solution is the function module BAPI_PO_GETDETAIL1. If you supply the PO number, you get back several tables containing the information that is displayed in the PO create/view transactions. One of them is the itab POCOND, which holds all conditions. Then you just have to read this itab, calculate the values, and add them up.
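For reference, the snippet below assumes declarations along these lines (POCOND is typed with the BAPI structure BAPIMEPOCOND; check the field and type names against your system):
" Assumed declarations for the snippet below
DATA: lv_ebeln  TYPE ebeln,                          " purchase order number
      lv_ebelp  TYPE ebelp,                          " item number
      lv_netwr  TYPE bapicurr_d,                     " resulting net value
      gt_pocond TYPE STANDARD TABLE OF bapimepocond,
      gs_pocond TYPE bapimepocond.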
lv_ebeln = '4711'.
lv_ebelp = '00010'.

" Call the FM to get the detail data for one PO, including all items
CALL FUNCTION 'BAPI_PO_GETDETAIL1'
  EXPORTING
    purchaseorder = lv_ebeln
  TABLES
    pocond        = gt_pocond.

" Loop over the itab and only read the entries for item 10
LOOP AT gt_pocond INTO gs_pocond
     WHERE itm_number = lv_ebelp.
  " Get the net value from condition type NAVS
  IF gs_pocond-cond_type = 'NAVS'.
    lv_netwr = gs_pocond-conbaseval.
  ENDIF.
ENDLOOP.
I have data loaded in a table called Trades. Now I need to query this table, find the rows that satisfy a particular condition, and produce the trade amount.
Here is the requirement:
TradeAmt = 0
Loop over all Trades
    IF TradeId is 35
        IF type = 'I'
            TradeAmt = TradeAmt + col_TradeAmt
        ELSE
            TradeAmt = TradeAmt + col_TradeAmtOverlap
        END-IF
Return TradeAmt
Data:
Row1: tradeid=35, type=I, col_TradeAmt=10, col_TradeAmtOverlap=20
Row2: tradeid=35, type=S, col_TradeAmt=30, col_TradeAmtOverlap=40
Output: TradeAmt=50
How can I write this using SQL?
Well, in SQL you don't really loop over a sequence.
You write a statement that describes what you want to get from the set of data (e.g. the Trades table).
In your case, you want to accumulate all the elements in some way and provide that accumulation as the result; you can do that with an aggregate function like SUM.
Something along these lines could work. Note that I'm nesting two queries here: the inner one decides which column to treat as the "Amount" to accumulate, depending on the Type of the trade, and also filters to only the trade with Id 35; the outer query performs the sum aggregate over all amounts:
SELECT SUM("Amount") AS "TradeAmt"
FROM (SELECT
          CASE
              WHEN Type = 'I' THEN col_TradeAmt
              ELSE col_TradeAmtOverlap
          END AS "Amount"
      FROM Trades
      WHERE TradeId = 35) AS Trades35;
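Equivalently, you can skip the nesting and fold the CASE expression directly into the aggregate; this is the same conditional-sum pattern as in the earlier answer:
SELECT SUM(CASE
               WHEN Type = 'I' THEN col_TradeAmt
               ELSE col_TradeAmtOverlap
           END) AS TradeAmt
FROM Trades
WHERE TradeId = 35;
Against the sample data above, both forms return TradeAmt = 50 (10 from the type 'I' row plus 40 from the overlap column of the type 'S' row).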
I'm trying to rank (A, B, C) a list of customers according to their profitability, which is calculated as the amount of each sale multiplied by the product profitability (each product has a profitability value assigned). Hence, Profit = SaleAmount * ProductProfitability.
To rank every customer, I have a pivot table with the customer id (CustID) as dimension and two expressions:
1) = SaleAmount*ProductProfitability
2) = if(SaleAmount*ProductProfitability > $(vPercentile75Profit), 'A', if(SaleAmount*ProductProfitability > $(vPercentile25Profit), 'B', 'C'))
Expression 2) works correctly if I fix the values of vPercentile75Profit and vPercentile25Profit, but obviously I need this to be dynamic.
For that I've defined those variables as (same for both, just switching 0.75 with 0.25):
vPercentile75Profit =Fractile(aggr(sum({$<ProductProfitability = {'>0'} >} SaleAmount*ProductProfitability/100),CustID), 0.75)
If I understand correctly, this calculates a list of each customer's profitability and then takes the 75th percentile of that list (which is a single value). This works great if I show the value in a text box, for example; however, if I use it in my table, it takes a different percentile for each customer (since CustID is in the dimension).
How can I bypass this? The percentiles must be the same for every customer, but I cannot find a way.
Thanks in advance, any help will be greatly appreciated!
Nothing works better for finding the answer than asking your question to others. It was as simple as adding TOTAL to the variable definition:
vPercentile75Profit =Fractile(TOTAL aggr(sum({$<ProductProfitability = {'>0'} >} SaleAmount*ProductProfitability/100),CustID), 0.75)
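The 25th-percentile variable gets the same TOTAL qualifier, only with the fractile changed (per the note above, the two variables differ only in 0.75 vs. 0.25):
vPercentile25Profit =Fractile(TOTAL aggr(sum({$<ProductProfitability = {'>0'} >} SaleAmount*ProductProfitability/100),CustID), 0.25)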
I have a table in MS Access which has stock prices arranged like
Ticker1, 9:30:00, $49.01
Ticker1, 9:30:01, $49.08
Ticker2, 9:30:00, $102.02
Ticker2, 9:30:01, $102.15
and so on.
I need to do a calculation where I compare the price in one row with the immediately previous price for the same ticker (and if the price movement is greater than X% in one second, I need to report the instance separately).
If I were doing this in Excel, it would be a fairly simple formula, but I have a few million rows of data, so that's not an option.
Any suggestions on how I could do it in MS Access?
I am open to any kind of solutions (with or without SQL or VBA).
Update:
I ended up traversing my records with ADODB.Recordset in nested loops; code below. I thought it was a good idea, and the logic worked for a small table (20k rows). But when I ran it on a larger table (3M rows), Access ballooned to its 2 GB limit without finishing the task (because of temporary tables; the size of the original table was more like ~300 MB). Posting it here in case it helps someone with smaller data sets.
Do While Not rstTickers.EOF
    myTicker = rstTickers!ticker
    rstDates.MoveFirst
    Do While Not rstDates.EOF
        myDate = rstDates!Date_Only
        ' Get all prices for a given ticker on a given date
        strSql = "select * from Prices where ticker = """ & myTicker & """ and Date_Only = #" & myDate & "#"
        rst.Open strSql, cn, adOpenKeyset, adLockOptimistic ' needed to open in editable mode
        rst.MoveFirst
        sPrice1 = rst!Open_Price
        rst!Row_Num = i
        rst.MoveNext
        Do While Not rst.EOF
            i = i + 1
            rst!Row_Num = i
            rst!Previous_Price = sPrice1
            sPrice2 = rst!Open_Price
            rst!Price_Move = Round(Abs((sPrice2 / sPrice1) - 1), 6)
            sPrice1 = sPrice2
            rst.MoveNext
        Loop
        i = i + 1
        rst.Close
        rstDates.MoveNext
    Loop
    rstTickers.MoveNext
Loop
If the data is always one second apart without any milliseconds, then you can join the table to itself on the Ticker ID and the time offsetting by one second.
Otherwise, if there is no sequence counter of some sort to join on, you will need to create one. You can do this with a "ranking" query. There are multiple approaches; you can try each and see which one works fastest in your situation.
One approach is to use a subquery that returns the number of rows before the current row. Another approach is to join the table to itself on all the rows before it and do a GROUP BY and COUNT. Both approaches produce the same results, but depending on the nature of your data, how it's structured, and what indexes you have, one will be faster than the other.
Once you have a "rank column", you do the procedure described in the first paragraph, but instead of joining on an offset of time, you join on an offset of rank.
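To make that concrete, here is a sketch in Access SQL, assuming the table is Prices(Ticker, Time_Date, Open_Price); the query names and the 1% threshold are placeholders. First the ranking query, saved as RankedPrices, which counts how many earlier rows exist for the same ticker:
SELECT p.Ticker, p.Time_Date, p.Open_Price,
       (SELECT Count(*)
        FROM Prices AS q
        WHERE q.Ticker = p.Ticker
          AND q.Time_Date < p.Time_Date) AS RowNum
FROM Prices AS p;
Then the self-join on an offset of one rank, reporting only the big moves:
SELECT cur.Ticker, cur.Time_Date,
       Abs(cur.Open_Price / prev.Open_Price - 1) AS Price_Move
FROM RankedPrices AS cur
INNER JOIN RankedPrices AS prev
    ON cur.Ticker = prev.Ticker
   AND cur.RowNum = prev.RowNum + 1
WHERE Abs(cur.Open_Price / prev.Open_Price - 1) > 0.01;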
I ended up moving my data to SQL Server (which had its own issues). I added a row number column (Row_Num) like this:
ALTER TABLE Prices ADD Row_Num INT NOT NULL IDENTITY (1,1)
It worked for me (I think) because my underlying data was already in the order I needed it to be in. I've read enough comments saying you shouldn't rely on this, because you don't know in what order the server stores the data.
Anyway, after that it was a join of the table on itself. It took me a while to figure out the syntax (I am new to SQL). Adding the SQL here for reference (works on SQL Server but not Access):
UPDATE A
SET A.Previous_Price = B.Open_Price
FROM Prices AS A
INNER JOIN Prices AS B
    ON A.Ticker = B.Ticker
   AND A.Date_Only = B.Date_Only
   AND A.Row_Num = B.Row_Num + 1;
BTW, I first had to add and populate the Date_Only column like this (works in Access but not on SQL Server):
UPDATE Prices SET Prices.Date_Only = Format([Time_Date],"mm/dd/yyyy");
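With Previous_Price populated, flagging the outsized moves is then a single query; a sketch, using 1% as a stand-in for the X% threshold and the same rounding as in the VBA above:
SELECT Ticker, Time_Date, Previous_Price, Open_Price,
       Round(Abs(Open_Price / Previous_Price - 1), 6) AS Price_Move
FROM Prices
WHERE Previous_Price IS NOT NULL
  AND Abs(Open_Price / Previous_Price - 1) > 0.01;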
I think the row-numbering solution described by @Rabbit should work better (broadly speaking); I just haven't had the time to try it out. It took me a whole day to get this far.