Sum values in the same field inside a loop - abap

The program I am writing should show the information for every company code, for example:
Fiscal Year |   4030  |   4020  | 4040 | 4050 | 4070 | TOTAL
    1/2010  |  423.12 |   89.79 | ...  | ...  | ...  | ...
    2/2010  |  234.00 |   04.38 | ...  | ...  | ...  | ...
    3/2010  | 432.652 |   98.80 | ...  | ...  | ...  | ...
      ...
   12/2010  | 978.687 | 089.787 | ...  | ...  | ...  | ...
My question is: how can I sum the amounts in field dmbtr grouped by the date in field fkdat? The field fkdat contains many duplicate dates, and I want to sum the amounts for identical dates so that I can show the date and total amount for every plant code. How can I do that inside the loop?
SELECT-OPTIONS:
  bukrs FOR wa_zfi_vbrp_bseg_1-bukrs OBLIGATORY NO INTERVALS.
PARAMETERS:
  p_gjahr LIKE zfi_vbrp_bseg-gjahr OBLIGATORY.
SELECTION-SCREEN BEGIN OF LINE.
SELECTION-SCREEN COMMENT 10(15) text-005.
SELECTION-SCREEN POSITION 35.
SELECT-OPTIONS: p_perde FOR ce1osgc-perde NO-DISPLAY.
SELECTION-SCREEN END OF LINE.

START-OF-SELECTION.
  SELECT zf~gjahr
         zf~bukrs
         zf~dmbtr
         zf~monat
         zf~vbeln
         zf~hkont
         fi~fkdat
         vb~werks
    INTO CORRESPONDING FIELDS OF TABLE it_zfi_vbrp_bseg_1
    FROM zfi_vbrp_bseg AS zf
    INNER JOIN vbrp AS vb ON vb~vbeln EQ zf~vbeln
    INNER JOIN zfi_vbrp AS fi ON zf~vbeln EQ fi~vbeln
    WHERE zf~bukrs IN bukrs
      AND zf~gjahr EQ p_gjahr
    GROUP BY zf~gjahr
             zf~bukrs
             zf~dmbtr
             zf~monat
             zf~vbeln
             zf~hkont
             fi~fkdat
             "zf-perio
             vb~werks.
"-------------------------------------------------------------------------------------------------------------
"-------------------------------------------------------------------------------------------------------------
DATA: "l_budat TYPE budat,
      date  TYPE string,
      date1 TYPE string,
      date2 TYPE string,
      out   TYPE string,
      vvdcs TYPE ce1osgc-vvdcs.
DATA: pos  TYPE i VALUE 40,
      pos2 TYPE i VALUE 57,
      pos3 TYPE i VALUE 75,
      pos4 TYPE i VALUE 93,
      pos5 TYPE i VALUE 110.

IF sy-subrc EQ 0.
  LOOP AT it_zfi_vbrp_bseg_1 INTO wa_zfi_vbrp_bseg_1.
    AT FIRST.
      WRITE: /01(150) sy-uline,
             /01 sy-vline,
             11 'Fiscal Year', 36 sy-vline,
             40 '4030', 55 sy-vline,
             57 '4020', 73 sy-vline,
             75 '4050', 91 sy-vline,
             93 '4040', 108 sy-vline,
             110 '4070', 127 sy-vline,
             129 'Total', 140 sy-vline,
             (150) sy-uline.
    ENDAT.
    IF wa_zfi_vbrp_bseg_1-werks EQ '4030'.
      WRITE AT pos wa_zfi_vbrp_bseg_1-dmbtr.
    ENDIF.
    IF wa_zfi_vbrp_bseg_1-werks EQ '4020'.
      WRITE AT pos2 wa_zfi_vbrp_bseg_1-dmbtr.
    ENDIF.
    IF wa_zfi_vbrp_bseg_1-werks EQ '4050'.
      WRITE AT pos3 wa_zfi_vbrp_bseg_1-dmbtr.
    ENDIF.
    IF wa_zfi_vbrp_bseg_1-werks EQ '4040'.
      WRITE AT pos4 wa_zfi_vbrp_bseg_1-dmbtr.
    ENDIF.
    IF wa_zfi_vbrp_bseg_1-werks EQ '4070'.
      WRITE AT pos5 wa_zfi_vbrp_bseg_1-dmbtr.
    ENDIF.
  ENDLOOP.
ENDIF.

Try using COLLECT in a loop. COLLECT inserts the work area into the target table or, if a row with the same key (non-numeric) fields already exists, adds the numeric fields to that row:
LOOP AT it_zfi_vbrp_bseg_1 INTO wa_zfi_vbrp_bseg_1.
  COLLECT wa_zfi_vbrp_bseg_1 INTO it_zfi_vbrp_bseg_2. " You have to declare a new internal table.
ENDLOOP.

LOOP AT it_zfi_vbrp_bseg_2 INTO wa_zfi_vbrp_bseg_1.
  WRITE ......
  .........
ENDLOOP.
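COLLECT sums all purely numeric fields of the work area into an existing row whose character-like key fields match, and appends a new row otherwise. A minimal sketch of that logic in Python (the field names fkdat/werks/dmbtr come from the question; the data values are made up for illustration):

```python
# Sketch of ABAP COLLECT semantics: rows with identical key fields
# (here fkdat and werks) are merged by summing the numeric fields
# (here dmbtr, the amount).
def collect(rows, key_fields, sum_fields):
    """Aggregate rows the way ABAP COLLECT would."""
    totals = {}
    order = []  # preserve first-seen order, as COLLECT does
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key not in totals:
            totals[key] = dict(row)
            order.append(key)
        else:
            for f in sum_fields:
                totals[key][f] += row[f]
    return [totals[k] for k in order]

rows = [
    {"fkdat": "20100115", "werks": "4030", "dmbtr": 423.12},
    {"fkdat": "20100115", "werks": "4030", "dmbtr": 100.00},
    {"fkdat": "20100116", "werks": "4020", "dmbtr": 89.79},
]
result = collect(rows, key_fields=("fkdat", "werks"), sum_fields=("dmbtr",))
# The two rows with the same date and plant collapse into one summed row.
```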


Reading space delimited text file into SAS

I have a following .txt file:
Mark1[Country1]
type1=1 type2=5
type1=1.50 EUR type2=21.00 EUR
Mark2[Country2]
type1=2 type2=1 type3=1
type1=197.50 EUR type2=201.00 EUR type3= 312.50 EUR
....
I am trying to input it in my SAS program, so that it would look something like that:
Mark Country Type Count Price
1 Mark1 Country1 type1 1 1.50
2 Mark1 Country1 type2 5 21.00
3 Mark1 Country1 type3 NA NA
4 Mark2 Country2 type1 2 197.50
5 Mark2 Country2 type2 2 201.00
6 Mark2 Country2 type3 1 312.50
Or maybe something else, but I need to be able to print a two-way report:
Country1 Country2
Type1 ... ...
Type2 ... ...
Type3 ... ...
But the question is how to read that kind of txt file:
read and separate Mark1[Country1] into two columns, Mark and Country;
retain Mark and Country and read the info for each Type (somehow ignoring the type1= prefixes, maybe using formats) and input it into a table.
Maybe there is a way to use some kind of input template or nested queries to achieve that.
You have three name/value pairs, but the pairs are split across rows. An unusual text file requiring creative input. The INPUT statement has a line-control feature, #n, for reading relative future rows within the implicit DATA step loop.
Example (Proc REPORT)
Read the mark and country from the current row (relative row #1), the counts from relative row #2 using #2, and the prices from relative row #3. After the name/value inputs are made for a given mark and country, perform an array-based pivot, transposing two variables (count and price) at a time into a categorical (type) data form.
Proc REPORT produces a 'two-way' listing. The listing is actually a summary report (cells under count and price are a default SUM aggregate), but each cell has only one contributing value, so the SUM is the original individual value.
data have(keep=Mark Country Type Count Price);
  attrib mark country length=$10;
  infile cards delimiter='[ ]' missover;
  input mark country;
  input #2 #'type1=' count_1 #'type2=' count_2 #'type3=' count_3;
  input #3 #'type1=' price_1 #'type2=' price_2 #'type3=' price_3;
  array counts count_:;
  array prices price_:;
  do _i_ = 1 to dim(counts);
    Type = cats('type',_i_);
    Count = counts(_i_);
    Price = prices(_i_);
    output;
  end;
datalines;
Mark1[Country1]
type1=1 type2=5
type1=1.50 EUR type2=21.00 EUR
Mark2[Country2]
type1=2 type2=1 type3=1
type1=197.50 EUR type2=201.00 EUR type3= 312.50 EUR
;
ods html file='twoway.html';
proc report data=have;
column type country,(count price);
define type / group;
define country / ' ' across;
run;
ods html close;
Output image
Combined aggregation
proc means nway data=have noprint;
class type country;
var count price;
output out=stats max(price)=price_max sum(count)=count_sum;
run;
data cells;
set stats;
if not missing(price_max) then
cell = cats(price_max,'(',count_sum,')');
run;
proc transpose data=cells out=twoway(drop=_name_);
by type;
id country;
var cell;
run;
proc print noobs data=twoway;
run;
You can specify the name of a variable with the DLM= option on the INFILE statement. That way you can change the delimiter depending on the type of line being read.
It looks like you have three lines per group. The first one has the MARK and COUNTRY values, the second has a list of COUNT values, and the third has a list of PRICE values. So something like this should work:
data want ;
  length dlm $2 ;
  length Mark $8 Country $20 rectype $8 recno 8 type $10 value1 8 value2 $8 ;
  infile cards dlm=dlm truncover ;
  dlm='[]';
  input mark country ;
  dlm='= ';
  do rectype='Count','Price';
    do recno=1 by 1 until(type=' ');
      input type value1 @;  /* trailing @ holds the line for the next INPUT */
      if rectype='Price' then input value2 @;
      if type ne ' ' then output;
    end;
    input;
  end;
cards;
Mark1[Country1]
type1=1 type2=5
type1=1.50 EUR type2=21.00 EUR
Mark2[Country2]
type1=2 type2=1 type3=1
type1=197.50 EUR type2=201.00 EUR type3= 312.50 EUR
;
Results:
Obs Mark Country rectype recno type value1 value2
1 Mark1 Country1 Count 1 type1 1.0
2 Mark1 Country1 Count 2 type2 5.0
3 Mark1 Country1 Price 1 type1 1.5 EUR
4 Mark1 Country1 Price 2 type2 21.0 EUR
5 Mark2 Country2 Count 1 type1 2.0
6 Mark2 Country2 Count 2 type2 1.0
7 Mark2 Country2 Count 3 type3 1.0
8 Mark2 Country2 Price 1 type1 197.5 EUR
9 Mark2 Country2 Price 2 type2 201.0 EUR
10 Mark2 Country2 Price 3 type3 312.5 EUR
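For comparison, the same three-lines-per-group parse can be sketched in Python (the regular expressions and dictionary layout here are my own illustration, not SAS semantics):

```python
# Parse groups of three lines: "Mark1[Country1]", a counts line, and a
# prices line. Types absent from a group get None, mirroring the NA
# cells in the desired output.
import re

text = """Mark1[Country1]
type1=1 type2=5
type1=1.50 EUR type2=21.00 EUR
Mark2[Country2]
type1=2 type2=1 type3=1
type1=197.50 EUR type2=201.00 EUR type3= 312.50 EUR"""

def parse(lines):
    rows = []
    for i in range(0, len(lines), 3):
        mark, country = re.match(r"(\w+)\[(\w+)\]", lines[i]).groups()
        counts = dict(re.findall(r"(type\d)=\s*([\d.]+)", lines[i + 1]))
        prices = dict(re.findall(r"(type\d)=\s*([\d.]+)", lines[i + 2]))
        for t in ("type1", "type2", "type3"):
            rows.append({"Mark": mark, "Country": country, "Type": t,
                         "Count": counts.get(t), "Price": prices.get(t)})
    return rows

rows = parse(text.splitlines())
# Six rows: three types for each of the two mark/country groups.
```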

Ignore missing values when creating dummy variable

How can I create a dummy variable in Stata that takes the value of 1 when the variable pax is above 100 and 0 otherwise?
Missing values should be labelled as 0.
My code is the following:
generate type = 0
replace type = 1 if pax > 100
The problem is that Stata labels all missing values as 1 instead of keeping them as 0.
This occurs because Stata views missing values as large positive values. As such, your variable type is set equal to 1 when you request this for all values of pax > 100 (which includes missings).
You can avoid this by explicitly indicating that you do not want missing values replaced as 1:
generate type = 0
replace type = 1 if pax > 100 & pax != .
Consider the toy example below:
clear
input pax
20
30
40
100
110
130
150
.
.
.
end
The following syntax is in fact sufficient:
generate type1 = pax > 100 & pax < .
Alternatively, one can use the missing() function:
generate type2 = pax > 100 & !missing(pax)
Note the use of ! before the function, which negates it, so that only non-missing values of pax can satisfy the condition.
In both cases, the results are the same:
list
+---------------------+
| pax type1 type2 |
|---------------------|
1. | 20 0 0 |
2. | 30 0 0 |
3. | 40 0 0 |
4. | 100 0 0 |
5. | 110 1 1 |
|---------------------|
6. | 130 1 1 |
7. | 150 1 1 |
8. | . 0 0 |
9. | . 0 0 |
10. | . 0 0 |
+---------------------+
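The rule being applied here, 1 only when pax is both non-missing and above 100, can be sketched outside Stata as well; a minimal Python version, with None standing in for Stata's missing value:

```python
# Dummy = 1 when pax > 100, and 0 otherwise -- including when pax is
# missing. Stata treats missing as larger than any number in
# comparisons, so the explicit missing check mirrors
# `pax > 100 & !missing(pax)`.
pax_values = [20, 30, 40, 100, 110, 130, 150, None, None, None]

dummies = [1 if (p is not None and p > 100) else 0 for p in pax_values]
# Matches the `list` output above: 100 itself gets 0 (strictly greater),
# and the three missing observations get 0.
```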

ABL Progress 4gl : For Each with Count in Output-Stream

Progress-Procedure-Editor:
DEFINE STREAM myStream.
OUTPUT STREAM myStream TO 'C:\Temp\BelegAusgangSchnittstelle.txt'.
FOR EACH E_BelegAusgang
WHERE E_BelegAusgang.Firma = '000'
AND E_BelegAusgang.Schnittstelle = '$Standard'
NO-LOCK:
PUT STREAM myStream UNFORMATTED
STRING(E_BelegAusgang.Firma)
'|'
STRING(E_BelegAusgang.BelegNummer)
'|'
STRING(E_BelegAusgang.Schnittstelle)
'|'
SKIP
.
END.
I get this (extraction):
Firma | BelegNr | Schnittstelle
000 | 3 | $Standard
000 | 3 | $Standard
000 | 3 | $Standard
000 | 3 | $Standard
000 | 3 | $Standard
000 | 8 | $Standard
000 | 8 | $Standard
What I need is to COUNT the BelegNr. So I import the data of the TXT to SQL Server.
On Server my query is:
SELECT [BelegNr]
,COUNT(*) AS [Anzahl]
FROM [TestDB].[dbo].[Beleg_Ausgang]
GROUP BY [BelegNr]
ORDER BY [Anzahl]
With that query I got (extraction):
BelegNr Anzahl
3 | 5
8 | 2
Is there a way to put the COUNT directly into the Progress-Code? I mean, I want my result directly from the Progress-Procedure-Editor.
In ABL you use BREAK BY instead of GROUP BY. One limit is that BREAK BY groups AND sorts.
You could for instance have another "FOR EACH" for this:
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.

FOR EACH E_BelegAusgang NO-LOCK
    WHERE E_BelegAusgang.Firma = '000'
      AND E_BelegAusgang.Schnittstelle = '$Standard'
    BREAK BY E_BelegAusgang.BelegNummer:
    iCount = iCount + 1.
    IF LAST-OF(E_BelegAusgang.BelegNummer) THEN DO:
        DISPLAY E_BelegAusgang.BelegNummer iCount.
        iCount = 0.
    END.
END.
You could also incorporate that code in the export but note: that will change the order of the file rows. Maybe that's a problem for you, maybe not!
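The BREAK BY ... LAST-OF pattern amounts to: sort by the group key, count rows, and emit the running count each time the next row starts a new group. A minimal Python sketch of that logic (record values taken from the sample output above):

```python
# Count records per BelegNummer the way BREAK BY ... LAST-OF does:
# sort by the key, then emit the count at each group boundary.
records = ["3", "3", "3", "3", "3", "8", "8"]

counts = []
records.sort()  # BREAK BY both groups AND sorts
icount = 0
for i, beleg in enumerate(records):
    icount += 1
    # LAST-OF fires on the final record of each key value
    last_of = (i == len(records) - 1) or (records[i + 1] != beleg)
    if last_of:
        counts.append((beleg, icount))
        icount = 0
```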

Group and split records in postgres into several new column series

I have data of the form
-----------------------------|
6031566779420 | 25 | 163698 |
6031566779420 | 50 | 98862 |
6031566779420 | 75 | 70326 |
6031566779420 | 95 | 51156 |
6031566779420 | 100 | 43788 |
6036994077620 | 25 | 41002 |
6036994077620 | 50 | 21666 |
6036994077620 | 75 | 14604 |
6036994077620 | 95 | 11184 |
6036994077620 | 100 | 10506 |
------------------------------
and would like to create a dynamic number of new columns by treating each series of (25, 50, 75, 95, 100) and corresponding values as a new series. What I'm looking for as target output is,
--------------------------
| 25 | 163698 | 41002 |
| 50 | 98862 | 21666 |
| 75 | 70326 | 14604 |
| 95 | 51156 | 11184 |
| 100 | 43788 | 10506 |
--------------------------
I'm not sure what the name of the SQL/Postgres operation I want is called, nor how to achieve it. In this case the data has 2 new columns, but I'm trying to formulate a solution that has as many new columns as there are groups of data in the output of the original query.
[Edit]
Thanks for the references to array_agg, that looks like it would be helpful! I should've mentioned this earlier but I'm using Redshift which reports this version of Postgres:
PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.1007
and it does not seem to support this function yet.
ERROR: function array_agg(numeric) does not exist
HINT: No function matches the given name and argument types. You may need to add explicit type casts.
Query failed
PostgreSQL said: function array_agg(numeric) does not exist
Hint: No function matches the given name and argument types. You may need to add explicit type casts.
Is crosstab the type of transformation I should be looking at? Or something else? Thanks again.
I've used array_agg() here
select idx,array_agg(val)
from t
group by idx
This will produce result like below:
idx array_agg
--- --------------
25 {163698,41002}
50 {98862,21666}
75 {70326,14604}
95 {11184,51156}
100 {43788,10506}
As you can see, the second column is an array of the two values of val that correspond to that idx (one from each id).
The following select queries will give you the result with two separate columns.
Method 1:
SELECT idx
      ,col[1] AS col1 -- first value in the array
      ,col[2] AS col2 -- second value in the array
FROM (
    SELECT idx
          ,array_agg(val) AS col
    FROM t
    GROUP BY idx
) s
Method 2:
SELECT idx
      ,(array_agg(val))[1] AS col1 -- first value in the array
      ,(array_agg(val))[2] AS col2 -- second value in the array
FROM t
GROUP BY idx
Result:
idx col1 col2
--- ------ -----
25 163698 41002
50 98862 21666
75 70326 14604
95 11184 51156
100 43788 10506
You can use the array_agg function. Assuming your columns are named A, B, C:
SELECT B, array_agg(C)
FROM table_name
GROUP BY B
That will get you the output in array form. This is as close as you can get to variable columns in a simple query. If you really need variable columns, consider defining a PL/pgSQL procedure to convert the array into columns.
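What array_agg(val) ... GROUP BY idx produces can be sketched in Python: collect the val of each id under its idx, in first-seen order (data taken from the question):

```python
# Group (id, idx, val) rows by idx, collecting vals in first-seen id
# order -- the shape that array_agg(val) ... GROUP BY idx produces.
rows = [
    (6031566779420, 25, 163698), (6031566779420, 50, 98862),
    (6031566779420, 75, 70326), (6031566779420, 95, 51156),
    (6031566779420, 100, 43788),
    (6036994077620, 25, 41002), (6036994077620, 50, 21666),
    (6036994077620, 75, 14604), (6036994077620, 95, 11184),
    (6036994077620, 100, 10506),
]

pivot = {}
for _id, idx, val in rows:
    pivot.setdefault(idx, []).append(val)
# pivot maps each idx to the list of vals, one per id.
```

Note that without an ORDER BY inside array_agg the database does not guarantee the order of values within each array (visible in the row for idx 95 above); the sketch simply uses first-seen order.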

how to find the max 'n' values for data based on SUM of values?

Below is the table
Amt | Val | Location
230 | a | DEL
450 | b | KOL
670 | c | BLR
890 | d | DEL
111 | e | KOL
133 | a | KOL
155 | b | DEL
177 | c | BLR
199 | a | DEL
221 | b | BLR
243 | c | BLR
265 | d | KOL
287 | a | KOL
309 | b | DEL
331 | c | DEL
353 | d | KOL
375 | e | BLR
397 | a | BLR
419 | b | DEL
441 | c | KOL
Out of the values a, b, c, d, e, how do I find the top 2 values for each location, based on the summed amounts of the a's, b's, c's, d's and e's?
I am able to get the sum of the top 2 vals through a pivot table for one location.
How can I get the top 2 vals with their sums of amounts for all locations simultaneously through VBA?
I have posted VBA code for this, but it gives the result for only one location.
(Sorry, I'm not able to upload a snapshot.)
Say your data is in A1 thru C20. You have three unique locations: DEL, KOL, BLR.
In D1 enter:
=SUMPRODUCT(--(A$1:A$20)*(C$1:C$20=C1)) and copy down thru D3
In E1 enter:
=LARGE(D1:D3,1)
In E2 enter:
=LARGE(D1:D3,2)
Should look like:
EDIT:
based upon your comment, the highest two values for DEL would be:
=LARGE(IF(C1:C20="del",A1:A20),1)
and
=LARGE(IF(C1:C20="del",A1:A20),2)
These are array formulas that must be entered with CTRL+SHIFT+ENTER rather than just the ENTER key.
The DMAX function returns the largest number in a column in a list or database, based on a given set of criteria:
http://www.techonthenet.com/excel/formulas/dmax.php
1. Insert a Pivot Table.
2. Add Val to Row Labels.
3. Add Location to Column Labels.
4. Add Amt to the Values field (Sum of Amt).
Now, in the created Pivot Table:
1. In the Column Labels filter, keep only one location (e.g. BLR).
2. In the Row Labels filter, apply Value Filters and select "Top 10..." (the last item).
3. In place of the default 10, enter 2.
4. The table now consists of the top 2 vals with their sum of Amount for the BLR location.
VBA Code for the Same:
Private Sub CommandButton1_Click()
    Dim wkbk As Workbook
    Dim LastRow As Long, LastCol As Long
    Dim rngSource As Range, dst As Range
    Set wkbk = ActiveWorkbook
    With wkbk.Sheets(1)
        LastRow = .Range("A1").End(xlDown).Row
        LastCol = .Range("A1").End(xlToRight).Column
        Set rngSource = .Range("A1", .Cells(LastRow, LastCol))
    End With
    With wkbk.Sheets(2)
        Set dst = .Range("A1")
    End With
    With wkbk
        .Sheets(1).PivotTableWizard _
            SourceType:=xlDatabase, _
            SourceData:=rngSource, _
            TableDestination:=dst, _
            TableName:="Pivotinfo"
    End With
    With wkbk.Sheets(2).PivotTables("Pivotinfo")
        .PivotFields("Val").Orientation = xlRowField
        .PivotFields("Location").Orientation = xlColumnField
        With .PivotFields("Amt")
            .Orientation = xlDataField
            .Function = xlSum
        End With
        With .PivotFields("Location")
            .PivotItems("DEL").Visible = False
            .PivotItems("KOL").Visible = False
        End With
        .PivotFields("Val").AutoShow _
            xlAutomatic, xlTop, 2, "Sum of Amt"
    End With
End Sub
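The computation behind the pivot table — sum Amt per (Location, Val) pair, then keep the two largest Vals per location — can be sketched in Python (data taken from the question's table):

```python
# Sum Amt per (Location, Val) pair, then take the top 2 Vals per
# Location by summed amount -- what the pivot table's "Top 10" filter
# (set to 2) does, but for every location at once.
data = [
    (230, "a", "DEL"), (450, "b", "KOL"), (670, "c", "BLR"),
    (890, "d", "DEL"), (111, "e", "KOL"), (133, "a", "KOL"),
    (155, "b", "DEL"), (177, "c", "BLR"), (199, "a", "DEL"),
    (221, "b", "BLR"), (243, "c", "BLR"), (265, "d", "KOL"),
    (287, "a", "KOL"), (309, "b", "DEL"), (331, "c", "DEL"),
    (353, "d", "KOL"), (375, "e", "BLR"), (397, "a", "BLR"),
    (419, "b", "DEL"), (441, "c", "KOL"),
]

# Nested dict: location -> val -> summed amount.
sums = {}
for amt, val, loc in data:
    sums.setdefault(loc, {}).setdefault(val, 0)
    sums[loc][val] += amt

# Top 2 (val, total) pairs per location, largest total first.
top2 = {loc: sorted(vals.items(), key=lambda kv: kv[1], reverse=True)[:2]
        for loc, vals in sums.items()}
```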