From the table below I want to add together two values taken from two records of the same card: (extraamt from the first record) + (trnamt from the second record). For card 5140560000001183, for example, these are the two records involved:
carno emi recamt lastrecdate status penamt trnamt extraamt
5140560000001183 1016.00 0.00 2014-05-23 R 0.00 1017 13
5140560000001183 1016.00 0.00 2014-05-24 N 30.00 1017 0
The full table:
carno emi recamt lastrecdate status penamt trnamt extraamt
5140560000001183 1016.00 0.00 2014-05-23 R 0.00 1017 13
5140560000001191 880.00 0.00 2014-05-23 R 0.00 880 0
5140560000001142 934.00 0.00 2014-05-23 P 0.00 500 0
5140560000001209 963.00 0.00 2014-05-23 P 0.00 600 0
5140560000001175 1024.00 0.00 2014-05-23 N 0.00 0 0
5140560000001167 1117.00 0.00 2014-05-23 N 0.00 0 0
5140560000001159 834.00 0.00 2014-05-23 N 0.00 0 0
5140560000001183 1016.00 0.00 2014-05-24 N 30.00 1017 0
5140560000001191 880.00 0.00 2014-05-24 N 0.00 880 0
5140560000001142 934.00 0.00 2014-05-24 N 0.00 500 0
5140560000001209 963.00 0.00 2014-05-24 N 0.00 600 0
5140560000001175 1024.00 0.00 2014-05-24 N 0.00 0 0
5140560000001167 1117.00 0.00 2014-05-24 N 0.00 0 0
5140560000001159 834.00 0.00 2014-05-24 N 0.00 0 0
I have tried the query below, but it is not giving me the result I need:
Select
    Case WHEN (lastrecdate = (cast(GETDATE() as DATE)) and CardNo = CardNo and Status in ('N','P')) then trnamt else 0 end +
    Case WHEN (lastrecdate = (cast(GETDATE() as DATE)) and CardNo = CardNo and Status in ('N','P')) then penamt else 0 end +
    Case WHEN (lastrecdate = (select MAX(lastrecdate) from Tbl_Emi WHERE Status = 'R' and CardNo = CardNo)) then extraamt else 0 end as totalamount
from Tbl_Emi where CardNo = CardNo
Please google CrossTab queries / Pivot queries; you can achieve this task with them.
A CrossTab query is great for generating reports and working with aggregate values, and Excel / MS Access give a nice user interface for Pivot Tables. It is a way to transform rows into columns, and it is most often used to generate matrix-style reports.
Look at this blog.
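As a minimal illustration of the idea (a sketch only, using the Tbl_Emi table name from your query and the column names from your listing, with the two sample dates hard-coded):
select carno,
       sum(case when lastrecdate = '2014-05-23' then trnamt   else 0 end) as trnamt_may23,
       sum(case when lastrecdate = '2014-05-24' then trnamt   else 0 end) as trnamt_may24,
       sum(case when lastrecdate = '2014-05-23' then extraamt else 0 end) as extraamt_may23
from Tbl_Emi
group by carno
Each carno then comes out on a single row, with values that used to sit on separate rows placed side by side, so a combination such as extraamt_may23 + trnamt_may24 becomes a simple expression in the select list.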
select tt.carno, t1.extraamt + t2.trnamt as total
from
    (select t.carno,
            MIN(t.lastrecdate) as first,
            MAX(t.lastrecdate) as second
     from dbo.[Table] t
     group by t.carno) tt
    inner join dbo.[Table] t1
        on t1.carno = tt.carno and t1.lastrecdate = tt.first
    inner join dbo.[Table] t2
        on t2.carno = tt.carno and t2.lastrecdate = tt.second
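For card 5140560000001183 in the sample data, for example, tt.first resolves to 2014-05-23 and tt.second to 2014-05-24, so the query returns extraamt + trnamt = 13 + 1017 = 1030 for that card.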
I have the following output file. Please note that this data is dynamic, so there could be more or fewer years and many more categories A, B, C, D...:
2015 2016 2017
EX
FE
B 0.00 -2.00 -1.00
D 0.00 -1.00 0.00
sumFE 0.00 -3.00 -1.00
VE
B 0.00 -3.00 0.00
C -4.00 0.00 0.00
D 0.00 -5.00 0.00
sumVE -4.00 -8.00 0.00
sumE -4.00 -11.00 -1.00
IN
FI
A 8.00 0.00 0.00
C 0.00 0.00 8.00
sumFI 8.00 0.00 8.00
VI
A 0.00 0.00 5.00
B 4.00 0.00 0.00
sumVI 4.00 0.00 5.00
sumI 12.00 0.00 13.00
net 8.00 -11.00 12.00
I am trying to format it like this.
2015 2016 2017
IN
VI
A 0.00 0.00 5.00
B 4.00 0.00 0.00
sumVI 4.00 0.00 5.00
FI
A 8.00 0.00 0.00
C 0.00 0.00 8.00
sumFI 8.00 0.00 8.00
sumI 12.00 0.00 13.00
EX
VE
B 0.00 -3.00 0.00
C -4.00 0.00 0.00
D 0.00 -5.00 0.00
sumVE -4.00 -8.00 0.00
FE
B 0.00 -2.00 -1.00
D 0.00 -1.00 0.00
sumFE 0.00 -3.00 -1.00
sumE -4.00 -11.00 -1.00
net 8.00 -11.00 12.00
I have tried the following script as a start:
#!/usr/bin/env bash
awk '
BEGIN{FS="\t"}
3>NR {print "D" $0}
$1 ~ /^I$/,$1 ~ /^sumI$/ {
print
}
$1 ~ /^E$/,$1 ~ /^sumE$/{
print
}
$1 ~ /net/ {print ORS $0}
' "${#:--}"
The script would go a long way toward swapping the I data and the E data; however, the desired order is not achieved and the I block still comes out last. Can someone please help with this?
It will probably be easier to modify the originating code to use GNU awk's predefined array scanning orders. The key is to switch the scanning order (PROCINFO["sorted_in"]) just before the associated for (index in array) loop.
Adding four lines of code (see # comments) to what I'm guessing is the originating code:
...
END {
for (year = minYear; year <= maxYear; year++) {
printf "%s%s", OFS, year
}
print ORS
PROCINFO["sorted_in"]="#ind_str_desc" # sort cat == { I | E } in descending order
for (cat in ctiys2amounts) {
printf "%s\n\n",(cat=="I") ? "IN" : "EX" # print { IN | EX }
delete catSum
PROCINFO["sorted_in"]="#ind_str_desc" # sort type == { VI | FI } || { VE | FE } in descending order
for (type in ctiys2amounts[cat]) {
print type
delete typeSum
PROCINFO["sorted_in"]="#ind_str_asc" # sort item == { A | B | C | D } in ascending order
for (item in ctiys2amounts[cat][type]) {
printf "%s", item
for (year = minYear; year <= maxYear; year++) {
amount = ctiys2amounts[cat][type][item][year]
printf "%s%0.2f", OFS, amount
typeSum[year] += amount
}
print ""
}
....
This generates:
2015 2016 2017
IN
VI
A 0.00 0.00 5.00
B 4.00 0.00 0.00
sumVI 4.00 0.00 5.00
FI
A 8.00 0.00 0.00
C 0.00 0.00 8.00
sumFI 8.00 0.00 8.00
sumI 12.00 0.00 13.00
EX
VE
B 0.00 -3.00 0.00
C -4.00 0.00 0.00
D 0.00 -5.00 0.00
sumVE -4.00 -8.00 0.00
FE
B 0.00 -2.00 -1.00
D 0.00 -1.00 0.00
sumFE 0.00 -3.00 -1.00
sumE -4.00 -11.00 -1.00
net 8.00 -11.00 12.00
I am wondering if what I am trying to do is possible. I believe it is, using the PIVOT function in T-SQL, but I don't have enough experience with PIVOT to know where to start.
Basically I'm trying to take the following temp table, #tmpbudgetdata (truncated for simplicity):
Account Description BudgetAmount Period
-------------------- ---------------------------------------------------------------------------------------------------- --------------------- --------------------
4001 Mood Embedded Account 0.00 1
4001 Mood Embedded Account 0.00 2
4001 Mood Embedded Account 0.00 3
4001 Mood Embedded Account 0.00 4
4001 Mood Embedded Account 0.00 5
4001 Mood Embedded Account 0.00 6
4001 Mood Embedded Account 0.00 7
4001 Mood Embedded Account 0.00 8
4001 Mood Embedded Account 0.00 9
4001 Mood Embedded Account 0.00 10
4001 Mood Embedded Account 0.00 11
4001 Mood Embedded Account 0.00 12
4003 DBS Music 0.00 1
4003 DBS Music 0.00 2
4003 DBS Music 0.00 3
4003 DBS Music 0.00 4
4003 DBS Music 0.00 5
4003 DBS Music 0.00 6
4003 DBS Music 0.00 7
4003 DBS Music 0.00 8
4003 DBS Music 0.00 9
4003 DBS Music 0.00 10
4003 DBS Music 0.00 11
4003 DBS Music 0.00 12
4010 Sales - Software 5040.00 1
4010 Sales - Software 0.00 2
4010 Sales - Software 6280.56 3
4010 Sales - Software 6947.93 4
4010 Sales - Software 4800.00 5
4010 Sales - Software 0.00 6
4010 Sales - Software 2400.00 7
4010 Sales - Software 2550.00 8
4010 Sales - Software 4800.00 9
4010 Sales - Software 2400.00 10
4010 Sales - Software 0.00 11
4010 Sales - Software 2400.00 12
4015 New Install Revenue 0.00 1
4015 New Install Revenue 0.00 2
4015 New Install Revenue 0.00 3
4015 New Install Revenue 3844.79 4
4015 New Install Revenue 0.00 5
4015 New Install Revenue 0.00 6
4015 New Install Revenue 0.00 7
4015 New Install Revenue 0.00 8
4015 New Install Revenue 0.00 9
4015 New Install Revenue 0.00 10
4015 New Install Revenue 0.00 11
4015 New Install Revenue 0.00 12
and turning it into something like this:
Account Description Period1 Period2 Period3 Period4 Period5 Period6 Period7 Period8 Period9 Period10 Period11 Period12
------- --------------- -------- ------- -------- ------ ------- ------- -------- ------ ------- -------- -------- --------
4001 Mood Enabled... 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
4003 Dbs Music 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
4010 Sales - Software 5040.00 0.00 6280.56 6947.93 4800.00 0.00 2400.00 2550.00 4800.00 2400.00 0.00 2400.00
...etc...
Basically just grouping via the Account column (the description is the same per account) and then taking the period values and pivoting them horizontally.
I know I could do it with a cursor and loop through, but I'm wondering if this is possible with a PIVOT or by other means.
Thanks in advance
A simple PIVOT should do the trick
Example
Select *
From (
Select [Account]
,[Description]
,Period = concat('Period',Period)
,[BudgetAmount]
From YourTable
) src
Pivot (sum([BudgetAmount]) for Period in ( [Period1],[Period2],[Period3],[Period4],[Period5],[Period6],[Period7],[Period8],[Period9],[Period10],[Period11],[Period12] ) ) pvt
Returns
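With the truncated sample above, that comes out as one row per account:
Account Description           Period1  Period2 Period3  Period4  Period5  Period6 Period7  Period8  Period9  Period10 Period11 Period12
4001    Mood Embedded Account 0.00     0.00    0.00     0.00     0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00
4003    DBS Music             0.00     0.00    0.00     0.00     0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00
4010    Sales - Software      5040.00  0.00    6280.56  6947.93  4800.00  0.00    2400.00  2550.00  4800.00  2400.00  0.00     2400.00
4015    New Install Revenue   0.00     0.00    0.00     3844.79  0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00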
I have been stuck on a case for two days and I need your assistance, please.
The table is as below:
Sample Table
SR UID Flag Fees
1 AAA 1 100.00
2 AAA 0 0.00
3 AAA 0 0.00
4 AAA 1 120.00
5 AAA 0 0.00
6 AAA 0 0.00
7 AAA 1 140.00
1 BBB 1 200.00
2 BBB 0 0.00
3 BBB 0 0.00
4 BBB 0 0.00
5 BBB 0 0.00
6 BBB 0 0.00
7 BBB 1 400.00
How can I use the FIRST_VALUE function to replace the 0.00 values in the Fees column with the fee from the most recent row where Flag = 1, partitioned by UID?
The result should be as follows:
Results
SR UID Flag Fees First_Value
1 AAA 1 100.00 100.00
2 AAA 0 0.00 100.00
3 AAA 0 0.00 100.00
4 AAA 1 120.00 120.00
5 AAA 0 0.00 120.00
6 AAA 0 0.00 120.00
7 AAA 1 140.00 140.00
1 BBB 1 200.00 200.00
2 BBB 0 0.00 200.00
3 BBB 0 0.00 200.00
4 BBB 0 0.00 200.00
5 BBB 0 0.00 200.00
6 BBB 0 0.00 200.00
7 BBB 1 400.00 400.00
One idea, using FIRST_VALUE as the OP was trying to:
CREATE TABLE #Sample (SR int, UID char(3), Flag bit, Fees decimal(5,2));
INSERT INTO #Sample
VALUES
(1,'AAA',1,100.00),
(2,'AAA',0,0.00 ),
(3,'AAA',0,0.00 ),
(4,'AAA',1,120.00),
(5,'AAA',0,0.00 ),
(6,'AAA',0,0.00 ),
(7,'AAA',1,140.00),
(1,'BBB',1,200.00),
(2,'BBB',0,0.00 ),
(3,'BBB',0,0.00 ),
(4,'BBB',0,0.00 ),
(5,'BBB',0,0.00 ),
(6,'BBB',0,0.00 ),
(7,'BBB',1,400.00);
WITH Groups AS(
    SELECT *,
           -- Difference of two row numbers: each Flag = 1 row is grouped with the
           -- Flag = 0 rows that follow it (gaps-and-islands).
           ROW_NUMBER() OVER (PARTITION BY UID ORDER BY SR) -
           ROW_NUMBER() OVER (PARTITION BY UID, Flag ORDER BY SR) AS Grp
    FROM #Sample)
SELECT SR, UID, Flag, Fees,
       -- The first group per UID comes out split (its Flag = 1 row gets Grp 0, the rows
       -- after it get Grp 1), so 0 is mapped to 1 before taking the first fee per group.
       FIRST_VALUE(Fees) OVER (PARTITION BY UID, CASE Grp WHEN 0 THEN 1 ELSE Grp END ORDER BY SR) AS First_Value
FROM Groups
ORDER BY UID, SR;
DROP TABLE #Sample
I'm trying to generate a summary row using ROLLUP grouping. Here is my query:
SELECT nic as NIC, branch_id, SUM(as_share),
       SUM(as_deposit) as as_deposit, SUM(as_credits) as as_credits,
       SUM(as_fixed) as as_fixed, SUM(as_ira) as as_ira, SUM(as_saviya) as as_saviya
FROM As_Member_Account_Details
GROUP BY nic,branch_id
WITH ROLLUP
But it gives me this output:
112233 1 30.00 0.00 0.00 50.00 0.00 0.00
112233 2 20.00 0.00 0.00 0.00 0.00 0.00
112233 3 0.00 0.00 0.00 0.00 0.00 0.00
112233 NULL 50.00 0.00 0.00 50.00 0.00 0.00
NULL NULL 50.00 0.00 0.00 50.00 0.00 0.00
The second-to-last row is unnecessary; there should be only three data rows plus one summary row. How can I eliminate that row?
GROUPING SETS allows more granular control than ROLLUP when aggregating data.
SELECT nic as NIC
, branch_id,SUM(as_share)
, SUM(as_deposit) as as_deposit
, SUM(as_credits) as as_credits
, SUM(as_fixed) as as_fixed
, SUM(as_ira) as as_ira
, SUM(as_saviya) as as_saviya
FROM As_Member_Account_Details
GROUP BY GROUPING SETS ((nic,branch_id),())
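With the data from the question, this returns only the three detail rows plus the grand total:
112233 1 30.00 0.00 0.00 50.00 0.00 0.00
112233 2 20.00 0.00 0.00 0.00 0.00 0.00
112233 3 0.00 0.00 0.00 0.00 0.00 0.00
NULL NULL 50.00 0.00 0.00 50.00 0.00 0.00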
WITH CTE_YourQuery AS
(
    SELECT nic as NIC, branch_id, SUM(as_share) as as_share,
           SUM(as_deposit) as as_deposit, SUM(as_credits) as as_credits,
           SUM(as_fixed) as as_fixed, SUM(as_ira) as as_ira, SUM(as_saviya) as as_saviya
FROM As_Member_Account_Details
GROUP BY nic,branch_id
WITH ROLLUP
)
SELECT *
FROM CTE_YourQuery
WHERE NOT (nic IS NOT NULL AND branch_id IS NULL)
I have a table (parts) where I store when an item was requested and when it was issued. With this, I can easily compute each item's turn-around time ("TAT"). What I'd like to do is have another column ("Computed") where any overlapping request-to-issue dates are properly accounted for.
RecID Requested Issued TAT Computed
MD0001 11/28/2012 12/04/2012 6.00 0.00
MD0002 11/28/2012 11/28/2012 0.00 0.00
MD0003 11/28/2012 12/04/2012 6.00 0.00
MD0004 11/28/2012 11/28/2012 0.00 0.00
MD0005 11/28/2012 12/10/2012 12.00 0.00
MD0006 11/28/2012 01/21/2013 54.00 54.00
MD0007 11/28/2012 11/28/2012 0.00 0.00
MD0008 11/28/2012 12/04/2012 6.00 0.00
MD0009 01/29/2013 01/30/2013 1.00 1.00
MD0010 01/29/2013 01/30/2013 1.00 0.00
MD0011 02/05/2013 02/06/2013 1.00 1.00
MD0012 02/07/2013 03/04/2013 25.00 25.00
MD0013 03/07/2013 03/14/2013 7.00 7.00
MD0014 03/07/2013 03/08/2013 1.00 0.00
MD0015 03/13/2013 03/25/2013 12.00 11.00
MD0016 03/20/2013 03/21/2013 1.00 0.00
Totals 133.00 99.00 <- waiting for parts TAT summary
In the above, I manually filled in the ("Computed") column so that there is an example of what I'm trying to accomplish.
NOTE: Notice how MD0013 affects the computed time for MD0015 because MD0013 was "computed" first. It could equally have been the other way around (MD0015 computed first and MD0013 adjusted accordingly); the net result is the same: one overlapping day is removed.
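One way to get at the "Computed" total is to merge overlapping request-to-issue ranges first and then sum the day counts of the merged ranges. A minimal T-SQL sketch (assuming SQL Server 2012 or later for the window frames, and a table dbo.parts with date columns Requested and Issued; names taken from the description above, adjust as needed):
WITH Ordered AS (
    SELECT Requested, Issued,
           -- latest issue date seen on any earlier row (ordered by request date)
           MAX(Issued) OVER (ORDER BY Requested, Issued
                             ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS PrevMaxIssued
    FROM dbo.parts
), Islands AS (
    SELECT Requested, Issued,
           -- start a new island whenever a row does not overlap anything before it
           SUM(CASE WHEN PrevMaxIssued IS NULL OR Requested > PrevMaxIssued THEN 1 ELSE 0 END)
               OVER (ORDER BY Requested, Issued ROWS UNBOUNDED PRECEDING) AS IslandId
    FROM Ordered
), Merged AS (
    SELECT IslandId, MIN(Requested) AS StartDate, MAX(Issued) AS EndDate
    FROM Islands
    GROUP BY IslandId
)
SELECT SUM(DATEDIFF(DAY, StartDate, EndDate)) AS ComputedTotal
FROM Merged;
For the rows above this gives 54 + 1 + 1 + 25 + 18 = 99, which matches the manually filled summary; attributing the merged days back to individual RecIDs (the per-row Computed column) is then a matter of deciding which record in each overlap "wins", as in the MD0013 / MD0015 note.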