Access to SQL Query Conversion containing FIRST() - sql

I am converting an MS Access query to a SQL Server stored procedure. I get to this point:
SELECT
AuthNum, AuthStatus, DateCreated,
MIN(DateInitiated) AS DateInitiated,
EventClassification,
FIRST(PlaceOfService) AS PlaceOfService,
Lob, MemId,
MAX(NoticeDate) AS NoticeDate,
MAX(Tat) AS Tat,
FIRST(StaffId) AS StaffId
FROM
PA_TAT_Detailed
GROUP BY
AuthNum, AuthStatus, DateCreated, EventClassification, Lob, MemId
HAVING
((FIRST(PlaceOfService) <> 'Inpatient Hospital')
AND (FIRST(PlaceOfService) <> 'Office - Dental')
AND (FIRST(PlaceOfService) <> 'Dialysis Center'))
AND
((MAX(Tat) Is Null) OR ((MAX(Tat) >= 0) AND (MAX(Tat) <= 28)))
ORDER BY
AuthNum;
But I don't know how to convert the FIRST operator. Any thoughts? Do I need to add an ORDER BY associated with the GROUP BY so I can TAKE 1, perhaps?
Does MIN give the same result in this case?
BTW PlaceOfService, StaffId are strings.

Something like this should be pretty close. Double check that this returns the right info.
SELECT top 1 AuthNum
, AuthStatus
, DateCreated
, MIN(DateInitiated) AS DateInitiated
, EventClassification
, PlaceOfService
, Lob
, MemId
, MAX(NoticeDate) AS NoticeDate
, MAX(Tat) AS Tat
, StaffId
FROM PA_TAT_Detailed
where PlaceOfService not in ('Inpatient Hospital', 'Office - Dental', 'Dialysis Center')
GROUP BY AuthNum
, AuthStatus
, DateCreated
, EventClassification
, PlaceOfService
, Lob
, MemId
, StaffId
HAVING MAX(Tat) Is Null
OR (MAX(Tat) >= 0 AND MAX(Tat) <= 28)
ORDER BY AuthNum;
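If you need something closer to Access's FIRST() semantics (the value from the first row of each group, rather than also grouping by PlaceOfService and StaffId), one common SQL Server pattern is FIRST_VALUE() in a CTE followed by the GROUP BY. This is only a sketch; it assumes DateInitiated is what defines "first", so adjust the ORDER BY until it matches the Access result. MIN() gives the same answer only when the alphabetically smallest value also happens to be the first one.
WITH Src AS (
    SELECT *,
           FIRST_VALUE(PlaceOfService) OVER (
               PARTITION BY AuthNum, AuthStatus, DateCreated, EventClassification, Lob, MemId
               ORDER BY DateInitiated) AS FirstPlaceOfService,
           FIRST_VALUE(StaffId) OVER (
               PARTITION BY AuthNum, AuthStatus, DateCreated, EventClassification, Lob, MemId
               ORDER BY DateInitiated) AS FirstStaffId
    FROM PA_TAT_Detailed
)
SELECT AuthNum, AuthStatus, DateCreated,
       MIN(DateInitiated) AS DateInitiated,
       EventClassification,
       MIN(FirstPlaceOfService) AS PlaceOfService, -- constant within each group
       Lob, MemId,
       MAX(NoticeDate) AS NoticeDate,
       MAX(Tat) AS Tat,
       MIN(FirstStaffId) AS StaffId
FROM Src
GROUP BY AuthNum, AuthStatus, DateCreated, EventClassification, Lob, MemId
HAVING MIN(FirstPlaceOfService) NOT IN ('Inpatient Hospital', 'Office - Dental', 'Dialysis Center')
   AND (MAX(Tat) IS NULL OR (MAX(Tat) >= 0 AND MAX(Tat) <= 28))
ORDER BY AuthNum;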

Related

Total customer per reporting date without union

I would like to run a report that shows the total number of customers per reporting date. Here is how I need the data to look:
My original dataset looks like this (please see the query): to calculate the number of customers I need to use the start and end dates: if Start_Date > reporting_date and End_Date <= reporting_date, then the row counts as a customer.
I was able to develop a script, but it only gives me the total number of customers for a single reporting date.
select '2022-10-31' reporting_date, count(case when Start_Date>'2022-10-31' and End_Date<='2022-10-31' then Customer_ID end)
from (values ('2022-10-14','2022-8-19','0010Y654012P6KuQAK')
, ('2022-3-15','2022-9-14','0011v65402PoSpVAAV')
, ('2021-1-11','2022-10-11','0010Y654012P6DuQAK')
, ('2022-12-1','2022-5-14','0011v65402u7muLAAQ')
, ('2021-1-30','2022-3-14','0010Y654012P6DuQAK')
, ('2022-10-31','2022-2-14','0010Y654012P6PJQA0')
, ('2021-10-31','US','0010Y654012P6PJQA0')
, ('2021-5-31','2022-5-14','0011v65402x8cjqAAA')
, ('2022-6-2','2022-1-13','0010Y654016OqkJQAS')
, ('2022-1-1','2022-11-11','0010Y654016OqIaQAK')
) a(Start_Date ,End_Date ,Customer_ID)
Is there a way to amend the code with a cross join or another workaround to get the total customers per reporting date without doing many unions?
select '2022-10-31' reporting_date, count(case when Start_Date>'2022-10-31' and End_Date<='2022-10-31' then Customer_ID end)
from (values ('2022-10-14','2022-8-19','0010Y654012P6KuQAK')
, ('2022-3-15','2022-9-14','0011v65402PoSpVAAV')
, ('2021-1-11','2022-10-11','0010Y654012P6DuQAK')
, ('2022-12-1','2022-5-14','0011v65402u7muLAAQ')
, ('2021-1-30','2022-3-14','0010Y654012P6DuQAK')
, ('2022-10-31','2022-2-14','0010Y654012P6PJQA0')
, ('2021-10-31','US','0010Y654012P6PJQA0')
, ('2021-5-31','2022-5-14','0011v65402x8cjqAAA')
, ('2022-6-2','2022-1-13','0010Y654016OqkJQAS')
, ('2022-1-1','2022-11-11','0010Y654016OqIaQAK')
) a(Start_Date ,End_Date ,Customer_ID)
UNION ALL
select '2022-9-30' reporting_date, count(case when Start_Date>'2022-9-30' and End_Date<='2022-9-30' then Customer_ID end)
from (values ('2022-10-14','2022-8-19','0010Y654012P6KuQAK')
, ('2022-3-15','2022-9-14','0011v65402PoSpVAAV')
, ('2021-1-11','2022-10-11','0010Y654012P6DuQAK')
, ('2022-12-1','2022-5-14','0011v65402u7muLAAQ')
, ('2021-1-30','2022-3-14','0010Y654012P6DuQAK')
, ('2022-10-31','2022-2-14','0010Y654012P6PJQA0')
, ('2021-10-31','US','0010Y654012P6PJQA0')
, ('2021-5-31','2022-5-14','0011v65402x8cjqAAA')
, ('2022-6-2','2022-1-13','0010Y654016OqkJQAS')
, ('2022-1-1','2022-11-11','0010Y654016OqIaQAK')
) a(Start_Date ,End_Date ,Customer_ID)
It is possible to provide date ranges as a separate table/subquery, join to the actual data and perform grouping:
select s.start_d, s.end_d, COUNT(Customer_ID) AS total
FROM (SELECT '2022-10-31'::DATE, '2022-10-31'::DATE
UNION SELECT '2022-09-30', '2022-09-30')
AS s(start_d, end_d)
LEFT JOIN (values ('2022-10-14','2022-8-19','0010Y654012P6KuQAK')
, ('2022-3-15','2022-9-14','0011v65402PoSpVAAV')
, ('2021-1-11','2022-10-11','0010Y654012P6DuQAK')
, ('2022-12-1','2022-5-14','0011v65402u7muLAAQ')
, ('2021-1-30','2022-3-14','0010Y654012P6DuQAK')
, ('2022-10-31','2022-2-14','0010Y654012P6PJQA0')
, ('2021-10-31','2021-10-31','0010Y654012P6PJQA0')
, ('2021-5-31','2022-5-14','0011v65402x8cjqAAA')
, ('2022-6-2','2022-1-13','0010Y654016OqkJQAS')
, ('2022-1-1','2022-11-11','0010Y654016OqIaQAK')
) a(Start_Date ,End_Date ,Customer_ID)
ON a.Start_Date>s.start_d and a.End_Date<=s.end_d
GROUP BY s.start_d, s.end_d;
Output:
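As an aside, since the question asks about a cross join specifically: the same counting can be written as a CROSS JOIN between the list of reporting dates and the data, with the date test kept inside the conditional COUNT. A rough sketch (only two of the sample rows repeated, and with explicit casts so the text literals compare as dates):
SELECT d.reporting_date
     , COUNT(CASE WHEN CAST(a.Start_Date AS date) >  CAST(d.reporting_date AS date)
                   AND CAST(a.End_Date   AS date) <= CAST(d.reporting_date AS date)
                  THEN a.Customer_ID END) AS total
FROM (VALUES ('2022-10-31'), ('2022-09-30')) AS d(reporting_date)
CROSS JOIN (VALUES ('2022-10-14', '2022-8-19', '0010Y654012P6KuQAK')
                 , ('2022-3-15',  '2022-9-14', '0011v65402PoSpVAAV')
                 -- ... remaining sample rows from the question ...
           ) AS a(Start_Date, End_Date, Customer_ID)
GROUP BY d.reporting_date;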

How to use a SUM, LEFT JOIN, group by and order by in one SQL statement

I am trying to construct a query I can use within my ASP.NET code that pulls from a database and then exports it into an Excel file. My goal is to have SQL do most of the work before I iterate onto my worksheet. This is my code before using SUM and GROUP BY:
SELECT EOD_Rental_Fees.*, POSH5_Prod_CoreBankingDetails.description as TotalFeeAmount
FROM EOD_Rental_Fees
LEFT JOIN POSH5_Prod_CoreBankingDetails ON EOD_Rental_Fees.CoreBankingID = POSH5_Prod_CoreBankingDetails.ID
WHERE DateProcessed >= '2018-07-01 00:00:00.000'
AND DateProcessed <= '2018-08-30 00:00:00.000'
ORDER BY description, DateProcessed;
This is the result of it:
When I add SUM or GROUP BY like so
Select EOD_Rental_Fees.*, POSH5_Prod_CoreBankingDetails.description, (Select SUM (EOD_Rental_Fees.TotalFee) as TotalFeeAmount) from
EOD_Rental_Fees
LEFT JOIN POSH5_Prod_CoreBankingDetails ON EOD_Rental_Fees.CoreBankingID = POSH5_Prod_CoreBankingDetails.ID
WHERE DateProcessed >= '2018-07-01 00:00:00.000'
AND DateProcessed <= '2018-08-30 00:00:00.000'
Group By Currency
Order By description, DateProcessed;
I get the following error. I am trying to have a column that shows TotalFeeAmount, grouped by Currency.
From what I can see my query looks fine. What am I doing wrong to cause this?
What I am trying to achieve is something like this:
You keep adding to the question, which makes it difficult to answer. In the latest image you are displaying a report, not a query result, so I am going to ignore the sub-totals; that is something for your presentation layer to deal with.
"From what I can see my query looks fine." Sorry, it isn't fine, because it produces SQL errors. That error actually tells you that if you want to use GROUP BY you must specify which columns to group by, and your query does not do that.
To avoid that error, every "non-aggregating" column needs to be spelled out in the GROUP BY clause, like so (note you cannot use * for this):
SELECT
rf.id
, rf.year
, rf.month
, rf.DateProcessed
, rf.CoreBankingID
, rf.MerchantRecordID
, rf.DeployedDate
, rf.TerminalRecordID
, rf.RecoveredDate
, rf.MonthlyFee
, rf.IsProRated
, rf.DaysActive
, rf.TotalFee
, rf.IsPinPad
, rf.Currency
, det.description
, sum(rf.TotalFee) as TotalFeeAmount
FROM EOD_Rental_Fees AS rf
LEFT JOIN POSH5_Prod_CoreBankingDetails as det ON rf.CoreBankingID = det.ID
WHERE rf.DateProcessed >= '2018-07-01 00:00:00.000'
AND rf.DateProcessed < '2018-09-01 00:00:00.000'
GROUP BY
rf.id
, rf.year
, rf.month
, rf.DateProcessed
, rf.CoreBankingID
, rf.MerchantRecordID
, rf.DeployedDate
, rf.TerminalRecordID
, rf.RecoveredDate
, rf.MonthlyFee
, rf.IsProRated
, rf.DaysActive
, rf.TotalFee
, rf.IsPinPad
, rf.Currency
, det.description
ORDER BY
det.description
, rf.DateProcessed
However I suspect this isn't going to produce the wanted outcome because you are after both details and summary at the same time which GROUP BY isn't designed to achieve.
I think you will find that using SUM() OVER() will be closer to what you need, but exactly how you need the PARTITION BY subclause to work isn't clear to me. Notwithstanding that, this may work for you:
SELECT
rf.id
, rf.year
, rf.month
, rf.DateProcessed
, rf.CoreBankingID
, rf.MerchantRecordID
, rf.DeployedDate
, rf.TerminalRecordID
, rf.RecoveredDate
, rf.MonthlyFee
, rf.IsProRated
, rf.DaysActive
, rf.TotalFee
, rf.IsPinPad
, rf.Currency
, det.description
, sum(rf.TotalFee) over(partition by rf.CoreBankingID, rf.Currency) as TotalFeeAmount
FROM EOD_Rental_Fees AS rf
LEFT JOIN POSH5_Prod_CoreBankingDetails as det ON rf.CoreBankingID = det.ID
WHERE rf.DateProcessed >= '2018-07-01 00:00:00.000'
AND rf.DateProcessed < '2018-09-01 00:00:00.000'
ORDER BY
det.description
, rf.DateProcessed
Notes:
Use table aliases to simplify your queries.
When joining tables, include the table aliases in all column references.
I subtly changed the way the date range works: always use a combination of >= with <, where the upper boundary is "the next day". With this approach you cover any data rows that carry both a date and a time.
Here is how you can do it by using GROUP BY ROLLUP:
Select
DESCRIPTION
, currency
, DateProcessed
, SUM(EOD_Rental_Fees.TotalFee)
from
EOD_Rental_Fees
LEFT JOIN POSH5_Prod_CoreBankingDetails
ON EOD_Rental_Fees.CoreBankingID = POSH5_Prod_CoreBankingDetails.ID
WHERE
DateProcessed >= '2018-07-01 00:00:00.000'
AND DateProcessed <= '2018-08-30 00:00:00.000'
GROUP BY ROLLUP (DESCRIPTION, currency, DateProcessed)
Order By
description,DateProcessed;
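For readers who have not used it: GROUP BY ROLLUP adds subtotal rows for each prefix of the column list plus a grand total, with NULL in the rolled-up columns, and GROUPING() distinguishes those marker NULLs from real ones. A tiny, self-contained sketch with made-up values:
SELECT currency
     , GROUPING(currency) AS is_grand_total   -- 1 only on the grand-total row
     , SUM(fee)           AS total_fee
FROM (VALUES ('USD', 10.00), ('USD', 5.00), ('EUR', 7.50)) AS t(currency, fee)
GROUP BY ROLLUP (currency);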

Performance issue using IsNull function in the Select statement

I have a financial application. I have ViewHistoricInstrumentValue which has rows like this
instrument1, date1, price, grossValue, netValue
instrument2, date1, price, grossValue, netValue
...
instrument1, date2, price, grossValue, netValue
...
My views are complicated, but the db itself is small (4000 transactions). ViewHistoricInstrumentValue executed in less than 1 second before I added the following CTE to the view. After that it takes 26s. ActualEvaluationPrice is the price for instrumentX at dateY. If this value is missing from the HistoricPrice table, then I find the previous price for instrumentX.
, UsedEvaluationPriceCte AS (
SELECT *
, isnull(ActualEvaluationPrice,
(select top 1 HistoricPrice.Price -- PreviousPrice
from HistoricPrice JOIN ValidDate
on HistoricPrice.DateId = ValidDate.Id
and HistoricPrice.InstrumentId = StartingCte.InstrumentId
and ValidDate.[Date] < StartingCte.DateValue
order by ValidDate.[Date]))
as UsedEvaluationPrice
FROM StartingCte
)
My problem is that the execution time increased needlessly. Right now the HistoricPrice table has no missing values, so ActualEvaluationPrice is never null, and the previous price should never need to be determined.
ViewHistoricInstrumentValue returns 1815 rows. One other mystery is that the first query takes 26s, but the second only 2s.
SELECT * FROM [ViewHistoricInstrumentValue]
SELECT top(2000) * FROM [ViewHistoricInstrumentValue]
Appendix
The execution plan: https://www.dropbox.com/s/5st69uhjkpd3b5y/IsNull.sqlplan?dl=0
The same plan: https://www.brentozar.com/pastetheplan/?id=rk9bK1Wiv
The view:
ALTER VIEW [dbo].[ViewHistoricInstrumentValue] AS
WITH StartingCte AS (
SELECT
HistoricInstrumentValue.DateId
, ValidDate.Date as DateValue
, TransactionId
, TransactionId AS [Row]
, AccountId
, AccountName
, ViewTransaction.InstrumentId
, ViewTransaction.InstrumentName
, OpeningDate
, OpeningPrice
, Price AS ActualEvaluationPrice
, ClosingDate
, Amount
, isnull(ViewTransaction.FeeValue, 0) as FeeValue
, HistoricInstrumentValue.Id AS Id
FROM ViewBriefHistoricInstrumentValue as HistoricInstrumentValue
JOIN ValidDate on HistoricInstrumentValue.DateId = ValidDate.Id
JOIN ViewTransaction ON ViewTransaction.Id = HistoricInstrumentValue.TransactionId
left JOIN ViewHistoricPrice ON ViewHistoricPrice.DateId = HistoricInstrumentValue.DateId AND
ViewHistoricPrice.InstrumentId = ViewTransaction.InstrumentId
)
, UsedEvaluationPriceCte AS (
SELECT *
, isnull(ActualEvaluationPrice,
(select top 1 HistoricPrice.Price -- PreviousPrice
from HistoricPrice JOIN ValidDate
on HistoricPrice.DateId = ValidDate.Id
and HistoricPrice.InstrumentId = StartingCte.InstrumentId
and ValidDate.[Date] < StartingCte.DateValue
order by ValidDate.[Date]))
as UsedEvaluationPrice
FROM StartingCte
)
, GrossEvaluationValueCte AS (
SELECT *
, Amount * UsedEvaluationPrice AS GrossEvaluationValue
, (UsedEvaluationPrice - OpeningPrice) * Amount AS GrossCapitalGains
FROM UsedEvaluationPriceCte
)
, CapitalGainsTaxCte AS (
SELECT *
, dbo.MyMax(GrossCapitalGains * 0.15, 0) AS CapitalGainsTax
FROM GrossEvaluationValueCte
)
, IsOpenCte AS (
SELECT
DateId
, DateValue
, TransactionId
, [Row]
, AccountId
, AccountName
, InstrumentId
, InstrumentName
, OpeningDate
, OpeningPrice
, ActualEvaluationPrice
, UsedEvaluationPrice
, ClosingDate
, Amount
, GrossEvaluationValue
, GrossCapitalGains
, CapitalGainsTax
, FeeValue
, GrossEvaluationValue - CapitalGainsTax - FeeValue AS NetEvaluationValue
, GrossCapitalGains - CapitalGainsTax - FeeValue AS NetUnrealizedGains
, CASE WHEN ClosingDate IS NULL OR DateValue < ClosingDate
THEN CAST(1 AS BIT)
ELSE CAST(0 AS BIT)
END
AS IsOpen
, convert(NVARCHAR, DateValue, 20) + cast([Id] AS NVARCHAR(MAX)) AS Temp
, Id
FROM CapitalGainsTaxCte
)
Select * from IsOpenCte
I have no idea what your query is supposed to be doing. But this process:
ActualEvaluationPrice is the price for instrumentX at dateY. If this value is missing from HistoricPrice table then I find the previous price for instrumentX.
is handled easily with lag():
select vhiv.*,
coalesce(vhiv.ActualEvaluationPrice,
lag(vhiv.ActualEvaluationPrice) over (partition by vhiv.InstrumentId order by DateValue)
) as UsedEvaluationPrice
from ViewHistoricInstrumentValue vhiv;
Note: If you need to filter out certain dates by joining to ValidDates, you can include the JOIN in the query. However, that is not part of the problem statement.
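If the fallback has to stay a lookup against HistoricPrice inside the view (rather than lag() over the view's own rows), another way to express "the most recent price before this date" is OUTER APPLY instead of a scalar subquery inside ISNULL; whether it is faster depends entirely on indexing, so treat this as a sketch built on the same StartingCte:
SELECT s.*,
       ISNULL(s.ActualEvaluationPrice, prev.Price) AS UsedEvaluationPrice
FROM StartingCte AS s
OUTER APPLY (
    SELECT TOP (1) hp.Price
    FROM HistoricPrice AS hp
    JOIN ValidDate AS vd ON hp.DateId = vd.Id
    WHERE hp.InstrumentId = s.InstrumentId
      AND vd.[Date] < s.DateValue
    ORDER BY vd.[Date] DESC   -- most recent earlier price, per the stated intent
) AS prev;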

SQL Adjustment to query for selecting specific times

I have the following query below to select points from a database. I need to make one slight adjustment: I only want points from '2013-09'. I have tried simply adding AND "Time" LIKE '2013-09%', but that doesn't seem to work, as it produces 0 records. I also know for a fact that the database contains records matching this year, as I have used the query below (with the time selection part removed) to select all the records. What might be the issue?
; WITH positions AS (
SELECT MMSI
, Message_ID
, "Time"
, Latitude
, Longitude
FROM dbo.DecodedCSVMessages_Staging
WHERE Message_ID IN (1, 3)
AND Latitude > 45
AND Latitude < 85
AND Longitude < -50
AND Longitude > -141
AND "Time" LIKE '2013-09%' <- this is where I'd put it
)
, details AS (
SELECT MMSI
, Ship_Type
, Vessel_Name
, IMO
, Row_Number() OVER (PARTITION BY MMSI ORDER BY "Time" DESC) As row_num
FROM dbo.DecodedCSVMessages_Staging
WHERE Message_ID = 5
)
SELECT positions.MMSI
, positions.Message_ID
, positions."Time"
, details.Ship_Type
, details.Vessel_Name
, details.IMO
, positions.Latitude
, positions.Longitude
FROM positions
INNER
JOIN details
ON details.MMSI = positions.MMSI
There is no reason to use LIKE; change this line to:
AND [Time] > '2013-09-01'
If you only want values from September 2013 you can try this:
AND "Time" >= '2013-09-01'
AND "Time" < '2013-10-01'

SQL query help to generate data

Below is the query I created to get certain item numbers, quantity ordered, price, and other fields from the database. The problem is that sometimes an order doesn't contain 20 item numbers but only 2. Now my question is whether it's possible to fill the empty slots with other, random item numbers from the DB. It doesn't need to be correct, because it's just for testing.
So can anybody help?
select
t.*,
-- THE THREE SUMVAT VALUES BELOW ARE VERY IMPORTANT. THEY ARE ONLY CORRECT HOWEVER WHEN THERE ARE NO NULL VALUES INVOLVED IN THE MATH,
-- I.E. WHEN THERE ARE 20 ITEMS/QTYS/PRICES INVOLVED WITH A CERTAIN ORDER_NO
((t.QTY1*t.PRICE1)+(t.QTY2*t.PRICE2)+(t.QTY3*t.PRICE3)+(t.QTY4*t.PRICE4)+(t.QTY5*t.PRICE5)) SUMVAT0, -- example: 5123.45 <- lines 1-5: Q*P
((t.QTY6*t.PRICE6)+(t.QTY7*t.PRICE7)+(t.QTY8*t.PRICE8)+(t.QTY9*t.PRICE9)+(t.QTY10*t.PRICE10)+(t.QTY11*t.PRICE11)+(t.QTY12*t.PRICE12)+(t.QTY13*t.PRICE13)+(t.QTY14*t.PRICE14)+(t.QTY15*t.PRICE15))
SUMVAT6, -- example: 1234.56 <- lines 6-15: Q*P
((t.QTY16*t.PRICE16)+(t.QTY17*t.PRICE17)+(t.QTY18*t.PRICE18)+(t.QTY19*t.PRICE19)+(t.QTY20*t.PRICE20)) SUMVAT19 -- example: 4567.89 <- lines 16-20: Q*P
from (
select
(to_char(p.vdate, 'YYYYMMDD') || to_char(sysdate, 'HH24MISS')) DT,
(to_char(p.vdate, 'YYYY-MM-DD') ||'T' || to_char(sysdate, 'HH24:MI:') || '00') DATETIME,
(to_char(orh.written_date, 'YYYY-MM-DD') ||'T00:00:00') DATETIME2,
orh.supplier FAKE_GLN,
y.*
from (
select
x.order_no ORDNO
, max(decode(r,1 ,x.item,null)) FAKE_GTIN1
, max(decode(r,2 ,x.item,null)) FAKE_GTIN2
, max(decode(r,3 ,x.item,null)) FAKE_GTIN3
, max(decode(r,4 ,x.item,null)) FAKE_GTIN4
, max(decode(r,5 ,x.item,null)) FAKE_GTIN5
, max(decode(r,6 ,x.item,null)) FAKE_GTIN6
, max(decode(r,7 ,x.item,null)) FAKE_GTIN7
, max(decode(r,8 ,x.item,null)) FAKE_GTIN8
, max(decode(r,9 ,x.item,null)) FAKE_GTIN9
, max(decode(r,10,x.item,null)) FAKE_GTIN10
, max(decode(r,11,x.item,null)) FAKE_GTIN11
, max(decode(r,12,x.item,null)) FAKE_GTIN12
, max(decode(r,13,x.item,null)) FAKE_GTIN13
, max(decode(r,14,x.item,null)) FAKE_GTIN14
, max(decode(r,15,x.item,null)) FAKE_GTIN15
, max(decode(r,16,x.item,null)) FAKE_GTIN16
, max(decode(r,17,x.item,null)) FAKE_GTIN17
, max(decode(r,18,x.item,null)) FAKE_GTIN18
, max(decode(r,19,x.item,null)) FAKE_GTIN19
, max(decode(r,20,x.item,null)) FAKE_GTIN20
, max(decode(r,1 ,x.qty_ordered,null)) QTY1
, max(decode(r,2 ,x.qty_ordered,null)) QTY2
, max(decode(r,3 ,x.qty_ordered,null)) QTY3
, max(decode(r,4 ,x.qty_ordered,null)) QTY4
, max(decode(r,5 ,x.qty_ordered,null)) QTY5
, max(decode(r,6 ,x.qty_ordered,null)) QTY6
, max(decode(r,7 ,x.qty_ordered,null)) QTY7
, max(decode(r,8 ,x.qty_ordered,null)) QTY8
, max(decode(r,9 ,x.qty_ordered,null)) QTY9
, max(decode(r,10,x.qty_ordered,null)) QTY10
, max(decode(r,11,x.qty_ordered,null)) QTY11
, max(decode(r,12,x.qty_ordered,null)) QTY12
, max(decode(r,13,x.qty_ordered,null)) QTY13
, max(decode(r,14,x.qty_ordered,null)) QTY14
, max(decode(r,15,x.qty_ordered,null)) QTY15
, max(decode(r,16,x.qty_ordered,null)) QTY16
, max(decode(r,17,x.qty_ordered,null)) QTY17
, max(decode(r,18,x.qty_ordered,null)) QTY18
, max(decode(r,19,x.qty_ordered,null)) QTY19
, max(decode(r,20,x.qty_ordered,null)) QTY20
, max(decode(r,1 ,x.unit_cost,null)) PRICE1
, max(decode(r,2 ,x.unit_cost,null)) PRICE2
, max(decode(r,3 ,x.unit_cost,null)) PRICE3
, max(decode(r,4 ,x.unit_cost,null)) PRICE4
, max(decode(r,5 ,x.unit_cost,null)) PRICE5
, max(decode(r,6 ,x.unit_cost,null)) PRICE6
, max(decode(r,7 ,x.unit_cost,null)) PRICE7
, max(decode(r,8 ,x.unit_cost,null)) PRICE8
, max(decode(r,9 ,x.unit_cost,null)) PRICE9
, max(decode(r,10,x.unit_cost,null)) PRICE10
, max(decode(r,11,x.unit_cost,null)) PRICE11
, max(decode(r,12,x.unit_cost,null)) PRICE12
, max(decode(r,13,x.unit_cost,null)) PRICE13
, max(decode(r,14,x.unit_cost,null)) PRICE14
, max(decode(r,15,x.unit_cost,null)) PRICE15
, max(decode(r,16,x.unit_cost,null)) PRICE16
, max(decode(r,17,x.unit_cost,null)) PRICE17
, max(decode(r,18,x.unit_cost,null)) PRICE18
, max(decode(r,19,x.unit_cost,null)) PRICE19
, max(decode(r,20,x.unit_cost,null)) PRICE20
from (
select
rank() over (partition by oh.order_no order by ol.item asc) r,
oh.supplier,
oh.order_no,
oh.written_date,
ol.item,
ol.qty_ordered,
ol.unit_cost
from
ordhead oh
JOIN ordloc ol ON oh.order_no = ol.order_no
where
-- count(numrows) = 1500
not unit_cost is null
-- and ol.order_no in (6181,6121)
) x
group by x.order_no
) y
JOIN ordhead orh ON orh.order_no = y.ORDNO,
period p
) t
;
Without being able to really test this, you might try something like this. Replace the inline view 'x' with this:
FROM (
WITH q AS (
SELECT LEVEL r, TO_CHAR(TRUNC(dbms_random.value*1000,0)) item
, TRUNC(dbms_random.value*100,0) qty_ordered
, TRUNC(dbms_random.value*10,2) unit_cost
FROM dual CONNECT BY LEVEL <= 20
)
SELECT COALESCE(x1.r, q.r) r, supplier, order_no, written_date
, COALESCE(x1.item, q.item) item
, COALESCE(x1.qty_ordered, q.qty_ordered) qty_ordered
, COALESCE(x1.unit_cost, q.unit_cost) unit_cost
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY oh.order_no ORDER BY ol.item ASC) r
, oh.supplier
, oh.order_no
, oh.written_date
, ol.item
, ol.qty_ordered
, ol.unit_cost
FROM ordhead oh JOIN ordloc ol ON oh.order_no = ol.order_no
WHERE NOT unit_cost IS NULL) x1 RIGHT JOIN q ON x1.r = q.r
) x
GROUP BY x.order_no
The WITH clause will give you a table with 20 rows of random data. Outer join that with your old 'x' data and you will be guaranteed 20 rows of data. You might not need to cast the item as a varchar2, depending on your data. (N.B., I finally found a query where it makes sense to use a RIGHT JOIN. See this SO question.)
I'm not quite sure what you're trying to do with the GROUP BY and MAX stuff. In the future it would be helpful to condense your examples into something others can easily test: a minimal case that gets your point across.
I also incorporated @Kevin's good suggestion to use ROW_NUMBER instead of RANK.
Very difficult to understand...
I think you might be OK if you put a 0 instead of null in the price values:
, max(decode(r,18,x.unit_cost,0)) PRICE18
and
, max(decode(r,20,x.qty_ordered,0)) QTY20
Then at least the math should work.
RANK will not guarantee a sequential count of the items in the groups; there may be gaps when you have several rows with the same value.
For a decent explanation see:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2920665938600
I think you need to use ROW_NUMBER.
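To illustrate the difference (a made-up three-row example in Oracle syntax, not from the original post): with two rows sharing the same item value, RANK leaves a gap while ROW_NUMBER stays sequential, which is what the pivot by r relies on.
SELECT item,
       RANK()       OVER (ORDER BY item) AS rnk,  -- 1, 1, 3 : gap after the tie
       ROW_NUMBER() OVER (ORDER BY item) AS rn    -- 1, 2, 3 : always sequential
FROM (
  SELECT 'A' AS item FROM dual UNION ALL
  SELECT 'A' FROM dual UNION ALL
  SELECT 'B' FROM dual
);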