How to remove records when ID has at least one related measure = 0 - qlikview

Essentially, with reference to the table below, I want to exclude all matching IDs (in this case C123) because it has at least one UsedResources value equal to 0.
Any help or advice would be much appreciated.
PersonalID  ID_Holder  AssigmentTags  UsedResources
C123        Kratos     AS001          0
C123        Kratos     AS999          15
C123        Kratos     AS542          20
P567        Zesus      AS874          25
P567        Zesus      AS123          10
P567        Zesus      AS983          5

Script
You can flag these records and then use the flag to filter them out in the UI:
RawData:
Load * Inline [
PersonalID, ID_Holder, AssigmentTags, UsedResources
C123, Kratos, AS001, 0
C123, Kratos, AS999, 15
C123, Kratos, AS542, 20
P567, Zesus, AS874, 25
P567, Zesus, AS123, 10
P567, Zesus, AS983, 5
];

// Flag each PersonalID with 1 when all of its UsedResources values are > 0
join
Load
    PersonalID,
    if(MinUsedResources > 0, 1, 0) as HasNonZeroResources
;
// Preceding load: the minimum UsedResources per PersonalID
Load distinct
    PersonalID,
    min(UsedResources) as MinUsedResources
Resident RawData
Group By PersonalID
;
Once the app is reloaded, the HasNonZeroResources field can be used in expressions:
With Set analysis:
count( {< HasNonZeroResources = {1} >} AssigmentTags)
Without Set analysis:
// not sure how effective this is
count( AssigmentTags ) * HasNonZeroResources
Expression
One way is to only include the PersonalID values for which the minimum of UsedResources is greater than 0:
count( {< PersonalID = {"=min(UsedResources) > 0"} >} AssigmentTags)
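If you'd rather drop the rows in the script instead of flagging them, here is a minimal sketch using exists(); table and field names are reused from the script above, and the approach is untested:
// Collect the IDs that appear with a zero UsedResources value
ZeroIDs:
Load distinct PersonalID as ZeroPersonalID
Resident RawData
Where UsedResources = 0;

// Keep only rows whose ID never appeared with a zero
CleanData:
NoConcatenate
Load *
Resident RawData
Where not exists(ZeroPersonalID, PersonalID);

Drop Tables RawData, ZeroIDs;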

Related

ORA-00937: not a single-group group function PROBLEM

I'm trying to find the number of transactions per value band for each transaction type, using the brackets 1-100 and 101-200:
SELECT TO_CHAR(TRANSDATE,'MM-YY') MONTH
, AGENT_NUMBER
, COUNT(DISTINCT RECEIVER) NUM_OF_CUST
, COMMAND_ID
, BRACKETS
FROM (
SELECT TRANSDATE
, AGENT_NUMBER
, RECEIVER
, COMMAND_ID
, CASE
WHEN COUNT(DISTINCT RECEVIVER) BETWEEN 1 AND 100 THEN '1-100' ,
WHEN COUNT(DISTINCT RECEIVER) BETWEEN 101 AND 200 THEN '101-200'
ELSE '0' END BRACKETS
FROM TRANSACTION#ABSDB
WHERE RESULT_CODE = 'DONE'
AND TO_CHAR (TRANSDATE,''MM-YY) = '06-21')
GROUP BY TO_CHAR(TRANSDATE,'MM-YY')
, AGENT_NUMBER , COMMAND_ID , BRACKETS
You need a GROUP BY on the inner sub-query for the COUNT(DISTINCT ...) (and there was at least one typo, maybe two, see the inline comments):
SELECT MONTH
, AGENT_NUMBER
, COUNT(DISTINCT RECEIVER) NUM_OF_CUST
, COMMAND_ID
, BRACKETS
FROM (
SELECT TO_CHAR(TRANSDATE, 'MM-YY') AS month
, AGENT_NUMBER
, RECEIVER
, COMMAND_ID
, CASE
WHEN COUNT(DISTINCT RECEVIVER) BETWEEN 1 AND 100 THEN '1-100'
-- ^ Is this spelt correctly?
WHEN COUNT(DISTINCT RECEIVER) BETWEEN 101 AND 200 THEN '101-200'
ELSE '0' END BRACKETS
FROM TRANSACTION#ABSDB
WHERE RESULT_CODE = 'DONE'
AND TO_CHAR(TRANSDATE, 'MM-YY') = '06-21' -- Also the quotes were wrong here
GROUP BY TO_CHAR(TRANSDATE, 'MM-YY')
, AGENT_NUMBER
, RECEIVER
, COMMAND_ID
)
GROUP BY MONTH
, AGENT_NUMBER
, COMMAND_ID
, BRACKETS;
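For context, ORA-00937 is raised whenever a select list mixes an aggregate with a non-aggregated column that is missing from the GROUP BY. A minimal reproduction (some_table is hypothetical):
-- Raises ORA-00937: col1 is neither aggregated nor listed in GROUP BY
SELECT col1, COUNT(*) FROM some_table;

-- Fine: every non-aggregated column appears in the GROUP BY
SELECT col1, COUNT(*) FROM some_table GROUP BY col1;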

Bigquery similar query different output

I have 2 standard SQL queries in BigQuery. They are:
Query1:
select sfcase.case_id
, sfuser.user_id
, sfcase_create_date
, sfcase_status
, sfcase_origin
, sfcategory_category1
, sfcategory_category2
, sfcase_priority
, sftime_elapsedmin
, sftime_targetmin
, sfcase_sla_closemin
, if(count(sfcomment.parentid)=0,"0"
,if(count(sfcomment.parentid)=1,"1"
,if(count(sfcomment.parentid)=2,"2"
,"3"))) as comment_response
from(
select id as case_id
, timestamp_add(createddate, interval 7 hour) as sfcase_create_date
, status as sfcase_status
, origin as sfcase_origin
, priority as sfcase_priority
, case when status = 'Closed' then timestamp_diff(timestamp_add(closeddate, interval 7 hour),timestamp_add(createddate, interval 7 hour),minute)
end as sfcase_sla_closemin
, case_category__c
from `some_of_my_dataset.cs_case`
) sfcase
left join(
select upper(x1st_category__c) as sfcategory_category1
, upper(x2nd_category__c) as sfcategory_category2
, id
from `some_of_my_dataset.cs_case_category`
) sfcategory
on sfcategory.id = sfcase.case_category__c
left join(
select parentid as parentid
from `some_of_my_dataset.cs_case_comment`
) sfcomment
on sfcase.case_id = sfcomment.parentid
left join(
select ELAPSEDTIMEINMINS as sftime_elapsedmin
, TARGETRESPONSEINMINS as sftime_targetmin
, caseid
from `some_of_my_dataset.cs_case_milestone`
)sftime
on sfcase.case_id = sftime.caseid
left join(
select id as user_id
, createddate
from `some_of_my_dataset.cs_user`
)sfuser
on date(sfuser.createddate) = date(sfcase.sfcase_create_date)
group by 1
, 2
, 3
, 4
, 5
, 6
, 7
, 8
, 9
, 10
, 11
Query2:
select sfcase.id as case_id
, sfuser.id as user_id
, timestamp_add(sfcase.createddate, interval 7 hour) as sf_create_date
, sfcase.status as sf_status
, sfcase.origin as sf_origin
, upper(sfcategory.x1st_category__c) as sf_category1
, sfcategory.x2nd_category__c as sf_category2
, sfcase.priority as sf_priority
, sftime.ELAPSEDTIMEINMINS as sf_elapsedresponsemin
, sftime.TARGETRESPONSEINMINS as sf_targetresponsemin
, case when sfcase.status = 'Closed' then timestamp_diff(timestamp_add(sfcase.closeddate, interval 7 hour),timestamp_add(sfcase.createddate, interval 7 hour),minute)
end as sla_closemin
, if(count(sfcomment.parentid)=0,"0"
,if(count(sfcomment.parentid)=1,"1"
,if(count(sfcomment.parentid)=2,"2"
,"3"))) as comment_response
from `some_of_my_dataset.cs_case` as sfcase
left join `some_of_my_dataset.cs_case_category` as sfcategory
on sfcategory.id = sfcase.case_category__c
left join `some_of_my_dataset.cs_case_comment` as sfcomment
on sfcase.id = sfcomment.parentid
left join `some_of_my_dataset.cs_case_milestone` as sftime
on sfcase.id = sftime.caseid
left join `some_of_my_dataset.cs_user` as sfuser
on date(sfuser.createddate) = date(sfcase.createddate)
group by 1
, 2
, 3
, 4
, 5
, 6
, 7
, 8
, 9
, 10
, 11
I tried to run them at the same time. Query1 runs faster and returns fewer rows, while Query2 takes longer and returns more rows. Both Query1 and Query2 have 12 columns.
Why do they return different results?
Which query should I use?
Update: renamed my dataset.
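One difference is visible in the queries themselves: they do not join sfuser on the same date. Side by side (predicates copied from the queries above):
-- Query1 joins on the +7-hour-shifted case date:
on date(sfuser.createddate) = date(sfcase.sfcase_create_date)  -- createddate + 7h

-- Query2 joins on the raw case date:
on date(sfuser.createddate) = date(sfcase.createddate)
Cases created within 7 hours of midnight land on different calendar dates under the two predicates, so the joins match different user rows; Query1 also applies upper() to the second category while Query2 does not, which changes the grouping. Differing outputs are therefore expected rather than a BigQuery anomaly.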

Oracle SQL- Flag records based on record's date vs history

This is my first post on the forum. Usually I am able to find what I need, but to tell the truth, I am not sure how to phrase this question correctly. Please accept my apologies if there is already an answer on the forum and I missed it.
I am running the following code in an Oracle database via Benthic Software:
SELECT
T1."REGION"
, T1."COUNTRY"
, T1."IDNum"
, T1."CUSTOMER"
, T1."BUSSINESS"
, T3."FISCALYEARMONTH"
, T3."FISCALYEAR"
, SUM(T4."VALUE")
,"HISTORICAL_PURCHASE_FLAG"
FROM
"DATABASE"."SALES" T4
, "DATABASE"."CUSTOMER" T1
, "DATABASE"."PRODUCT" T2
, "DATABASE"."TIME" T3
WHERE
T4."CUSTOMERID" = T1."CUSTOMERID"
AND T4."PRODUCTID" = T2."PRODUCTID"
AND T4."DATEID" = T3."DATEID"
AND T3."FISCALYEAR" IN ('2016')
AND T1."COUNTRY" IN ('ENGLAND', 'France')
GROUP BY
T1."REGION"
, T1."COUNTRY"
, T1."IDNum"
, T1."CUSTOMER"
, T1."BUSSINESS"
, T3."FISCALYEARMONTH"
, T3."FISCALYEAR"
;
This query provides me with information on transactions. As you can see above, I would like to add a column named "HISTORICAL_PURCHASE_FLAG".
I would like the query to take CUSTOMER and FISCALYEARMONTH. Then, I would like to check if there are any transactions registered for the CUSTOMER, up to 2 years in the past.
So let's say I get the following result:
LineNum  REGION  COUNTRY  IDNum  CUSTOMER            BUSSINESS  FISCALYEARMONTH  FISCALYEAR  VALUE      HISTORICAL_PURCHASE_FLAG
1        Europe  ENGLAND  255    Abraxo Cleaner Co.  Chemicals  201605           2016        34,567.00
2        Europe  FRANCE   123    Metal Trade         Heavy      201602           2016        12,500.00
3        Europe  ENGLAND  255    Abraxo Cleaner Co.  Chemicals  201601           2016         8,400.00
LineNum 1 shows a transaction for Abraxo Cleaner Co. registered in 201605, and LineNum 3 is also for Abraxo Cleaner Co., registered in 201601. What I need the query to do is flag LineNum 1 as 'Existing', because a previous transaction was registered.
On the other hand, LineNum 3 was the first time a transaction was registered for Abraxo Cleaner Co., so that line would be flagged as 'New'.
To sum up, I would like each row of data to be treated individually, checking whether there are any previous records for the CUSTOMER within FISCALYEARMONTH minus 24 months.
Thank you in advance for the help.
You can use the LAG function:
SELECT
"REGION"
, "COUNTRY"
, "IDNum"
, "CUSTOMER"
, "BUSSINESS"
, "FISCALYEARMONTH"
, "FISCALYEAR"
, SUM("VALUE")
, MAX(CASE WHEN to_date(prev_fym,'YYYYMM') >= ADD_MONTHS (to_date("FISCALYEARMONTH",'YYYYMM'), -24) THEN 'Existing'
ELSE NULL END) "HISTORICAL_PURCHASE_FLAG"
FROM
(
SELECT
T1."REGION"
, T1."COUNTRY"
, T1."IDNum"
, T1."CUSTOMER"
, T1."BUSSINESS"
, T3."FISCALYEARMONTH"
, T3."FISCALYEAR"
, T4."VALUE"
, LAG ("FISCALYEARMONTH", 1) OVER (PARTITION BY T1."IDNum" ORDER BY T3."FISCALYEARMONTH" DESC) prev_fym
FROM
"DATABASE"."SALES" T4
, "DATABASE"."CUSTOMER" T1
, "DATABASE"."PRODUCT" T2
, "DATABASE"."TIME" T3
WHERE
T4."CUSTOMERID" = T1."CUSTOMERID"
AND T4."PRODUCTID" = T2."PRODUCTID"
AND T4."DATEID" = T3."DATEID"
AND T1."COUNTRY" IN ('ENGLAND', 'France')
AND T3."FISCALYEAR" IN ('2014','2015','2016')
)
WHERE "FISCALYEAR" IN ('2016')
GROUP BY
"REGION"
, "COUNTRY"
, "IDNum"
, "CUSTOMER"
, "BUSSINESS"
, "FISCALYEARMONTH"
, "FISCALYEAR"
;
Using a simplified "input" table... You can use the LAG() analytic function and a comparison condition to populate your last column. I assume your fiscalyearmonth is a number; if it is a character field, wrap fiscalyearmonth in TO_NUMBER(). (It would be much better if you stored these as true Oracle dates, perhaps DATE '2016-06-01' instead of 201606, but I worked with what you have currently, and took advantage of the fact that in numeric format, "24 months ago" simply means "subtract 200".)
with inputs (linenum, idnum, fiscalyearmonth) as (
select 1, 255, 201605 from dual union all
select 2, 123, 201602 from dual union all
select 3, 255, 201601 from dual union all
select 4, 255, 201210 from dual
)
select linenum, idnum, fiscalyearmonth,
case when fiscalyearmonth
- lag(fiscalyearmonth)
over (partition by idnum order by fiscalyearmonth) < 200
then 'Existing' else 'New' end as flag
from inputs
order by linenum;
LINENUM IDNUM FISCALYEARMONTH FLAG
---------- ---------- --------------- --------
1 255 201605 Existing
2 123 201602 New
3 255 201601 New
4 255 201210 New
Another solution might be to outer join "DATABASE"."SALES" a second time as T5, filtering the fiscal year via WHERE to < T4."FISCALYEAR" - 2. If the column is NULL, the record is new; if the outer join finds a value, the record is historic. A sketch follows below.
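Here is a hedged sketch of that self-join idea against the question's tables, reusing the numeric YYYYMM trick from the answer above (subtract 200 for 24 months); untested:
-- Hypothetical sketch: hist carries any earlier sale by the same customer
-- within the prior 24 months; no match (NULL) means the row is 'New'.
SELECT s."CUSTOMERID"
     , t."FISCALYEARMONTH"
     , CASE WHEN hist."CUSTOMERID" IS NULL THEN 'New' ELSE 'Existing' END
         AS "HISTORICAL_PURCHASE_FLAG"
FROM "DATABASE"."SALES" s
JOIN "DATABASE"."TIME" t
  ON s."DATEID" = t."DATEID"
LEFT JOIN (SELECT DISTINCT s2."CUSTOMERID", t2."FISCALYEARMONTH"
             FROM "DATABASE"."SALES" s2
             JOIN "DATABASE"."TIME" t2 ON s2."DATEID" = t2."DATEID") hist
  ON hist."CUSTOMERID" = s."CUSTOMERID"
 AND TO_NUMBER(hist."FISCALYEARMONTH") <  TO_NUMBER(t."FISCALYEARMONTH")
 AND TO_NUMBER(hist."FISCALYEARMONTH") >= TO_NUMBER(t."FISCALYEARMONTH") - 200
WHERE t."FISCALYEAR" = '2016';
Note that a customer with several prior months will match several hist rows; collapsing with DISTINCT in the outer query (or rewriting the join as an EXISTS) avoids that fan-out.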
You can achieve this using the row_number() function as below; modify as per your need. I assumed 2 years means the previous 24 months from sysdate.
You can run the sub-queries separately to check how each one works.
Select
"REGION"
,"COUNTRY"
,"IDNum"
,"CUSTOMER"
,"BUSSINESS"
,"FISCALYEARMONTH"
,"FISCALYEAR"
,"VALUE"
, ( case when ( TXNNO = 1 or TOTAL_TXN_LAST24MTH = 0 ) then 'New' else 'Existing' end ) as "HISTORICAL_PURCHASE_FLAG" -- if no txn in last 24 month or its first txn then 'new' else 'existing'
from
(
select
SubQry."REGION"
, SubQry."COUNTRY"
, SubQry."IDNum"
, SubQry."CUSTOMER"
, SubQry."BUSSINESS"
, SubQry."FISCALYEARMONTH"
, SUBQRY."FISCALYEAR"
, SUBQRY."VALUE"
, ROW_NUMBER() over (partition by SUBQRY."REGION", SUBQRY."COUNTRY", SUBQRY."IDNum", SUBQRY."CUSTOMER", SUBQRY."BUSSINESS" order by SUBQRY."FISCALYEARMONTH") as TXNNO
, SUM(case when (TO_NUMBER(TO_CHAR(sysdate,'YYYYMM')) - SUBQRY."FISCALYEARMONTH") < 24 then 1 else 0 end) over (partition by SUBQRY."REGION", SUBQRY."COUNTRY", SUBQRY."IDNum", SUBQRY."CUSTOMER", SUBQRY."BUSSINESS") as TOTAL_TXN_LAST24MTH -- analytic SUM, so no GROUP BY is needed here
From
(
SELECT
T1."REGION"
, T1."COUNTRY"
, T1."IDNum"
, T1."CUSTOMER"
, T1."BUSSINESS"
, T3."FISCALYEARMONTH"
, T3."FISCALYEAR"
, SUM(T4."VALUE") as VALUE
FROM
"DATABASE"."SALES" T4
, "DATABASE"."CUSTOMER" T1
, "DATABASE"."PRODUCT" T2
, "DATABASE"."TIME" T3
WHERE
T4."CUSTOMERID" = T1."CUSTOMERID"
AND T4."PRODUCTID" = T2."PRODUCTID"
AND T4."DATEID" = T3."DATEID"
AND T3."FISCALYEAR" IN ('2016')
AND T1."COUNTRY" IN ('ENGLAND', 'France')
GROUP BY
T1."REGION"
, T1."COUNTRY"
, T1."IDNum"
, T1."CUSTOMER"
, T1."BUSSINESS"
, T3."FISCALYEARMONTH"
, T3."FISCALYEAR"
) SUBQRY
);

BQ command line: Query too large

I am running a query from the command line to store the result in a new table. In this query I have several subqueries, each accessing multiple tables with TABLE_DATE_RANGE.
For each table stub there is one table per day, so there are 4 subqueries, each accessing 180 tables (90 days across two TABLE_DATE_RANGE calls). This amounts to 720 tables total, so I should not hit the 1k table limit.
I have hit the 1k table limit before and got an error stating "too many tables" or similar.
This query, however, gives me the error "Query too large". As you can see below, I do allow large results. Does anyone know a solution to this?
bq query -n0 --allow_large_results --replace --destination_table="cdate-prod:crm_adhoc.tmp_email_details_event_date" 'select event_date
,contact_id
,message_name
,message_name_join
,message_id
,email
,REGEXP_EXTRACT(email,r'([^#]*$)') as email_domain
,REGEXP_EXTRACT(REGEXP_EXTRACT(email,r'([^#]*$)'),r'(^[^\.]*)') as email_provider
,sent
,sent_unique_hlp
,open
,open_unique_hlp
,open_unique_msg_hlp
,click
,click_unique_hlp
,click_unique_msg_hlp
,soft_bounce
,medium_bounce
,hard_bounce
,activity
,type
,case when type = 1 then 'PPM'
when type = 2 then 'NPM'
when type = 3 then 'PENDING'
when type = 4 then 'CB'
when type = 5 then 'REDEBIT'
when type = 6 then 'INTCO'
when type = 7 then 'EXTCO'
else 'XX'
end as type_str
from
(select send_date as event_date
,contact_id
,message_name
,substr(message_name,7) as message_name_join
,message_id
,email
, 1 as sent
, contact_id as sent_unique_hlp
, 0 as open
, string('') as open_unique_hlp
, string('') as open_unique_msg_hlp
, 0 as click
, string('') as click_unique_hlp
, string('') as click_unique_msg_hlp
, 0 as soft_bounce
, 0 as medium_bounce
, 0 as hard_bounce
, IFNULL(activity,0) as activity
, IFNULL(type,0) as type
from TABLE_DATE_RANGE(crm_data.campaign_messages,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day")),
TABLE_DATE_RANGE(crm_data.interface_messages,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day"))) ms,
(select open_date as event_date
,contact_id
,message_name
,substr(message_name,7) as message_name_join
,message_id
,email
, 0 as sent
, string('')as sent_unique_hlp
, 1 as open
, contact_id as open_unique_hlp
, concat(contact_id,string(TIMESTAMP_TO_MSEC(send_date))) open_unique_msg_hlp
, 0 as click
, string('') as click_unique_hlp
, string('') as click_unique_msg_hlp
, 0 as soft_bounce
, 0 as medium_bounce
, 0 as hard_bounce
, IFNULL(activity,0) as activity
, IFNULL(type,0) as type
from TABLE_DATE_RANGE(crm_data.interface_openings,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day")),
TABLE_DATE_RANGE(crm_data.campaign_openings,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day"))) op,
(select click_date as event_date
,contact_id
,message_name
,substr(message_name,7) as message_name_join
,message_id
,email
, 0 as sent
, string('')as sent_unique_hlp
, 0 as open
, string('') as open_unique_hlp
, string('') as open_unique_msg_hlp
, 1 as click
, contact_id as click_unique_hlp
, concat(contact_id,string(TIMESTAMP_TO_MSEC(send_date))) click_unique_msg_hlp
, 0 as soft_bounce
, 0 as medium_bounce
, 0 as hard_bounce
, IFNULL(activity,0) as activity
, IFNULL(type,0) as type
from TABLE_DATE_RANGE(crm_data.interface_clicks,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day")),
TABLE_DATE_RANGE(crm_data.campaign_clicks,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day"))) cl,
(select bounce_date as event_date
,contact_id
,message_name
,substr(message_name,7) as message_name_join
,message_id
,email
, 0 as sent
, string('')as sent_unique_hlp
, 0 as open
, string('') as open_unique_hlp
, string('') as open_unique_msg_hlp
, 0 as click
, string('') as click_unique_hlp
, string('') as click_unique_msg_hlp
,case when bounce_category = 1 then 1 end soft_bounce
,case when bounce_category = 2 then 1 end medium_bounce
,case when bounce_category in (3,4,5) then 1 end hard_bounce
, IFNULL(activity,0) as activity
, IFNULL(type,0) as type
from TABLE_DATE_RANGE(crm_data.interface_bounces,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day")),
TABLE_DATE_RANGE(crm_data.campaign_bounces,date_add(CURRENT_DATE(),-90,"day"),date_add(CURRENT_DATE(),-1,"day"))) bo'
Waiting on bqjob_r71fbdcc95fa950e5_0000014f82f52d18_1 ... (0s) Current status: RUNNING
Waiting on bqjob_r71fbdcc95fa950e5_0000014f82f52d18_1 ... (1s) Current status: RUNNING
Waiting on bqjob_r71fbdcc95fa950e5_0000014f82f52d18_1 ... (1s) Current status: DONE
Error in query string: Error processing job
'cdate-prod:bqjob_r71fbdcc95fa950e5_0000014f82f52d18_1': Query too large
My guess, from what I know:
A query string can only be up to a fixed number of characters long. The query presented here is shorter than that, but...
TABLE_DATE_RANGE works by internally expanding the query to list every in-range table name explicitly. This is usually fine, but...
This query refers to 720 tables, so the expansion adds roughly 720 * length(table_name) characters. That pushes it over the limit.
Suggestion: Could you union older tables into monthly entities instead of daily ones?
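A hedged sketch of that consolidation, assuming the daily tables follow the usual YYYYMMDD-suffix convention (the dates below are hypothetical): bq cp accepts a comma-separated source list, and -a appends to the destination, so daily tables can be folded into monthly ones:
# Hypothetical: fold two daily tables into one monthly table; loop over days and stubs.
bq cp -a crm_data.campaign_messages20150101,crm_data.campaign_messages20150102 \
    crm_data.campaign_messages_201501
The monthly tables can then be referenced directly instead of via TABLE_DATE_RANGE, cutting the expansion factor by roughly 30x.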

SQL query help to generate data

Below is the query I created to get certain item numbers, quantities ordered, prices and other fields from the database. The problem is that sometimes an order doesn't contain 20 item numbers but only 2. My question is whether it's possible to fill the empty slots with other item numbers picked at random from the DB. It doesn't need to be correct because it's just for testing.
So can anybody help?
select
t.*,
-- THE THREE SUMVAT VALUES BELOW ARE VERY IMPORTANT. THEY ARE ONLY CORRECT HOWEVER WHEN THERE ARE NO NULL VALUES INVOLVED IN THE MATH,
-- I.E. WHEN THERE ARE 20 ITEMS/QTYS/PRICES INVOLVED WITH A CERTAIN ORDER_NO
((t.QTY1*t.PRICE1)+(t.QTY2*t.PRICE2)+(t.QTY3*t.PRICE3)+(t.QTY4*t.PRICE4)+(t.QTY5*t.PRICE5)) SUMVAT0, -- example: 5123.45 <- lines 1-5: Q*P
((t.QTY6*t.PRICE6)+(t.QTY7*t.PRICE7)+(t.QTY8*t.PRICE8)+(t.QTY9*t.PRICE9)+(t.QTY10*t.PRICE10)+(t.QTY11*t.PRICE11)+(t.QTY12*t.PRICE12)+(t.QTY13*t.PRICE13)+(t.QTY14*t.PRICE14)+(t.QTY15*t.PRICE15))
SUMVAT6, -- example: 1234.56 <- lines 6-15: Q*P
((t.QTY16*t.PRICE16)+(t.QTY17*t.PRICE17)+(t.QTY18*t.PRICE18)+(t.QTY19*t.PRICE19)+(t.QTY20*t.PRICE20)) SUMVAT19 -- example: 4567.89 <- lines 16-20: Q*P
from (
select
(to_char(p.vdate, 'YYYYMMDD') || to_char(sysdate, 'HH24MISS')) DT,
(to_char(p.vdate, 'YYYY-MM-DD') ||'T' || to_char(sysdate, 'HH24:MI:') || '00') DATETIME,
(to_char(orh.written_date, 'YYYY-MM-DD') ||'T00:00:00') DATETIME2,
orh.supplier FAKE_GLN,
y.*
from (
select
x.order_no ORDNO
, max(decode(r,1 ,x.item,null)) FAKE_GTIN1
, max(decode(r,2 ,x.item,null)) FAKE_GTIN2
, max(decode(r,3 ,x.item,null)) FAKE_GTIN3
, max(decode(r,4 ,x.item,null)) FAKE_GTIN4
, max(decode(r,5 ,x.item,null)) FAKE_GTIN5
, max(decode(r,6 ,x.item,null)) FAKE_GTIN6
, max(decode(r,7 ,x.item,null)) FAKE_GTIN7
, max(decode(r,8 ,x.item,null)) FAKE_GTIN8
, max(decode(r,9 ,x.item,null)) FAKE_GTIN9
, max(decode(r,10,x.item,null)) FAKE_GTIN10
, max(decode(r,11,x.item,null)) FAKE_GTIN11
, max(decode(r,12,x.item,null)) FAKE_GTIN12
, max(decode(r,13,x.item,null)) FAKE_GTIN13
, max(decode(r,14,x.item,null)) FAKE_GTIN14
, max(decode(r,15,x.item,null)) FAKE_GTIN15
, max(decode(r,16,x.item,null)) FAKE_GTIN16
, max(decode(r,17,x.item,null)) FAKE_GTIN17
, max(decode(r,18,x.item,null)) FAKE_GTIN18
, max(decode(r,19,x.item,null)) FAKE_GTIN19
, max(decode(r,20,x.item,null)) FAKE_GTIN20
, max(decode(r,1 ,x.qty_ordered,null)) QTY1
, max(decode(r,2 ,x.qty_ordered,null)) QTY2
, max(decode(r,3 ,x.qty_ordered,null)) QTY3
, max(decode(r,4 ,x.qty_ordered,null)) QTY4
, max(decode(r,5 ,x.qty_ordered,null)) QTY5
, max(decode(r,6 ,x.qty_ordered,null)) QTY6
, max(decode(r,7 ,x.qty_ordered,null)) QTY7
, max(decode(r,8 ,x.qty_ordered,null)) QTY8
, max(decode(r,9 ,x.qty_ordered,null)) QTY9
, max(decode(r,10,x.qty_ordered,null)) QTY10
, max(decode(r,11,x.qty_ordered,null)) QTY11
, max(decode(r,12,x.qty_ordered,null)) QTY12
, max(decode(r,13,x.qty_ordered,null)) QTY13
, max(decode(r,14,x.qty_ordered,null)) QTY14
, max(decode(r,15,x.qty_ordered,null)) QTY15
, max(decode(r,16,x.qty_ordered,null)) QTY16
, max(decode(r,17,x.qty_ordered,null)) QTY17
, max(decode(r,18,x.qty_ordered,null)) QTY18
, max(decode(r,19,x.qty_ordered,null)) QTY19
, max(decode(r,20,x.qty_ordered,null)) QTY20
, max(decode(r,1 ,x.unit_cost,null)) PRICE1
, max(decode(r,2 ,x.unit_cost,null)) PRICE2
, max(decode(r,3 ,x.unit_cost,null)) PRICE3
, max(decode(r,4 ,x.unit_cost,null)) PRICE4
, max(decode(r,5 ,x.unit_cost,null)) PRICE5
, max(decode(r,6 ,x.unit_cost,null)) PRICE6
, max(decode(r,7 ,x.unit_cost,null)) PRICE7
, max(decode(r,8 ,x.unit_cost,null)) PRICE8
, max(decode(r,9 ,x.unit_cost,null)) PRICE9
, max(decode(r,10,x.unit_cost,null)) PRICE10
, max(decode(r,11,x.unit_cost,null)) PRICE11
, max(decode(r,12,x.unit_cost,null)) PRICE12
, max(decode(r,13,x.unit_cost,null)) PRICE13
, max(decode(r,14,x.unit_cost,null)) PRICE14
, max(decode(r,15,x.unit_cost,null)) PRICE15
, max(decode(r,16,x.unit_cost,null)) PRICE16
, max(decode(r,17,x.unit_cost,null)) PRICE17
, max(decode(r,18,x.unit_cost,null)) PRICE18
, max(decode(r,19,x.unit_cost,null)) PRICE19
, max(decode(r,20,x.unit_cost,null)) PRICE20
from (
select
rank() over (partition by oh.order_no order by ol.item asc) r,
oh.supplier,
oh.order_no,
oh.written_date,
ol.item,
ol.qty_ordered,
ol.unit_cost
from
ordhead oh
JOIN ordloc ol ON oh.order_no = ol.order_no
where
-- count(numrows) = 1500
not unit_cost is null
-- and ol.order_no in (6181,6121)
) x
group by x.order_no
) y
JOIN ordhead orh ON orh.order_no = y.ORDNO,
period p
) t
;
Without being able to really test this, you might try something like this. Replace the inline view 'x' with this:
FROM (
WITH q AS (
SELECT LEVEL r, TO_CHAR(TRUNC(dbms_random.value*1000,0)) item
, TRUNC(dbms_random.value*100,0) qty_ordered
, TRUNC(dbms_random.value*10,2) unit_cost
FROM dual CONNECT BY LEVEL <= 20
)
SELECT COALESCE(x1.r, q.r) r, supplier, order_no, written_date
, COALESCE(x1.item, q.item) item
, COALESCE(x1.qty_ordered, q.qty_ordered) qty_ordered
, COALESCE(x1.unit_cost, q.unit_cost) unit_cost
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY oh.order_no ORDER BY ol.item ASC) r
, oh.supplier
, oh.order_no
, oh.written_date
, ol.item
, ol.qty_ordered
, ol.unit_cost
FROM ordhead oh JOIN ordloc ol ON oh.order_no = ol.order_no
WHERE NOT unit_cost IS NULL) x1 RIGHT JOIN q ON x1.r = q.r
) x
GROUP BY x.order_no
The WITH clause will give you a table with 20 rows of random data. Outer-join that with your old 'x' data and you are guaranteed 20 rows. You might not need to cast the item as a varchar2, depending on your data. (N.B., I finally found a query where it makes sense to use a RIGHT JOIN. See this SO question.)
I'm not quite sure what you're trying to do with the GROUP BY and MAX stuff. In the future it would be helpful to condense your examples into something others can easily test: a minimal case that gets your point across.
I also incorporated @Kevin's good suggestion to use ROW_NUMBER instead of RANK.
Very difficult to understand...
I think you might be OK if you put a 0 instead of NULL in the price and quantity values:
, max(decode(r,18,x.unit_cost,0)) PRICE18
and
, max(decode(r,20,x.qty_ordered,0)) QTY20
Then at least the math should work.
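The underlying issue, shown in isolation: a single NULL term makes the whole sum NULL, and either the decode default above or NVL avoids it. A minimal demonstration:
-- NULL propagates through arithmetic; a 0 default rescues the total.
SELECT 2*3 + NULL*5        AS bad_sum,   -- NULL
       2*3 + NVL(NULL,0)*5 AS good_sum   -- 6
FROM dual;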
RANK will not guarantee a sequential count of the items in the groups; there may be gaps when several rows share the same value.
For a decent explanation see:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2920665938600
I think you need to use ROW_NUMBER.
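A minimal illustration of the gap behaviour, with hypothetical duplicate items:
-- RANK repeats on ties and then skips; ROW_NUMBER stays sequential.
SELECT item,
       RANK()       OVER (ORDER BY item) AS rnk,  -- 1, 1, 3
       ROW_NUMBER() OVER (ORDER BY item) AS rn    -- 1, 2, 3
FROM (SELECT 'A' AS item FROM dual
      UNION ALL SELECT 'A' FROM dual
      UNION ALL SELECT 'B' FROM dual);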