Getting differing values based on 3 criteria - SQL

Looked for an answer on this for a while and not quite sure how to ask it, much less answer it. I have a setup like the one below:
warehouse | company | charge code | date     | price | other data
------------------------------------------------------------------
1         | comp 1  | boxes       | 2022-1-1 | 3.00  | blah blah
2         | comp 1  | bags        | 2022-1-1 | 1.00  | blah blah
3         | comp 1  | bag2        | 2022-2-5 | 1.00  | blah blah
1         | comp 2  | boxes       | 2022-1-1 | 3.00  | blah blah
2         | comp 2  | bags        | 2022-1-1 | 1.50  | blah blah
3         | comp 2  | bag2        | 2022-2-5 | 2.00  | blah blah
I am trying to make a query that will get me the prices that are different compared to the other companies in the same warehouse with the same charge code. For example, if it were to be run on the table above, it would result in
warehouse | company | charge code | date     | price | other data
------------------------------------------------------------------
2         | comp 1  | bags        | 2022-1-1 | 1.00  | blah blah
2         | comp 2  | bags        | 2022-1-1 | 1.50  | blah blah
3         | comp 1  | bag2        | 2022-2-5 | 1.00  | blah blah
3         | comp 2  | bag2        | 2022-2-5 | 2.00  | blah blah
Since the box prices were the same for both companies in the same warehouse, they would be removed.
My code is
SELECT * FROM
(
WITH subquery AS (***LARGE IRRELEVANT SUBQUERY***)
SELECT distinct
warehouse, company, charge_code, date, price, other1, other2
FROM
subquery
WHERE(price)
IN
(SELECT distinct i1.price
FROM M_CHG_DATE_D i1 join M_CHG_DATE_D i2
ON i1.charge_code = i2.charge_code AND
i2.warehouse = i2.warehouse AND
i1.company != i2.company)
AND (warehouse, company, charge_code, date)
IN
(SELECT warehouse, company, charge_code, MAX(date)
FROM subquery
GROUP BY warehouse, company, charge_code)
)
WHERE company IN
('comp1', 'comp2', 'comp3', ... , 'comp n')
AND
warehouse NOT IN('list of warehouses')
ORDER BY company, charge_code, warehouse
Currently, instances where the companies have the same price in the same warehouse for the same charge code are not being filtered out. I would appreciate any help. Thanks.
update with actual data from the table:
warehouse | company | charge code | date     | price | other data
------------------------------------------------------------------
C1        | GEN     | BB          | 2022-2-5 | .032  | the same
C1        | MUL     | BB          | 2022-2-5 | .032  | the same
C1        | RAV     | BB          | 2022-1-1 | .0476 | the same
C1        | RMF     | BB          | 2022-1-1 | .0476 | the same
C2        | BAM     | BB          | 2022-1-1 | .0553 | the same
C2        | BUM     | BB          | 2022-1-1 | .0553 | the same
which should result in
warehouse | company | charge code | date     | price | other data
------------------------------------------------------------------
C1        | GEN     | BB          | 2022-2-5 | .032  | the same
C1        | MUL     | BB          | 2022-2-5 | .032  | the same
C1        | RAV     | BB          | 2022-1-1 | .0476 | the same
C1        | RMF     | BB          | 2022-1-1 | .0476 | the same

Use a subquery to determine which combinations of warehouse and charge code have more than one price, and fetch the matching rows:
select warehouse, company, charge_code, chg_date, price, other1, other2
from m_chg_date_d
where (warehouse, charge_code) in
(
select warehouse, charge_code
from m_chg_date_d
group by warehouse, charge_code
having count(distinct price) > 1
);
The query assumes that (warehouse, company, charge_code) is unique, that is to say, that no company has the same charge code more than once in the same warehouse. This also means that there is no need to use distinct in the outermost query, since all rows must be unique when the result includes warehouse, company and charge code.
In the query above, I renamed the date column to chg_date, as date is a reserved word in Oracle.
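If you also want to keep only each company's most recent row per charge code before comparing prices, as the MAX(date) subquery in your original attempt does, one way (just a sketch, reusing the chg_date name and assuming the price comparison should run over the filtered rows) is:
with latest as
(
  select warehouse, company, charge_code, chg_date, price, other1, other2
  from m_chg_date_d
  where (warehouse, company, charge_code, chg_date) in
        ( select warehouse, company, charge_code, max(chg_date)
          from m_chg_date_d
          group by warehouse, company, charge_code )
)
select *
from latest
where (warehouse, charge_code) in
      ( select warehouse, charge_code
        from latest
        group by warehouse, charge_code
        having count(distinct price) > 1 )
order by company, charge_code, warehouse;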

Or, create an in-line grouping table to join the original table back with. I don't know whether the date column (I called it dt to avoid conflicts with reserved words) must be part of the grouping criteria; it works both ways ...
WITH
-- your input , don't use in final query
indata(warehouse,company,charge_code,dt,price,other_data) AS (
          SELECT 1,'comp 1','boxes',DATE '2022-1-1',3.00,'blah blah'
UNION ALL SELECT 2,'comp 1','bags' ,DATE '2022-1-1',1.00,'blah blah'
UNION ALL SELECT 3,'comp 1','bag2' ,DATE '2022-2-5',1.00,'blah blah'
UNION ALL SELECT 1,'comp 2','boxes',DATE '2022-1-1',3.00,'blah blah'
UNION ALL SELECT 2,'comp 2','bags' ,DATE '2022-1-1',1.50,'blah blah'
UNION ALL SELECT 3,'comp 2','bag2' ,DATE '2022-2-5',2.00,'blah blah'
)
-- real query starts here - replace following comma with "WITH"
,
select_criteria AS (
  SELECT
    warehouse
  , charge_code
  , dt
  , COUNT(*) AS itemcount
  , COUNT(DISTINCT price) AS pricecount
  FROM indata
  GROUP BY
    warehouse
  , charge_code
  , dt
  HAVING pricecount = itemcount
)
SELECT
  indata.*
FROM indata
JOIN select_criteria USING (
  warehouse
, charge_code
, dt
)
ORDER BY 1,2
;
-- out  warehouse | company | charge_code |     dt     | price | other_data
-- out -----------+---------+-------------+------------+-------+------------
-- out          2 | comp 1  | bags        | 2022-01-01 |  1.00 | blah blah
-- out          2 | comp 2  | bags        | 2022-01-01 |  1.50 | blah blah
-- out          3 | comp 1  | bag2        | 2022-02-05 |  1.00 | blah blah
-- out          3 | comp 2  | bag2        | 2022-02-05 |  2.00 | blah blah
-- out (4 rows)

Use EXISTS to check whether there is a matching row with another price.
SELECT *
FROM m_chg_date_d
WHERE EXISTS
(
SELECT NULL
FROM m_chg_date_d other
WHERE other.warehouse = m_chg_date_d.warehouse
AND other.charge_code = m_chg_date_d.charge_code
AND other.price <> m_chg_date_d.price
)
ORDER BY warehouse, charge_code, company;
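If the same company could ever have two rows with the same warehouse and charge code, you may also want to require that the differing price comes from a different company, as the i1.company != i2.company condition in your own join did. A hedged variant:
SELECT *
FROM m_chg_date_d
WHERE EXISTS
(
    SELECT NULL
    FROM m_chg_date_d other
    WHERE other.warehouse = m_chg_date_d.warehouse
    AND other.charge_code = m_chg_date_d.charge_code
    AND other.company <> m_chg_date_d.company   -- only compare against other companies
    AND other.price <> m_chg_date_d.price
)
ORDER BY warehouse, charge_code, company;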

Related

SQL displaying results based on a value in column

So I have 2 tables in Web SQL, one of them looks like this (there are thousands of rows):
customer_number | order_number
--------------------------------------------
1234 12
1234 13
1234 14
6793 20
6793 22
3210 53
etc.
And the other table looks like this (also thousands of rows):
customer_number | first_purchase_year
----------------------------------------------------
1234 2010
5313 2001
1632 2018
9853 2017
6793 2000
3210 2005
etc.
I have this code to select 10 customers from the first table and list all their purchases:
select top 10 * from
(select distinct t1.customer_number,
stuff((select '' + t2.order_number
from orders t2
where t1.customer_number = t2.customer_number
for xml path(''), type
).value('.','NVARCHAR(MAX)')
,1,0,'')DATA
from orders t1) a
Which outputs this:
customer_number | order_number
--------------------------------------------
1234 12 13 14
6793 20 22
3210 53
What I need to do is ONLY display 10 random customers that have first_purchase_year > 2010.
I am not sure how to check if first_purchase_year corresponding to a customer_number is greater than 2010.
Thank you!
You just need to fix the subquery in the outer from clause:
select c.customer_number,
stuff((select '' + o2.order_number
from orders o2
where c.customer_number = o2.customer_number
for xml path(''), type
).value('.','NVARCHAR(MAX)'
), 1, 0, ''
) as data
from (select top (10) c.customer_number
from table2 c
where c.first_purchase_year > 2010
) c;
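Note that TOP (10) without an ORDER BY returns an arbitrary set, not a random one. If you literally want 10 random customers (this looks like SQL Server, given FOR XML PATH), one common approach, sketched here, is to order the derived table by NEWID():
select c.customer_number,
       stuff((select '' + o2.order_number
              from orders o2
              where c.customer_number = o2.customer_number
              for xml path(''), type
             ).value('.','NVARCHAR(MAX)'), 1, 0, '') as data
from (select top (10) c.customer_number
      from table2 c
      where c.first_purchase_year > 2010
      order by newid()   -- random pick of 10 qualifying customers
     ) c;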

Select 1+ most recent rows

Given is a table with articles. The following example table contains one article in different variations:
ID ARTICLE_NUMBER STORE_ID COUNTRY TYPE VALID_FROM
----------------------------------------------------------------
100 1 22 DE A 2015-11-01
101 1 22 DE A 2015-11-02
102 1 22 DE A 2015-11-03
103 1 22 DE A 2015-11-04
104 1 22 DE B 2015-11-10
105 1 22 DE B 2015-11-11
106 1 22 DE B 2015-11-11
What I need is a query which returns just the ID of the article with
article_number = 1 AND
store_id = 22 AND
country = 'DE' AND
the latest valid_from timestamp.
So far, the query should return ID = 105 or 106 (both have the same valid_from date, but I want only the one or the other in my result, no matter which, but not both). AND: because there are two types for this article (A + B), I also need ID = 103 in my result set.
What must the query look like?
You could try a HAVING clause in your filter and selecting MAX(ID).
Or with a subselect:
SELECT [Type],
       (SELECT TOP(1) ID
        FROM dbo.articles S
        WHERE S.[Type] = A.Type
          AND S.Valid_From = MAX(A.Valid_From))
FROM dbo.articles A
WHERE
ARTICLE_NUMBER = 1
AND STORE_ID = 22
AND Country = 'DE'
-- AND Valid_FROM = (SELECT MAX(VALID_FROM) FROM dbo.articles)
GROUP BY [Type]
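An alternative (a sketch, assuming SQL Server from the TOP(1) syntax and the same dbo.articles table) is to number the rows per type by descending valid_from and keep only the first row of each type:
SELECT ID
FROM (
    SELECT ID,
           ROW_NUMBER() OVER (PARTITION BY [Type]
                              ORDER BY Valid_From DESC, ID DESC) AS rn
    FROM dbo.articles
    WHERE ARTICLE_NUMBER = 1
      AND STORE_ID = 22
      AND Country = 'DE'
) ranked
WHERE rn = 1;
For the sample data this should return 103 for type A and either 105 or 106 for type B (106 with the ID DESC tiebreaker), which matches the requirement of exactly one row per type.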

Treat multiple lines as one item in SQL Server

I have an invoice_detail table that stores all invoice information. Obviously the detail table stores each line item, to break out the invoice like this:
Ticket_Detail_ID Ticket_Number Customer_ID Service_Code Total
1 1 15 Book1 4.00
2 1 15 Book2 5.00
3 1 15 Book3 6.00
4 2 16 Book1 4.00
5 2 16 Book2 5.00
6 3 17 Book1 4.00
7 3 17 Book2 5.00
8 3 17 Book3 6.00
I want to select a count of distinct tickets, based on Ticket_Number, that do not have a "Book3" service code. So in this example I would count:
Ticket_Number 2 (Customer 16), since it did not have a "Book3"
It would return:
1
My query right now is:
Select Count (Distinct Ticket_Number) as Total
From Invoice_Details
Where Service_Code <> 'Book3'
This returns:
6
Use NOT EXISTS:
SELECT COUNT(DISTINCT Ticket_Number)
FROM dbo.YourTable T
WHERE NOT EXISTS(SELECT 1 FROM dbo.YourTable
WHERE Service_Code = 'Book3'
AND Ticket_Number = T.Ticket_Number)
Here is an sqlfiddle with a demo of this.
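An equivalent formulation (shown only as a sketch against the same hypothetical dbo.YourTable) groups by ticket and uses conditional aggregation to keep tickets that never contain 'Book3':
SELECT COUNT(*) AS Total
FROM (
    SELECT Ticket_Number
    FROM dbo.YourTable
    GROUP BY Ticket_Number
    HAVING SUM(CASE WHEN Service_Code = 'Book3' THEN 1 ELSE 0 END) = 0
) TicketsWithoutBook3;
For the sample data this also returns 1 (only Ticket_Number 2 qualifies).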

SQL - Function to divide value among rows

I normally don't do database programming, so I'm rusty on how to do certain stuff. But I have an issue where I need to take an item and, if this item is in the same location but in different placements, divide the value of said item among the placements in proportion to their counts.
Here is my table structure:
LOCATION PLACEMENT VALUE COUNT ITEM
25 12345 100 10 55555 <----
25 67890 100 20 55555 <----
25 11111 50 5 00000
25 22222 75 5 11111
In other words Item (55555) is in 2 placements and the value of this item is 100
The new value should be: PLACEMENT 12345 will be (10/30) *100 = 33.3 and PLACEMENT 67890 will be (20/30) * 100 = 66.7
Any idea how to do this in SQL or HQL?
create table new as
select item,count(distinct placement) as dist_placement,count(count)as count,sum(count) as s_count
from mytable
group by item,location;
hive> select * from new;
OK
00000 1 1 5
11111 1 1 5
55555 2 2 30
Create table final as
select b.location as location,b.placement as placement, CASE
WHEN a.count=2 and a.dist_placement=2 then cast(((b.count/a.s_count)*b.value) as double)
ELSE cast(b.value as double)
END , b.count as count, b.item as item
from new a
join mytable b
on a.item=b.item;
select * from final;
output
location placement value count item
25 12345 33.33333333333333 10 55555
25 67890 66.66666666666666 20 55555
25 11111 50.0 5 00000
25 22222 75.0 5 11111
If you give input with the same placement for the same item:
LOCATION PLACEMENT VALUE COUNT ITEM
25 12345 100 10 55555 <----
25 12345 100 20 55555 <----
25 11111 50 5 00000
25 22222 75 5 11111
output will be
LOCATION PLACEMENT VALUE COUNT ITEM
25 12345 100.0 10 55555
25 12345 100.0 20 55555
25 11111 50.0 5 00000
25 22222 75.0 5 11111
Am I right? Let me know if you have other requirements.
There may be a more efficient way to do it, but in two steps you can add up the item_count and then create the new value by dividing it through.
create table new as select
item, location, sum(count) as item_count
from old
group by item, location;
create table new2 as select
a.*,
b.item_count,
a.count/b.item_count as new_count
from old a
left join new b
on a.item=b.item and a.location=b.location;
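If your Hive version supports window functions, the two steps can be collapsed into a single pass. This is only a sketch: it redistributes the value whenever an item appears more than once in a location, whether or not the placements differ, and the value/count columns are backtick-quoted because they clash with keywords:
select location,
       placement,
       `value` * `count` / sum(`count`) over (partition by item, location) as new_value,
       `count`,
       item
from   mytable;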
Your sample table
SELECT * INTO #TEMP FROM
(
SELECT 25 LOCATION,12345 PLACEMENT,100 VALUE ,10 [COUNT], 55555 ITEM
UNION ALL
SELECT 25 , 67890 , 100 , 20,55555
UNION ALL
SELECT 25 , 11111 , 50 , 5,00000
UNION ALL
SELECT 25 , 22222 , 75 , 5, 11111
)TAB
Your result is below
SELECT *,
CAST(([COUNT]/CAST(SUM([COUNT]) OVER(PARTITION BY ITEM)AS NUMERIC(20,2)))*VALUE AS NUMERIC(20,1)) Result
FROM #TEMP
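The CAST to NUMERIC avoids integer division; against the sample rows the Result column should work out to roughly 33.3 and 66.7 for item 55555 (10/30 and 20/30 of the 100), and 50.0 and 75.0 for the single-placement items, matching the split described in the question.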

LISTAGG equivalent with windowing clause

In Oracle, the LISTAGG function allows me to use it analytically with an OVER (PARTITION BY column..) clause. However, it does not support use of windowing with the ROWS or RANGE keywords.
I have a data set from a store register (simplified for the question). Note that the register table's quantity is always 1 - one item, one transaction line.
TranID TranLine ItemId OrderID Dollars Quantity
------ -------- ------ ------- ------- --------
1 101 23845 23 2.99 1
1 102 23845 23 2.99 1
1 103 23845 23 2.99 1
1 104 23845 23 2.99 1
1 105 23845 23 2.99 1
I have to "match" this data to a table in an special order system where items are grouped by quantity. Note that the system can have the same item ID on multiple lines (components ordered may be different even if the item is the same).
ItemId OrderID Order Line Dollars Quantity
------ ------- ---------- ------- --------
23845 23 1 8.97 3
23845 23 2 5.98 2
The only way I can match this data is by order id, item id and dollar amount.
Essentially I need to get to the following result.
ItemId OrderID Order Line Dollars Quantity Tran ID Tran Lines
------ ------- ---------- ------- -------- ------- ----------
23845 23 1 8.97 3 1 101;102;103
23845 23 2 5.98 2 1 104;105
I don't specifically care if the tran lines are ordered in any way, all I care is that the dollar amounts match and that I don't "re-use" a line from the register in computing the total on the special order. I don't need the tran lines broken out into a table - this is for reporting purposes and the granularity never goes back down to the register transaction line level.
My initial thinking was that I can do this with analytic functions to do a "best match" to identify the first set of rows that match the dollar amount and quantity in the ordering system, giving me a result set like:
TranID TranLine ItemId OrderID Dollars Quantity CumDollar CumQty
------ -------- ------ ------- ------- -------- -------- ------
1 101 23845 23 2.99 1 2.99 1
1 102 23845 23 2.99 1 5.98 2
1 103 23845 23 2.99 1 8.97 3
1 104 23845 23 2.99 1 11.96 4
1 105 23845 23 2.99 1 14.95 5
So far so good. But I then try to add LISTAGG to my query:
SELECT tranid, tranline, itemid, orderid, dollars, quantity,
SUM(dollars) OVER (partition by tranid, itemid, orderid order by tranline) cumdollar,
SUM(quantity) OVER (partition by tranid, itemid, orderid order by tranline) cumqty,
LISTAGG (tranline, ';') within group (order by tranid, itemid, orderid, tranline) OVER (partition by tranid, itemid, orderid)
FROM table
I discover that it always returns a full agg instead of a cumulative agg:
TranID TranLine ItemId OrderID Dollars Quantity CumDollar CumQty ListAgg
------ -------- ------ ------- ------- -------- -------- ------ -------
1 101 23845 23 2.99 1 2.99 1 101;102;103;104;105
1 102 23845 23 2.99 1 5.98 2 101;102;103;104;105
1 103 23845 23 2.99 1 8.97 3 101;102;103;104;105
1 104 23845 23 2.99 1 11.96 4 101;102;103;104;105
1 105 23845 23 2.99 1 14.95 5 101;102;103;104;105
So this isn't useful.
I would much prefer to do this in SQL if at all possible. I am aware that I can do this with cursors & procedural logic.
Is there any way to do windowing with the LISTAGG analytic function, or perhaps another analytic function which would support this?
I'm on 11gR2.
The only way I can think of to achieve this is with a correlated subquery:
WITH CTE AS
( SELECT TranID,
TranLine,
ItemID,
OrderID,
Dollars,
Quantity,
SUM(dollars) OVER (PARTITION BY TranID, ItemID, OrderID ORDER BY TranLine) AS CumDollar,
SUM(Quantity) OVER (PARTITION BY TranID, ItemID, OrderID ORDER BY TranLine) AS CumQuantity
FROM T
)
SELECT TranID,
TranLine,
ItemID,
OrderID,
Dollars,
Quantity,
CumDollar,
CumQuantity,
( SELECT LISTAGG(Tranline, ';') WITHIN GROUP(ORDER BY CumQuantity)
FROM CTE T2
WHERE T1.CumQuantity >= T2.CumQuantity
AND T1.ItemID = T2.ItemID
AND T1.OrderID = T2.OrderID
AND T1.TranID = T2.TranID
GROUP BY tranid, itemid, orderid
) AS ListAgg
FROM CTE T1;
I realise this doesn't give the exact output you were asking for, but hopefully it is enough to overcome the problem of the cumulative LISTAGG and get you on your way.
I've set up an SQL Fiddle to demonstrate the solution.
In your example, your store register table contains 5 rows and your special order system table contains 2 rows. Your expected result set contains the two rows from your special order system table and all "tranlines" of your store register table should be mentioned in the "Tran Line" column.
This means you need to aggregate those 5 rows to 2 rows. Meaning you don't need the LISTAGG analytic function, but the LISTAGG aggregate function.
Your challenge is to join the rows of the store register table to the right row in the special order system table. You were well on your way by calculating the running sum of dollars and quantities. The only step missing is to define ranges of dollars and quantities by which you can assign each store register row to each special order system row.
Here is an example. First define the tables:
SQL> create table store_register_table (tranid,tranline,itemid,orderid,dollars,quantity)
2 as
3 select 1, 101, 23845, 23, 2.99, 1 from dual union all
4 select 1, 102, 23845, 23, 2.99, 1 from dual union all
5 select 1, 103, 23845, 23, 2.99, 1 from dual union all
6 select 1, 104, 23845, 23, 2.99, 1 from dual union all
7 select 1, 105, 23845, 23, 2.99, 1 from dual
8 /
Table created.
SQL> create table special_order_system_table (itemid,orderid,order_line,dollars,quantity)
2 as
3 select 23845, 23, 1, 8.97, 3 from dual union all
4 select 23845, 23, 2, 5.98, 2 from dual
5 /
Table created.
And the query:
SQL> with t as
2 ( select tranid
3 , tranline
4 , itemid
5 , orderid
6 , sum(dollars) over (partition by itemid,orderid order by tranline) running_sum_dollars
7 , sum(quantity) over (partition by itemid,orderid order by tranline) running_sum_quantity
8 from store_register_table srt
9 )
10 , t2 as
11 ( select itemid
12 , orderid
13 , order_line
14 , dollars
15 , quantity
16 , sum(dollars) over (partition by itemid,orderid order by order_line) running_sum_dollars
17 , sum(quantity) over (partition by itemid,orderid order by order_line) running_sum_quantity
18 from special_order_system_table
19 )
20 , t3 as
21 ( select itemid
22 , orderid
23 , order_line
24 , dollars
25 , quantity
26 , 1 + lag(running_sum_dollars,1,0) over (partition by itemid,orderid order by order_line) begin_sum_dollars
27 , running_sum_dollars end_sum_dollars
28 , 1 + lag(running_sum_quantity,1,0) over (partition by itemid,orderid order by order_line) begin_sum_quantity
29 , running_sum_quantity end_sum_quantity
30 from t2
31 )
32 select t3.itemid "ItemID"
33 , t3.orderid "OrderID"
34 , t3.order_line "Order Line"
35 , t3.dollars "Dollars"
36 , t3.quantity "Quantity"
37 , t.tranid "Tran ID"
38 , listagg(t.tranline,';') within group (order by t3.itemid,t3.orderid) "Tran Lines"
39 from t3
40 inner join t
41 on ( t.itemid = t3.itemid
42 and t.orderid = t3.orderid
43 and t.running_sum_dollars between t3.begin_sum_dollars and t3.end_sum_dollars
44 and t.running_sum_quantity between t3.begin_sum_quantity and t3.end_sum_quantity
45 )
46 group by t3.itemid
47 , t3.orderid
48 , t3.order_line
49 , t3.dollars
50 , t3.quantity
51 , t.tranid
52 /
ItemID OrderID Order Line Dollars Quantity Tran ID Tran Lines
---------- ---------- ---------- ---------- ---------- ---------- --------------------
23845 23 1 8.97 3 1 101;102;103
23845 23 2 5.98 2 1 104;105
2 rows selected.
Regards,
Rob.