Set new field priority in SELECT SQL

I have a table of bills with the following structure:
id | store_name  | sum | payment_date
---+-------------+-----+-------------
 1 | Amazon      |  10 | 11.05.2022
 2 | Amazon      |  20 | 11.05.2022
 3 | Ebay        |  15 | 11.05.2022
 4 | AppleStore  |  13 | 11.05.2022
 5 | Google Play |   6 | 11.05.2022
What I need is to select all data from the table and add a field "priority" based on the bill sum: the two rows with the highest sums get priority 1, the next two get priority 2, and all others get 0:
id | store_name  | sum | payment_date | priority
---+-------------+-----+--------------+---------
 2 | Amazon      |  20 | 11.05.2022   | 1
 3 | Ebay        |  15 | 11.05.2022   | 1
 4 | AppleStore  |  13 | 11.05.2022   | 2
 1 | Amazon      |  10 | 11.05.2022   | 2
 5 | Google Play |   6 | 11.05.2022   | 0
In addition, the table contains bills from various days (the payment_date column), and the priority should be assigned separately within each day.

Order the rows for each day and then assign priority based on the row number:
SELECT t.*,
       CASE ROW_NUMBER() OVER (PARTITION BY TRUNC(payment_date) ORDER BY sum DESC)
         WHEN 1 THEN 1
         WHEN 2 THEN 1
         WHEN 3 THEN 2
         WHEN 4 THEN 2
         ELSE 0
       END AS priority
FROM   table_name t
Which, for the sample data:
CREATE TABLE table_name (id, store_name, sum, payment_date) AS
SELECT 1, 'Amazon', 10, DATE '2022-05-11' FROM DUAL UNION ALL
SELECT 2, 'Amazon', 20, DATE '2022-05-11' FROM DUAL UNION ALL
SELECT 3, 'Ebay', 15, DATE '2022-05-11' FROM DUAL UNION ALL
SELECT 4, 'Apple Store', 13, DATE '2022-05-11' FROM DUAL UNION ALL
SELECT 5, 'Google Play', 6, DATE '2022-05-11' FROM DUAL;
Outputs:
ID | STORE_NAME  | SUM | PAYMENT_DATE        | PRIORITY
---+-------------+-----+---------------------+---------
 2 | Amazon      |  20 | 2022-05-11 00:00:00 | 1
 3 | Ebay        |  15 | 2022-05-11 00:00:00 | 1
 4 | Apple Store |  13 | 2022-05-11 00:00:00 | 2
 1 | Amazon      |  10 | 2022-05-11 00:00:00 | 2
 5 | Google Play |   6 | 2022-05-11 00:00:00 | 0
db<>fiddle here
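The CASE over the row number can also be written arithmetically, which scales better if the bucket size ever changes. A minimal sketch of the same idea against the same table_name (not part of the original answer): CEIL(rn / 2) maps row numbers 1-2 to priority 1 and 3-4 to priority 2, and anything past row 4 falls back to 0.
SELECT s.id,
       s.store_name,
       s.sum,
       s.payment_date,
       -- rows 1-4 per day get CEIL(rn / 2) = 1, 1, 2, 2; later rows get 0
       CASE WHEN s.rn <= 4 THEN CEIL(s.rn / 2) ELSE 0 END AS priority
FROM  (SELECT t.*,
              ROW_NUMBER() OVER (PARTITION BY TRUNC(payment_date) ORDER BY sum DESC) AS rn
       FROM   table_name t) s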


Create column with timeframe relative to other column in SQL

Suppose I have the following table t_1 where every row represents a day:
+------+------------+-------+
| week | date | val |
+------+------------+-------+
| 1 | 2022-02-07 | 1 | <- Monday
| 1 | 2022-02-08 | 2 |
| 1 | 2022-02-09 | 3 |
| 1 | 2022-02-10 | 4 | <- Thursday
| 1 | 2022-02-11 | 5 |
| 1 | 2022-02-12 | 6 |
| 1 | 2022-02-13 | 7 |
| 2 | 2022-02-14 | 8 | <- Monday
| 2 | 2022-02-15 | 9 |
| 2 | 2022-02-16 | 10 |
| 2 | 2022-02-17 | 11 | <- Thursday
| 2 | 2022-02-18 | 12 |
| 2 | 2022-02-19 | 13 |
| 2 | 2022-02-20 | 14 |
+------+------------+-------+
How can I create the following table t_2 from t_1?
+------------+------------+---------+----------+
| date_start | date_end   | val_cur | val_prev |
+------------+------------+---------+----------+
| 2022-02-14 | 2022-02-17 | 38      | 10       |
+------------+------------+---------+----------+
Here val_cur is defined as the sum of values of the current timeframe (i.e. the sum of values between date_start and date_end) and val_prev is defined as the sum of values of the previous timeframe (i.e. the current timeframe minus one week).
-- Bigquery Standard SQL
WITH t_1 AS
(SELECT 1 AS week, '2022-02-07' AS date, 1 AS val UNION ALL
SELECT 1, '2022-02-08', 2 UNION ALL
SELECT 1, '2022-02-09', 3 UNION ALL
SELECT 1, '2022-02-10', 4 UNION ALL
SELECT 1, '2022-02-11', 5 UNION ALL
SELECT 1, '2022-02-12', 6 UNION ALL
SELECT 1, '2022-02-13', 7 UNION ALL
SELECT 2, '2022-02-14', 8 UNION ALL
SELECT 2, '2022-02-15', 9 UNION ALL
SELECT 2, '2022-02-16', 10 UNION ALL
SELECT 2, '2022-02-17', 11 UNION ALL
SELECT 2, '2022-02-18', 12 UNION ALL
SELECT 2, '2022-02-19', 13 UNION ALL
SELECT 2, '2022-02-20', 14)
SELECT '2022-02-14' AS date_start, '2022-02-17' AS date_stop, sum(val) AS val_cur
FROM t_1
WHERE date >= '2022-02-14' AND date <= '2022-02-17'
Output:
+-----+------------+------------+---------+
| Row | date_start | date_stop | val_cur |
+-----+------------+------------+---------+
| 1 | 2022-02-14 | 2022-02-17 | 38 |
+-----+------------+------------+---------+
But how do I get the last column?
Consider the approach below:
with your_table as (
select 1 as week, date '2022-02-07' as date, 1 as val union all
select 1, '2022-02-08', 2 union all
select 1, '2022-02-09', 3 union all
select 1, '2022-02-10', 4 union all
select 1, '2022-02-11', 5 union all
select 1, '2022-02-12', 6 union all
select 1, '2022-02-13', 7 union all
select 2, '2022-02-14', 8 union all
select 2, '2022-02-15', 9 union all
select 2, '2022-02-16', 10 union all
select 2, '2022-02-17', 11 union all
select 2, '2022-02-18', 12 union all
select 2, '2022-02-19', 13 union all
select 2, '2022-02-20', 14
), timeframe as (
select date '2022-02-14' as date_start, date '2022-02-17' as date_stop
)
select date_start, date_stop,
sum(if(date between date_start and date_stop,val, 0)) as val_cur,
sum(if(date between date_start - 7 and date_stop - 7,val, 0)) as val_prev
from your_table, timeframe
group by date_start, date_stop
with output matching the desired row above (val_cur = 38, val_prev = 10).
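As a side note, date_start - 7 relies on BigQuery's date-minus-integer arithmetic; DATE_SUB spells out the same shift explicitly (an equivalent fragment, not part of the original answer):
-- equivalent to: date between date_start - 7 and date_stop - 7
sum(if(date between date_sub(date_start, interval 7 day)
                and date_sub(date_stop, interval 7 day), val, 0)) as val_prev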

Oracle SQL Count orders by customers in the last 10 days based on order id

In Oracle SQL, my goal is to get the total number of orders made by a customer in the last 10 days, shown per order id rather than aggregated per customer id.
The following situation is given:
My desired outcome is:
How can I facilitate this in the most efficient way?
You can try a self-join with a date-subtraction condition; the join condition needs to determine which rows fall within the 10-day window.
Query 1:
SELECT t1.Customer_ID,
       t1.Order_NR,
       COUNT(*) AS Number_Orders
FROM   T t1
       INNER JOIN T t2
       ON  t1.Customer_ID = t2.Customer_ID
       AND t2.Order_Date BETWEEN t1.Order_Date - 10 AND t1.Order_Date
GROUP BY t1.Customer_ID,
         t1.Order_NR
ORDER BY t1.Order_NR
Results:
| CUSTOMER_ID | ORDER_NR | NUMBER_ORDERS |
|-------------|----------|---------------|
| 100 | 1 | 1 |
| 200 | 2 | 1 |
| 300 | 3 | 1 |
| 100 | 4 | 2 |
| 100 | 5 | 2 |
| 200 | 6 | 1 |
| 700 | 7 | 1 |
| 800 | 8 | 2 |
| 800 | 9 | 2 |
| 800 | 10 | 3 |
You do not need a self-join and can simply use the COUNT analytic function with a RANGE window:
SELECT customer_id,
       order_no,
       order_date,
       COUNT(*) OVER (
         PARTITION BY customer_id
         ORDER BY order_date
         RANGE BETWEEN INTERVAL '10' DAY PRECEDING AND INTERVAL '0' DAY FOLLOWING
       ) AS number_orders
FROM   table_name
ORDER BY order_no
Which, for the sample data:
CREATE TABLE table_name (customer_id, order_no, order_date) AS
SELECT 100, 1, DATE '2021-01-01' FROM DUAL UNION ALL
SELECT 200, 2, DATE '2021-01-02' FROM DUAL UNION ALL
SELECT 300, 3, DATE '2021-01-05' FROM DUAL UNION ALL
SELECT 100, 4, DATE '2021-01-09' FROM DUAL UNION ALL
SELECT 100, 5, DATE '2021-01-15' FROM DUAL UNION ALL
SELECT 200, 6, DATE '2021-01-18' FROM DUAL UNION ALL
SELECT 700, 7, DATE '2021-01-20' FROM DUAL UNION ALL
SELECT 800, 8, DATE '2021-01-25' FROM DUAL UNION ALL
SELECT 800, 9, DATE '2021-01-25' FROM DUAL UNION ALL
SELECT 800, 10, DATE '2021-01-28' FROM DUAL;
Outputs:
CUSTOMER_ID | ORDER_NO | ORDER_DATE | NUMBER_ORDERS
------------+----------+------------+--------------
100         | 1        | 01-JAN-21  | 1
200         | 2        | 02-JAN-21  | 1
300         | 3        | 05-JAN-21  | 1
100         | 4        | 09-JAN-21  | 2
100         | 5        | 15-JAN-21  | 2
200         | 6        | 18-JAN-21  | 1
700         | 7        | 20-JAN-21  | 1
800         | 8        | 25-JAN-21  | 2
800         | 9        | 25-JAN-21  | 2
800         | 10       | 28-JAN-21  | 3
db<>fiddle here
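A detail worth noting: in RANGE mode the upper bound INTERVAL '0' DAY FOLLOWING includes peer rows with the same order_date, which is why orders 8 and 9 (both on 25-JAN-21) each count 2. The bound can equivalently be written as CURRENT ROW, which also includes peers in RANGE mode (a rewrite of just the window, not part of the original answer):
COUNT(*) OVER (
  PARTITION BY customer_id
  ORDER BY order_date
  RANGE BETWEEN INTERVAL '10' DAY PRECEDING AND CURRENT ROW
) AS number_orders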

sql group by monthly sum and sum year by month

RDBMS - Latest Oracle
I'm out of my element here. I need to organize account transaction information by account and by month, and also use another column to show summed transactions for year to date. Here is a depiction of what I'm trying to get
ACCT_ID | ACCT_MM | FISCAL_YYYY | FISCAL_MM_AMT | YTD_AMT
------------------------------------------------------------
1 | 11 | 2018 | 25 | 100
1 | 12 | 2018 | 50 | 150
1 | 01 | 2019 | 20 | 20
I know you can get FISCAL_MM_AMT with a GROUP BY on ACCT_MM and FISCAL_YYYY;
this is all I have figured out so far:
SELECT ACCT_ID,ACCT_MM,FISCAL_YYYY,SUM(NVL(ACCT_TRNSCTN_AMT,0))
FROM TBL_ACCT_DETAIL
GROUP BY ACCT_ID,ACCT_MM,FISCAL_YYYY
Now, how to combine this with an additional YTD_AMT column that adds up all totals for that year up to the current month is what has me baffled. (SQL noob here.)
Try the analytic function SUM, as follows:
SELECT T.*,
SUM(FISCAL_MM_AMT)
OVER (PARTITION BY ACCT_ID, FISCAL_YYYY
ORDER BY ACCT_MM) AS YTD_AMT
FROM
(SELECT ACCT_ID,ACCT_MM,FISCAL_YYYY,SUM(NVL(ACCT_TRNSCTN_AMT,0)) AS FISCAL_MM_AMT
FROM TBL_ACCT_DETAIL
GROUP BY ACCT_ID,ACCT_MM,FISCAL_YYYY);
Cheers!!
You can use a cumulative (windowed) sum directly on the aggregate: the inner SUM is the GROUP BY aggregate and the outer SUM is the analytic function applied to the grouped rows, so no subquery is needed:
SELECT ACCT_ID, ACCT_MM, FISCAL_YYYY,
COALESCE(SUM(ACCT_TRNSCTN_AMT), 0),
SUM(SUM(ACCT_TRNSCTN_AMT)) OVER (PARTITION BY ACCT_ID, FISCAL_YYYY ORDER BY ACCT_MM) AS YTD
FROM TBL_ACCT_DETAIL
GROUP BY ACCT_ID, ACCT_MM, FISCAL_YYYY
Do you want one additional column carrying the total for the whole year, plus a year-to-date column?
OLAP (window) functions are a prerequisite, but every respectable RDBMS should offer those by now.
Then, I think (adding what I presume should be the input), you should go:
WITH
---- this is just the input so I have example data
input( acct_id,acct_mm,fiscal_yyyy,fiscal_mm_amt) AS (
SELECT 1, 1, 2018, 5
UNION ALL SELECT 1, 3, 2018, 5
UNION ALL SELECT 1, 4, 2018, 5
UNION ALL SELECT 1, 5, 2018, 10
UNION ALL SELECT 1, 6, 2018, 10
UNION ALL SELECT 1, 7, 2018, 10
UNION ALL SELECT 1, 8, 2018, 10
UNION ALL SELECT 1, 9, 2018, 10
UNION ALL SELECT 1, 10, 2018, 10
UNION ALL SELECT 1, 11, 2018, 25
UNION ALL SELECT 1, 12, 2018, 50
UNION ALL SELECT 1, 01, 2019, 20
)
---- end of input -----
SELECT
*
, SUM(fiscal_mm_amt) OVER(
PARTITION BY acct_id,fiscal_yyyy
) AS fiscal_yy_amt
, SUM(fiscal_mm_amt) OVER(
PARTITION BY acct_id,fiscal_yyyy
ORDER BY acct_mm
) AS ytd_amt
FROM input;
-- out acct_id | acct_mm | fiscal_yyyy | fiscal_mm_amt | fiscal_yy_amt | ytd_amt
-- out ---------+---------+-------------+---------------+---------------+---------
-- out 1 | 1 | 2018 | 5 | 150 | 5
-- out 1 | 3 | 2018 | 5 | 150 | 10
-- out 1 | 4 | 2018 | 5 | 150 | 15
-- out 1 | 5 | 2018 | 10 | 150 | 25
-- out 1 | 6 | 2018 | 10 | 150 | 35
-- out 1 | 7 | 2018 | 10 | 150 | 45
-- out 1 | 8 | 2018 | 10 | 150 | 55
-- out 1 | 9 | 2018 | 10 | 150 | 65
-- out 1 | 10 | 2018 | 10 | 150 | 75
-- out 1 | 11 | 2018 | 25 | 150 | 100
-- out 1 | 12 | 2018 | 50 | 150 | 150
-- out 1 | 1 | 2019 | 20 | 20 | 20
-- out (12 rows)
-- out
-- out Time: First fetch (12 rows): 76.235 ms. All rows formatted: 76.288 ms
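Note the difference between the two windows in the query above: without an ORDER BY, the SUM spans the whole partition (fiscal_yy_amt, the full-year total of 150 for 2018), while adding ORDER BY acct_mm turns it into a running, year-to-date total (ytd_amt).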

SQL - first in first out

I want to implement FIFO (first in, first out) in my stock table.
The table looks like this:
---+------------+-----------+--------+--------------+-----------
id | shift_type | item_type | amount | name | date
---+------------+-----------+--------+--------------+-----------
1 | in | apple | 50 | apple type 1 | 2017-12-01
2 | out | apple | 30 | apple type 1 | 2017-12-02
3 | in | apple | 40 | apple type 2 | 2017-12-04
4 | in | apple | 60 | apple type 3 | 2017-12-05
5 | out | apple | 20 | apple type 1 | 2017-12-07
6 | out | apple | 10 | apple type 1 | 2017-12-07
7 | in | apple | 20 | apple type 3 | 2017-12-09
It keeps information about stock movements.
If I only wanted the oldest incoming stock, I could take the records with the lowest id or the oldest date.
But there are also "out" shifts.
So if I want to take, for example, 50 apples, the query should return 2 records:
---+------------+-----------+--------+--------------+------------+-----
id | shift_type | item_type | amount | name | date | take
---+------------+-----------+--------+--------------+------------+-----
1 | in | apple | 50 | apple type 1 | 2017-12-01 | 20
3 | in | apple | 40 | apple type 2 | 2017-12-04 | 30
because the first out (id 2) takes 30 apples, so there are 20 left from income id 1, and the rest should be taken from income id 3.
How can I implement it with SQL?
I did something here, but it needs testing with different inputs.
Given a number, say 50, in the outer SELECT, it will give you the list of stock rows you need to take apples from.
WITH view_t
AS (
SELECT 1 id
,'in' shift_type
,'apple' item_type
,50 amount
,'apple type 1' NAME
,TO_DATE('2017-12-01', 'yyyy-mm-dd') date1
FROM dual
UNION ALL
SELECT 2
,'out'
,'apple'
,30
,'apple type 1'
,TO_DATE('2017-12-02', 'yyyy-mm-dd')
FROM dual
UNION ALL
SELECT 3
,'in'
,'apple'
,40
,'apple type 2'
,TO_DATE('2017-12-04', 'yyyy-mm-dd')
FROM dual
UNION ALL
SELECT 4
,'in'
,'apple'
,60
,'apple type 3'
,TO_DATE('2017-12-05', 'yyyy-mm-dd')
FROM dual
UNION ALL
SELECT 5
,'out'
,'apple'
,30
,'apple type 2'
,TO_DATE('2017-12-07', 'yyyy-mm-dd')
FROM dual
)
SELECT id
,date1
,shift_type
,NAME
,itemrem
,nvl((
50 - LAG(itemrem) OVER (
ORDER BY id
)
), itemrem) take
FROM (
SELECT id
,date1
,shift_type
,NAME
,SUM(amtretain) OVER (
ORDER BY id
) itemrem
FROM (
SELECT id
,date1
,shift_type
,amount
,signamt
,NAME
,CASE
WHEN LEAD(shift_type) OVER (
ORDER BY id
) = 'out'
THEN LEAD(signamt) OVER (
ORDER BY id
) + signamt
ELSE signamt
END amtretain
--date1,shift_type,sum(signamt) over(order by id)
FROM (
SELECT id
,shift_type
,NAME
,item_type
,date1
,DECODE(shift_type, 'in', amount, 'out', amount * - 1) signamt
,amount
FROM view_t
)
)
WHERE shift_type = 'in'
);
The idea behind the approach is to:
convert the sign of the amount based on the shift_type,
use LEAD to net the following 'out' row's signed amount against its 'in' row,
use a running total to identify the remaining apples at each date,
and finally use LAG with the requested number of apples to calculate how much to take from each 'in' row.
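For reference, the same running-total idea can be written more compactly with LEAST/GREATEST over cumulative sums. This is a general sketch, not the query above: it assumes a table named stock_shifts with the question's columns (the date column renamed to shift_date, since DATE is a reserved word in Oracle), and it nets every recorded 'out' row before allocating the requested 50, so on the question's full sample it would take from rows 3 and 4 rather than rows 1 and 3.
WITH ins AS (
  -- cumulative 'in' quantity per item, oldest rows first
  SELECT id, item_type, name, shift_date, amount,
         SUM(amount) OVER (PARTITION BY item_type ORDER BY id) AS cum_in
  FROM   stock_shifts
  WHERE  shift_type = 'in'
), outs AS (
  -- total quantity already taken out per item
  SELECT item_type, SUM(amount) AS total_out
  FROM   stock_shifts
  WHERE  shift_type = 'out'
  GROUP BY item_type
)
SELECT i.id, i.name, i.shift_date, i.amount,
       -- the slice of this 'in' row lying between what is already consumed
       -- (total_out) and what will be consumed after taking 50 more
       LEAST(i.cum_in, o.total_out + 50)
         - GREATEST(i.cum_in - i.amount, o.total_out) AS take
FROM   ins i
JOIN   outs o ON o.item_type = i.item_type
WHERE  i.item_type = 'apple'
AND    LEAST(i.cum_in, o.total_out + 50)
         - GREATEST(i.cum_in - i.amount, o.total_out) > 0
ORDER BY i.id;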

HIVE: Finding running totals

I have a table called Program which has the following columns:
ProgDate(Date)
Episode(String)
Impression_id(int)
ProgName(String)
I want to find the total impressions for each date and episode, for which I have the query below, which works fine:
Select progdate, episode, count(distinct impression_id) Impression
from Program
where progname = 'BBC'
group by progdate, episode
order by progdate, episode;
Result:
ProgDate Episode Impression
20160919 1 5
20160920 1 15
20160921 1 10
20160922 1 5
20160923 2 25
20160924 2 10
20160925 2 25
But I also want the cumulative total for each episode. I tried searching for how to compute a running total, but what I found adds up all previous totals regardless of episode. I want a running total per episode, like below:
Date Episode Impression CumulativeImpressionsPerChannel
20160919 1 5 5
20160920 1 15 20
20160921 1 10 30
20160922 1 5 35
20160923 2 25 25
20160924 2 10 35
20160925 2 25 60
Recent versions of Hive HQL support windowed analytic functions (ref 1, ref 2), including SUM() OVER().
Assuming you have such a version, I have mimicked the syntax using PostgreSQL at SQL Fiddle:
CREATE TABLE d
(ProgDate int, Episode int, Impression int)
;
INSERT INTO d
(ProgDate, Episode, Impression)
VALUES
(20160919, 1, 5),
(20160920, 1, 15),
(20160921, 1, 10),
(20160922, 1, 5),
(20160923, 2, 25),
(20160924, 2, 10),
(20160925, 2, 25)
;
Query 1:
select
ProgDate, Episode, Impression
, sum(Impression) over(partition by Episode order by ProgDate) CumImpsPerChannel
, sum(Impression) over(order by ProgDate) CumOverall
from (
Select progdate, episode, count(distinct impression_id) Impression
from Program
where progname='BBC'
group by progdate, episode order by progdate, episode
) d
Results:
| progdate | episode | impression | cumimpsperchannel |
|----------|---------|------------|-------------------|
| 20160919 | 1 | 5 | 5 |
| 20160920 | 1 | 15 | 20 |
| 20160921 | 1 | 10 | 30 |
| 20160922 | 1 | 5 | 35 |
| 20160923 | 2 | 25 | 25 |
| 20160924 | 2 | 10 | 35 |
| 20160925 | 2 | 25 | 60 |
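One caveat if two rows could ever share the same ProgDate within an episode: the default frame for an ORDER BY window is RANGE, which treats such rows as peers and adds them to the running total together. Pinning the frame to ROWS makes the total advance strictly row by row (a hypothetical tweak, not needed for the data shown above):
sum(Impression) over(
  partition by Episode
  order by ProgDate
  rows between unbounded preceding and current row
) CumImpsPerChannel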