Replace nulls of a column with column value from another table - sql

I have data flowing from two tables, table A and table B. I'm doing an inner join on a common column from both tables and creating two new columns based on different conditions. Below is a sample dataset:
Table A
| Id | StartDate |
|-----|------------|
| 119 | 01-01-2018 |
| 120 | 01-02-2019 |
| 121 | 03-05-2018 |
| 123 | 05-08-2021 |
Table B
| Id | CodeId | Code | RedemptionDate |
|-----|--------|------|----------------|
| 119 | 1 | abc | null |
| 119 | 2 | abc | null |
| 119 | 3 | def | null |
| 119 | 4 | def | 2/3/2019 |
| 120 | 5 | ghi | 04/7/2018 |
| 120 | 6 | ghi | 4/5/2018 |
| 121 | 7 | jkl | null |
| 121 | 8 | jkl | 4/4/2019 |
| 121 | 9 | mno | 3/18/2020 |
| 123 | 10 | pqr | null |
What I'm basically doing is joining the tables on column 'Id' when StartDate > 2018 and creating two new columns - 'Unlock' by counting CodeId when RedemptionDate is null and 'Redeem' by counting CodeId when RedemptionDate is not null. Below is the SQL query:
WITH cte1 AS (
    SELECT a.Id, COUNT(b.CodeId) AS 'Unlock'
    FROM TableA AS a
    JOIN TableB AS b ON a.Id = b.Id
    WHERE YEAR(a.StartDate) >= 2018 AND b.RedemptionDate IS NULL
    GROUP BY a.Id
), cte2 AS (
    SELECT a.Id, COUNT(b.CodeId) AS 'Redeem'
    FROM TableA AS a
    JOIN TableB AS b ON a.Id = b.Id
    WHERE YEAR(a.StartDate) >= 2018 AND b.RedemptionDate IS NOT NULL
    GROUP BY a.Id
)
SELECT cte1.Id, cte1.Unlock, cte2.Redeem
FROM cte1
FULL OUTER JOIN cte2 ON cte1.Id = cte2.Id
If I break down the output of this query, result from cte1 will look like below:
| Id | Unlock |
|-----|--------|
| 119 | 3 |
| 121 | 1 |
| 123 | 1 |
And from cte2 will look like below:
| Id | Redeem |
|-----|--------|
| 119 | 1 |
| 120 | 2 |
| 121 | 2 |
The last select query will produce the following result:
| Id | Unlock | Redeem |
|------|--------|--------|
| 119 | 3 | 1 |
| null | null | 2 |
| 121 | 1 | 2 |
| 123 | 1 | null |
How can I replace the null value in Id with the value from 'b.Id'? If I try coalesce or a case statement, they create new columns. I don't want to create additional columns, rather replace the null values with the column values coming from the other table.
My final output should look like this:
| Id | Unlock | Redeem |
|-----|--------|--------|
| 119 | 3 | 1 |
| 120 | null | 2 |
| 121 | 1 | 2 |
| 123 | 1 | null |

If I'm following correctly, you can use apply with aggregation:
select a.*, b.*
from a cross apply
     (select count(RedemptionDate) as num_redeemed,          -- count() skips NULLs, so this counts redeemed codes
             count(*) - count(RedemptionDate) as num_unlock   -- the remaining rows have a NULL RedemptionDate
      from b
      where b.id = a.id
     ) b;
However, the answer to your question is to use coalesce(cte1.id, cte2.id) as id.
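Applied to the query in the question, that only means wrapping the two Id columns in the final SELECT; the alias keeps a single Id column in the output:
SELECT COALESCE(cte1.Id, cte2.Id) AS Id,   -- takes cte1.Id when present, otherwise cte2.Id
       cte1.Unlock,
       cte2.Redeem
FROM cte1
FULL OUTER JOIN cte2 ON cte1.Id = cte2.Id;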

Related

Calculate a column value backwards over a series of previous rows/RECURSIVE/CONNECTED BY

I need your help. I guess/hope there is a function for this. I found "CONNECT BY" and "WITH RECURSIVE ... AS" but neither seems to solve my problem.
GIVEN TABLES:
Table A
+------+---------+----------+
| id   | prev_id | date     |
+------+---------+----------+
| 1    |         | 20200101 |
| 23   | 1       | 20200104 |
| 34   | 23      | 20200112 |
| 41   | 34      | 20200130 |
+------+---------+----------+
Table B
+--------+-----+
| ref_id | key |
+--------+-----+
| 41     | abc |
+--------+-----+
(ref_id always points to the latest entry in table "A"; the row is updated in place, no history.)
Join Statement:
SELECT
id, prev_id, key, date
FROM A
LEFT OUTER JOIN B ON B.ref_id = A.id
GIVEN psql result set:
+------+---------+-----+----------+
| id   | prev_id | key | date     |
+------+---------+-----+----------+
| 1    |         |     | 20200101 |
| 23   | 1       |     | 20200104 |
| 34   | 23      |     | 20200112 |
| 41   | 34      | abc | 20200130 |
+------+---------+-----+----------+
DESIRED output:
+------+---------+-----+----------+
| id   | prev_id | key | date     |
+------+---------+-----+----------+
| 1    |         | abc | 20200101 |
| 23   | 1       | abc | 20200104 |
| 34   | 23      | abc | 20200112 |
| 41   | 34      | abc | 20200130 |
+------+---------+-----+----------+
The rows of the result set are connected by columns 'id' and 'prev_id'.
I want to calculate the "key" column in a reasonable time.
Keep in mind, this is a very simplified example. Normally there are many more rows and different keys and ids.
I understand that you want to carry the key from tableb along the hierarchy of each row. Here is one approach using a recursive query:
with recursive cte as (
    -- anchor: rows of tablea that have a matching key in tableb
    select a.id, a.prev_id, a.date, b.key
    from tablea a
    inner join tableb b on b.ref_id = a.id
    union all
    -- recursive step: walk up the chain via prev_id, carrying the key along
    select a.id, a.prev_id, a.date, c.key
    from cte c
    inner join tablea a on a.id = c.prev_id
)
select * from cte
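With the sample data, the anchor member picks up row 41 (the only id with a key in tableb), and each recursive step follows prev_id (41 → 34 → 23 → 1), carrying key abc along, which matches the desired output.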

Select minimum of two rows and then sort (with no complete grouping)

I have a Microsoft Access table with the following values:
id | C | D | ED | T |
---+-------+-----+------------+---+
1 | 33105 | ABC | 2020/01/04 | 1 |
2 | 33105 | ABC | 2020/01/08 | 2 |
3 | 33102 | DEF | 2020/02/01 | 2 |
4 | 34145 | GHI | 2020/02/09 | 1 |
5 | 34145 | GHI | 2020/02/10 | 2 |
6 | 34162 | JKL | 2020/02/08 | 1 |
I would like to extract, for each value of C, the row with the lowest T, and finally sort the results by date (ED) descending. So my expected result is the following:
id | C | D | ED | T |
---+-------+-----+------------+---+
4 | 34145 | GHI | 2020/02/09 | 1 |
6 | 34162 | JKL | 2020/02/08 | 1 |
3 | 33102 | DEF | 2020/02/01 | 2 |
1 | 33105 | ABC | 2020/01/04 | 1 |
What's the fastest way in SQL to do so (the table is actually pretty large)?
You can do it with NOT EXISTS:
SELECT t.*
FROM tablename AS t
WHERE NOT EXISTS (SELECT 1 FROM tablename WHERE C = t.C AND T < t.T)
Or with a correlated subquery:
SELECT t.*
FROM tablename AS t
WHERE t.T = (SELECT MIN(T) FROM tablename WHERE C = t.C)
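To also get the descending sort by date that the question asks for, append an ORDER BY to either query, for example:
SELECT t.*
FROM tablename AS t
WHERE NOT EXISTS (SELECT 1 FROM tablename WHERE C = t.C AND T < t.T)
ORDER BY t.ED DESC;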

How to fill forward time series data in Postgres

I am looking to join three tables together and fill forward null values on the resulting table.
Three tables:
Table 1 (raw.fb_historical_data) - this is the main table onto which I would like to join the other two. Each row of this table is related to one or more rows in the other two tables through a combination of the columns id, clk and timestamp (mkt_id and row_id in the other tables).
+---------------------+-----+-----+--------------+
| timestamp | clk | id | some_columns |
+---------------------+-----+-----+--------------+
| 2016-06-19 06:11:13 | 123 | 126 | a |
| 2016-06-19 06:16:13 | 124 | 127 | b |
| 2016-06-19 06:21:13 | 234 | 126 | c |
| 2016-06-19 06:41:13 | 456 | 127 | d |
| ... | ... | ... | ... |
+---------------------+-----+-----+--------------+
Table 2 (raw.fb_runner_changes) - this table essentially gives price changes for a wide range of different markets
+---------------------+--------+--------+-------+
| timestamp | row_id | mkt_id | price |
+---------------------+--------+--------+-------+
| 2016-06-19 06:11:13 | 123 | 126 | 1 |
| 2016-06-19 06:21:13 | 123 | 126 | 2 |
| 2016-06-19 06:41:13 | 123 | 126 | 3 |
| 2016-06-06 18:54:06 | 124 | 127 | 1 |
| 2016-06-06 18:56:06 | 124 | 127 | 2 |
| 2016-06-06 18:57:06 | 124 | 127 | 3 |
| ... | ... | ... | ... |
+---------------------+--------+--------+-------+
Table 3 (raw.fb_runners) - a table with extra information about market changes that I would like to join
+---------------------+--------+--------+---------------+
| timestamp | row_id | mkt_id | other_columns |
+---------------------+--------+--------+---------------+
| 2016-06-19 06:15:13 | 234 | 126 | ab |
| 2016-06-19 06:31:13 | 234 | 126 | cd |
| 2016-06-19 06:56:13 | 234 | 126 | ef |
| 2016-06-06 18:54:06 | 456 | 127 | gh |
| 2016-06-06 18:56:06 | 456 | 127 | jk |
| 2016-06-06 18:57:06 | 456 | 127 | lm |
| ... | ... | ... | ... |
+---------------------+--------+--------+---------------+
Essentially what I want to do is fill NULL information forward (ordered by timestamp) while grouping by market id.
So far, I have tried to join the tables together using
SELECT *
FROM raw.fb_historical_data AS h
LEFT JOIN raw.fb_runner_changes AS rc
ON rc.row_id = h.clk
AND rc.timestamp = h.timestamp
AND rc.mkt_id = h.id
LEFT JOIN raw.fb_runners AS r
ON r.row_id = h.clk
AND r.timestamp = h.timestamp
AND r.mkt_id = h.id
This has worked as intended, though now there are nulls in the resulting dataset, which I'd like to fill in with the last available value for that market.
In some other SQL dialects, a forward fill can be done with the window function last_value combined with ignore nulls.
Since this is not supported in PostgreSQL (see the note in the PostgreSQL window function documentation), we use a two-step workaround.
-- count(val) ignores NULLs, so val_seq stays constant from one non-null value
-- until the next; min(val) then picks the single non-null value in each group
select ts, val, val_seq, min(val) over (partition by val_seq) as val_fill_fw
from (select ts, val, count(val) over (order by ts) as val_seq
      from t
     ) t
+----+----------+---------+-------------+
| ts | val | val_seq | val_fill_fw |
+----+----------+---------+-------------+
| 1 | (null) | 0 | (null) |
| 2 | (null) | 0 | (null) |
| 3 | hello | 1 | hello |
| 4 | (null) | 1 | hello |
| 5 | (null) | 1 | hello |
| 6 | darkness | 2 | darkness |
| 7 | my | 3 | my |
| 8 | (null) | 3 | my |
| 9 | old | 4 | old |
| 10 | (null) | 4 | old |
| 11 | (null) | 4 | old |
| 12 | (null) | 4 | old |
| 13 | friend | 5 | friend |
| 14 | (null) | 5 | friend |
+----+----------+---------+-------------+
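Applied to the joined data from the question, the same idea would look roughly like the sketch below; here joined stands for the result of the LEFT JOIN query above (for example wrapped in a CTE) and price is just one example column to fill, so both names are placeholders:
-- hypothetical sketch: forward-fill "price" per market (id), ordered by timestamp
select id, timestamp, price,
       min(price) over (partition by id, price_seq) as price_filled
from (select id, timestamp, price,
             count(price) over (partition by id order by timestamp) as price_seq
      from joined
     ) j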
This seems to correctly do 'forward fill' in postgres. However I am a postgres newbie so I would appreciate feedback if it's wrong.
DROP TABLE IF EXISTS example;
create temporary table example(id int, str text, val integer);
insert into example values
    (1, 'a', null),
    (1, null, 1),
    (2, 'b', 2),
    (2, null, null);

select * from example;

select id,
       (case when str is null
             then lag(str, 1) over (order by id)
             else str
        end) as str,
       (case when val is null
             then lag(val, 1) over (order by id)
             else val
        end) as val
from example;
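One caveat with lag(..., 1): it only looks back a single row, so a run of two or more consecutive NULLs is only partially filled. The count()/min() approach above handles gaps of any length.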

SQL Insert multiple value to one key staff from two tables (try to improve)

Basically, I want to insert multiple values for a single staff member from two tables, Value_table and Staff_table, which look like this:
Staff_table
| staff_num | staff_name | staff_role |
+---------------------+------------------+------------------+
| 1 | Bill | 1 |
| 2 | James | 1 |
| 3 | Gina | 2 |
| 4 | Tim | 3 |
Value_table
| value | value_name | value_state |
+------------------+------------------+------------------+
| 123 | Food | 0 |
| 476 | Drink | 1 |
| 656 | Dinner | 1 |
| 77 | Phone | 1 |
And the result should look like this:
| staff_num | value |
+---------------------+------------------+
| 1 | 123 |
| 1 | 476 |
| 1 | 656 |
| 1 | 77 |
| 2 | 123 |
| 2 | 476 |
| 2 | 656 |
| 2 | 77 |
.
.
.
.
I found an SQL way to do it, which is the following code.
INSERT INTO Result_table
SELECT DISTINCT a.staff_code, c.func_num
FROM Staff_table a
JOIN Value_table c ON c.value BETWEEN 0 AND 656;
It works but I don't think the last line
JOIN Value_table c ON c.value BETWEEN 0 AND 656;
is a good way to select all the values from the table.
Is there a better way to do it in SQL?
Alternatively, you may use a CROSS JOIN:
INSERT INTO Result_table
SELECT DISTINCT a.staff_code, c.func_num
FROM Staff_table a
CROSS JOIN Value_table c
There's no need to restrict the value column, since it is already between 0 and 656 in the sample data.
Edit: If you want to restrict the values, add a WHERE condition, as below:
INSERT INTO Result_table
SELECT DISTINCT a.staff_code, c.func_num
FROM Staff_table a
CROSS JOIN Value_table c
WHERE c.value BETWEEN 77 AND 123
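If the staff and value identifiers are unique within their respective tables, the DISTINCT is not strictly necessary, since the cross join already produces each staff/value combination exactly once.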

Get the Id of the matched data from other table. No duplicates of ID from both tables

Here is my table A.
| Id | GroupId | StoreId | Amount |
| 1 | 20 | 7 | 15000 |
| 2 | 20 | 7 | 1230 |
| 3 | 20 | 7 | 14230 |
| 4 | 20 | 7 | 9540 |
| 5 | 20 | 7 | 24230 |
| 6 | 20 | 7 | 1230 |
| 7 | 20 | 7 | 1230 |
Here is my table B.
| Id | GroupId | StoreId | Credit |
| 12 | 20 | 7 | 1230 |
| 14 | 20 | 7 | 15000 |
| 15 | 20 | 7 | 14230 |
| 16 | 20 | 7 | 1230 |
| 17 | 20 | 7 | 7004 |
| 18 | 20 | 7 | 65523 |
I want to get this result without duplicating the Ids of either table.
I need to get the Ids of table A and table B where Amount = Credit.
| A.ID | B.ID | Amount |
| 1 | 14 | 15000 |
| 2 | 12 | 1230 |
| 3 | 15 | 14230 |
| 4 | null | 9540 |
| 5 | null | 24230 |
| 6 | 16 | 1230 |
| 7 | null | 1230 |
My problem is that when I have 2 or more rows with the same Amount in table A, I get a duplicate ID from table B, which should be null instead. Please help me. Thank you.
I think you want a left join. But this is tricky because you have duplicate amounts, but you only want one to match. The solution is to use row_number():
select . . .
from (select a.*,
             row_number() over (partition by amount order by id) as seqnum
      from a
     ) a left join
     (select b.*,
             row_number() over (partition by credit order by id) as seqnum
      from b
     ) b
     on a.amount = b.credit and a.seqnum = b.seqnum;
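Based on the desired output in the question, the elided select list would presumably be the two ids and the amount, e.g.:
select a.id as [A.ID], b.id as [B.ID], a.amount as Amount
from (select a.*,
             row_number() over (partition by amount order by id) as seqnum
      from a
     ) a left join
     (select b.*,
             row_number() over (partition by credit order by id) as seqnum
      from b
     ) b
     on a.amount = b.credit and a.seqnum = b.seqnum;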
Another approach, I think simpler and shorter :)
select ID [A.ID],
(select top 1 ID from TABLE_B where Credit = A.Amount) [B.ID],
Amount
from TABLE_A [A]
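One caveat: when the same Amount appears more than once in TABLE_A, the TOP 1 subquery returns the same B.ID for each of those rows rather than NULL, which is the duplicate problem described in the question, so the row_number() approach is safer in that case.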