Filling gaps with the next non-null value - SQL

I've been trying to find a solution to this for a few days now. I have the following dataset:
|id|order|certain_event|order_of_occurrence|
|--|-----|-------------|-------------------|
|a |1 |NULL |NULL |
|a |2 |NULL |NULL |
|a |3 |NULL |NULL |
|a |4 |NULL |NULL |
|a |5 |4 |1 |
|a |6 |NULL |NULL |
|a |7 |NULL |NULL |
|a |8 |4 |2 |
|a |9 |NULL |NULL |
The desired output consists of replacing the NULL values in the order_of_occurrence column with the next non-null value, like this:
|id|order|certain_event|order_of_occurrence|
|--|-----|-------------|-------------------|
|a |1 |NULL |1 |
|a |2 |NULL |1 |
|a |3 |NULL |1 |
|a |4 |NULL |1 |
|a |5 |4 |1 |
|a |6 |NULL |2 |
|a |7 |NULL |2 |
|a |8 |4 |2 |
|a |9 |NULL |NULL |
I've tried using a subquery to retrieve the non-null values from the order_of_occurrence column, but it returns more than one value. Like the following:
SELECT a.*,
       CASE
         WHEN a.order_of_occurrence IS NOT NULL THEN a.order_of_occurrence
         WHEN a.order_of_occurrence IS NULL THEN (SELECT b.order_of_occurrence
                                                  FROM dataset AS b
                                                  WHERE b.order_of_occurrence IS NOT NULL)
       END AS corrected_order
FROM dataset AS a
Thanks!

This is a simple task for the IGNORE NULLS option in FIRST/LAST_VALUE:
last_value(order_of_occurrence IGNORE NULLS)
over (partition by id
order by "order" DESC
rows unbounded preceding)
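For reference, here is that expression dropped into a complete query. This is only a sketch: it assumes a dialect that supports IGNORE NULLS (e.g. Oracle, Snowflake or Teradata) and reuses the table and column names from the question.
SELECT a.id,
       a."order",
       a.certain_event,
       -- carry the next non-null value backwards over the gap
       LAST_VALUE(a.order_of_occurrence IGNORE NULLS)
         OVER (PARTITION BY a.id
               ORDER BY a."order" DESC
               ROWS UNBOUNDED PRECEDING) AS corrected_order
FROM dataset AS a
ORDER BY a.id, a."order";
Rows after the last event (order = 9 in the sample) keep a NULL corrected_order, matching the desired output.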

Related

Add rows of data to each group in a Spark dataframe

I have this dataframe -
data = [(0,1,1,201505,3),
(1,1,1,201506,5),
(2,1,1,201507,7),
(3,1,1,201508,2),
(4,2,2,201750,3),
(5,2,2,201751,0),
(6,2,2,201752,1),
(7,2,2,201753,1)
]
cols = ['id','item','store','week','sales']
data_df = spark.createDataFrame(data=data,schema=cols)
display(data_df)
What I want is this:
data_new = [(0,1,1,201505,3,0),
(1,1,1,201506,5,0),
(2,1,1,201507,7,0),
(3,1,1,201508,2,0),
(4,1,1,201509,0,0),
(5,1,1,201510,0,0),
(6,1,1,201511,0,0),
(7,1,1,201512,0,0),
(8,2,2,201750,3,0),
(9,2,2,201751,0,0),
(10,2,2,201752,1,0),
(11,2,2,201753,1,0),
(12,2,2,201801,0,0),
(13,2,2,201802,0,0),
(14,2,2,201803,0,0),
(15,2,2,201804,0,0)]
cols_new = ['id','item','store','week','sales','flag',]
data_df_new = spark.createDataFrame(data=data_new,schema=cols_new)
display(data_df_new)
So basically, I want 8 weeks of data (this could also be 6 or 10) for each item-store group-by combination. Wherever the 52/53 weeks of a year end, I need the weeks to continue into the next year, as shown in the sample. I need this in PySpark. Thanks in advance!
See my attempt below. I could have made it shorter, but I felt it should be as explicit as possible, so I didn't chain the solutions. Code below:
import sys
from pyspark.sql import functions as F
from pyspark.sql import Window

spark.sql("set spark.sql.legacy.timeParserPolicy=LEGACY")

# Convert week of the year to a date
s = (data_df.withColumn("week", F.expr("cast(week as string)"))
            .withColumn("date", F.to_date(F.concat("week", F.lit("6")), "yyyywwu")))

# Put sales and dates in an array per item/store group, then create sequence
# ids covering the required expansion range per group
s = (s.groupby("item", "store")
      .agg(F.collect_list("sales").alias("sales"), F.collect_list("date").alias("date"))
      .withColumn("id", F.sequence(F.lit(0), F.lit(6))))

# Explode the dataframe back, with each item/store combination in a row
s = s.selectExpr("item", "store", "inline(arrays_zip(date,id,sales))")

# Partition window spanning start to end of each item/store combination
w = Window.partitionBy("item", "store").orderBy("id").rowsBetween(-sys.maxsize, sys.maxsize)
# Partition window per item/store/date combination; the purpose here is to
# aggregate over the null dates as a group
w1 = Window.partitionBy("item", "store", "date").orderBy("id").rowsBetween(Window.unboundedPreceding, Window.currentRow)

# Create increment values per item/store combination and get the last date
# in each item/store combination
s = (s.withColumn("increment", F.when(F.col("date").isNull(), F.row_number().over(w1) * 7).otherwise(0))
      .withColumn("date1", F.when(F.col("date").isNull(), F.max("date").over(w)).otherwise(F.col("date"))))

# Compute the week of year and drop the columns not wanted
s = (s.withColumn("weekofyear", F.expr("weekofyear(date_add(date1, cast(increment as int)))"))
      .drop("date", "increment", "date1")
      .na.fill(0))
s.show(truncate=False)
Outcome
+----+-----+---+-----+----------+
|item|store|id |sales|weekofyear|
+----+-----+---+-----+----------+
|1 |1 |0 |3 |5 |
|1 |1 |1 |5 |6 |
|1 |1 |2 |7 |7 |
|1 |1 |3 |2 |8 |
|1 |1 |4 |0 |9 |
|1 |1 |5 |0 |10 |
|1 |1 |6 |0 |11 |
|2 |2 |0 |3 |50 |
|2 |2 |1 |0 |51 |
|2 |2 |2 |1 |52 |
|2 |2 |3 |1 |1 |
|2 |2 |4 |0 |2 |
|2 |2 |5 |0 |3 |
|2 |2 |6 |0 |4 |
+----+-----+---+-----+----------+

Postgres - How to achieve UNION behaviour with UNION ALL?

I have a table with parent and child ids.
create table if not exists stack (
parent int,
child int
)
Each parent can have multiple children, and each child can in turn have multiple children of its own.
insert into stack (parent, child) values
(1,2),
(2,3),
(3,4),
(4,5),
(5,6),
(6,7),
(7,8),
(8,9),
(9,null),
(1,7),
(7,8),
(8,9),
(9,null);
The data looks like this.
|parent|child|
|------|-----|
|1 |2 |
|2 |3 |
|3 |4 |
|4 |5 |
|5 |6 |
|6 |7 |
|7 |8 |
|8 |9 |
|9 |NULL |
|1 |7 |
|7 |8 |
|8 |9 |
|9 |NULL |
I'd like to find all children. I can use a recursive CTE with a UNION.
with recursive cte as (
    select child
    from stack
    where stack.parent = 1
  union
    select stack.child
    from cte
    left join stack on cte.child = stack.parent
    where cte.child is not null
)
select * from cte;
This gives me the result I'd like to achieve.
|child|
|-----|
|2 |
|7 |
|3 |
|8 |
|4 |
|9 |
|5 |
|NULL |
|6 |
However, I'd also like to include the depth/level and the path for each node. I can do this using a different recursive CTE.
with recursive cte as (
    select
        parent,
        child,
        0 as level,
        array[parent, child] as path
    from stack
    where stack.parent = 1
  union all
    select
        stack.parent,
        stack.child,
        cte.level + 1,
        cte.path || stack.child
    from cte
    left join stack on cte.child = stack.parent
    where cte.child is not null
)
select * from cte;
That gives me this data.
|parent|child|level|path |
|------|-----|-----|--------------------|
|1 |2 |0 |{1,2} |
|1 |7 |0 |{1,7} |
|2 |3 |1 |{1,2,3} |
|7 |8 |1 |{1,7,8} |
|7 |8 |1 |{1,7,8} |
|3 |4 |2 |{1,2,3,4} |
|8 |9 |2 |{1,7,8,9} |
|8 |9 |2 |{1,7,8,9} |
|8 |9 |2 |{1,7,8,9} |
|8 |9 |2 |{1,7,8,9} |
|4 |5 |3 |{1,2,3,4,5} |
|9 | |3 |{1,7,8,9,} |
|9 | |3 |{1,7,8,9,} |
|9 | |3 |{1,7,8,9,} |
|9 | |3 |{1,7,8,9,} |
|9 | |3 |{1,7,8,9,} |
|9 | |3 |{1,7,8,9,} |
|9 | |3 |{1,7,8,9,} |
|9 | |3 |{1,7,8,9,} |
|5 |6 |4 |{1,2,3,4,5,6} |
|6 |7 |5 |{1,2,3,4,5,6,7} |
|7 |8 |6 |{1,2,3,4,5,6,7,8} |
|7 |8 |6 |{1,2,3,4,5,6,7,8} |
|8 |9 |7 |{1,2,3,4,5,6,7,8,9} |
|8 |9 |7 |{1,2,3,4,5,6,7,8,9} |
|8 |9 |7 |{1,2,3,4,5,6,7,8,9} |
|8 |9 |7 |{1,2,3,4,5,6,7,8,9} |
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
|9 | |8 |{1,2,3,4,5,6,7,8,9,}|
My problem is that I have a lot of duplicate data. I'd like to get the same result as the UNION query but with the level and the path.
I tried something like
where
cte.child is not null
and stack.parent not in (cte.parent)
or
where
cte.child is not null
and not exists (select parent from cte where cte.parent = stack.parent)
but the first does not change anything and the second returns an error.
ERROR: recursive reference to query "cte" must not appear within a subquery
Any ideas? Thank you very much!
Your problem is inappropriate table data: your table records twice that 8 is a direct child of 7, for instance. I suggest you remove the duplicate rows and add a unique constraint on the pairs (a sketch of that cleanup follows at the end of this answer).
If you cannot do so for some reason, make the rows distinct in your query:
with recursive
    good_stack as (select distinct * from stack),
    cte as (
        select
            parent,
            child,
            0 as level,
            array[parent, child] as path
        from good_stack
        where good_stack.parent = 1
      union all
        select
            good_stack.parent,
            good_stack.child,
            cte.level + 1,
            cte.path || good_stack.child
        from cte
        left join good_stack on cte.child = good_stack.parent
        where cte.child is not null and good_stack.child is not null
    )
select * from cte;
Demo: https://dbfiddle.uk/?rdbms=postgres_13&fiddle=acb1d7a1a1d26c3fd9caf0e7dedc12b2
(You may also make the columns not nullable. The entries 9|null add no information. If the table were lacking these entries, 9 would still be without a child.)
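A hedged sketch of the recommended cleanup follows. It assumes Postgres 15+ for the NULLS NOT DISTINCT clause (on older versions a partial unique index would be needed to cover NULL children), and the constraint name is made up for illustration.
-- Remove duplicate (parent, child) pairs, keeping one physical row of each
delete from stack a
using stack b
where a.ctid < b.ctid
  and a.parent = b.parent
  and a.child is not distinct from b.child;

-- Keep the duplicates from coming back (NULLS NOT DISTINCT requires Postgres 15+)
alter table stack
  add constraint stack_parent_child_uniq unique nulls not distinct (parent, child);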

How to select rows based on exact count of array elements in a different column

Suppose I have a dataframe like this, where B_C is the concatenation of columns B and C, and column selected_B_C is an array formed by picking a few B_C values from within the group.
+-----------+-----------+--------+--------+-----------------+--------+--------------------------------------+
|A |grp_count_A|B |C |B_C |D |selected_B_C |
+-----------+-----------+--------+--------+-----------------+--------+--------------------------------------+
|1 |6 |30261.41|20091201|30261.41_20091201|99945.83|[30261.41_20091201, 39879.85_20080601]|
|1 |6 |30261.41|20081201|30261.41_20081201|99945.83|[30261.41_20091201, 39879.85_20080601]|
|1 |6 |39879.85|20080601|39879.85_20080601|99945.83|[30261.41_20091201, 39879.85_20080601]|
|1 |6 |69804.42|20080117|69804.42_20080117|99945.83|[30261.41_20091201, 39879.85_20080601]|
|1 |6 |99950.3 |20090301|99950.3_20090301 |99945.83|[30261.41_20091201, 39879.85_20080601]|
|1 |6 |99999.23|20080118|99999.23_20080118|99945.83|[30261.41_20091201, 39879.85_20080601]|
|2 |4 |76498.0 |20150501|76498.0_20150501 |183600.0|[[76498.0_20150501, 76498.0_20150501]]|
|2 |4 |76498.0 |20150501|76498.0_20150501 |183600.0|[[76498.0_20150501, 76498.0_20150501]]|
|2 |4 |76498.0 |20150501|76498.0_20150501 |183600.0|[[76498.0_20150501, 76498.0_20150501]]|
|2 |4 |351378.0|20180620|351378.0_20180620|183600.0|[[76498.0_20150501, 76498.0_20150501]]|
+-----------+-----------+--------+--------+-----------------+--------+--------------------------------------+
I want to append a column selected that takes the value 1 if, for a row, col B_C is found in col selected_B_C, and 0 otherwise, so the final dataframe looks like this.
+-----------+-----------+--------+--------+-----------------+--------+--------------------------------------+--------+
|A |grp_count_A|B |C |B_C |D |selected_B_C |selected|
+-----------+-----------+--------+--------+-----------------+--------+--------------------------------------+--------+
|1 |6 |30261.41|20081201|30261.41_20081201|99945.83|[30261.41_20091201, 39879.85_20080601]|0 |
|1 |6 |30261.41|20091201|30261.41_20091201|99945.83|[30261.41_20091201, 39879.85_20080601]|1 |
|1 |6 |39879.85|20080601|39879.85_20080601|99945.83|[30261.41_20091201, 39879.85_20080601]|1 |
|1 |6 |69804.42|20080117|69804.42_20080117|99945.83|[30261.41_20091201, 39879.85_20080601]|0 |
|1 |6 |99950.3 |20090301|99950.3_20090301 |99945.83|[30261.41_20091201, 39879.85_20080601]|0 |
|1 |6 |99999.23|20080118|99999.23_20080118|99945.83|[30261.41_20091201, 39879.85_20080601]|0 |
|2 |4 |76498.0 |20150501|76498.0_20150501 |183600.0|[[76498.0_20150501, 76498.0_20150501]]|1 |
|2 |4 |76498.0 |20150501|76498.0_20150501 |183600.0|[[76498.0_20150501, 76498.0_20150501]]|1 |
|2 |4 |76498.0 |20150501|76498.0_20150501 |183600.0|[[76498.0_20150501, 76498.0_20150501]]|0 |
|2 |4 |351378.0|20180620|351378.0_20180620|183600.0|[[76498.0_20150501, 76498.0_20150501]]|0 |
+-----------+-----------+--------+--------+-----------------+--------+--------------------------------------+--------+
The tricky part for col selected is that I only want as many rows flagged with 1 as there are occurrences of that value in selected_B_C.
For example, in group 2, even though there are 3 records with the value 76498.0_20150501 for col B_C, I want only two records from group 2 with that value to have selected = 1, because selected_B_C for group 2 contains exactly 2 elements equal to 76498.0_20150501.
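A minimal sketch of one way to express this in Spark SQL: it assumes the dataframe is registered as a temporary view named df (a hypothetical name) and that selected_B_C is a flat array of strings (the doubled brackets in the group-2 rows are read here as a display artifact). Which of the tied rows receive the 1 is arbitrary, which matches the requirement that only the exact count be flagged.
-- Flag a row only while its rank among identical B_C values in the group
-- is still within the number of matching elements in selected_B_C
select *,
       case when row_number() over (partition by A, B_C order by C)
                 <= size(filter(selected_B_C, x -> x = B_C))
            then 1 else 0
       end as selected
from df;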

Oracle: Recursive self-referential join with nth-level record

I have a self-referential table like this:
id |level | parent_id
----------------------
1 |1 |null
2 |1 |null
3 |2 |1
4 |2 |1
5 |2 |2
6 |3 |5
7 |3 |3
8 |4 |7
9 |4 |6
------------------------
I need the nth-level parent in the result, for example the 2nd-level parent:
id |level | parent_id| second_level_parent_id
------------------------------------------------
1 |1 |null |null
2 |1 |null |null
3 |2 |1 |null
4 |2 |1 |null
5 |2 |2 |null
6 |3 |5 |5
7 |3 |3 |3
8 |4 |7 |3
9 |4 |6 |5
-------------------------------------------------
This works for me:
SELECT m.*,
       CONNECT_BY_ROOT id AS second_level_parent_id
FROM my_table m
WHERE CONNECT_BY_ROOT level = 2
CONNECT BY PRIOR id = parent_id;
Thanks @Jozef Dúc.

Pivot rows into columns Firebird 2.5

The sequence:
table1
|id|Description|
|--|-----------|
|1 |Proj-x |
|2 |Settlers |
|3 |Bank |
|4 |Newiest |

table2
|id|table1Id|value|alternate-value|
|--|--------|-----|---------------|
|1 |1 |12 |null |
|1 |4 |6 |null |
|1 |null |22 |Desktop |
|2 |2 |7 |null |
|2 |3 |11 |null |
|2 |null |2 |Camby Jere |
|3 |1 |8 |null |
|3 |4 |6 |null |
|3 |null |7 |Camby Jere |
The select instruction must return
|table1.id|Proj-x|Settlers|Bank|Newiest|Desktop|Camby Jere|
|---------|------|--------|----|-------|-------|----------|
|1 |12 |null |null |null |null |null |
|1 |null |null |6 |null |null |null |
|1 |null |null |null |null |22 |null |
|2 |null |7 |null |null |null |null |
|2 |null |null |11 |null |null |null |
|2 |null |null |null |null |null |2 |
|3 |8 |null |null |null |null |null |
|3 |null |null |null |6 |null |null |
|3 |null |null |null |null |null |7 |
The pivot columns are the Description values from table1 when the row's table1Id matches an id in table1, or the "alternate-value" when table1Id is null.
Is it possible? Or do I need to construct the query dynamically?
Well, yes, it is possible (if done in two steps), but it is a bit complex so I'm not certain whether you should do it. First, you could execute the following select:
with tmp1(MyFieldName) as
  (select distinct coalesce(t2.alternate_value, t1.Description)
   from table2 t2
   left join table1 t1 on t2.Table1ID = t1.id),
tmp2(MyPivotSource) as
  (select 'iif(coalesce(t2.alternate_value, t1.Description) = '''||MyFieldName||''', t2.MyValue, 0) as "'||MyFieldName||'"'
   from tmp1)
select 'select t2.id as "table1.id", '||list(MyPivotSource)||' from table2 t2
left join table1 t1 on t2.Table1ID = t1.id'
from rdb$database
cross join tmp2
And then you would have to run the result. Note that I used MyValue rather than Value and that the columns may not appear in the order you desire (although that could also be possible).
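For illustration only, the statement generated by the first step would look roughly like this (hand-expanded from the generator above; note that it yields 0 rather than NULL for non-matching cells, and the column order depends on the distinct values found):
select t2.id as "table1.id",
       iif(coalesce(t2.alternate_value, t1.Description) = 'Proj-x', t2.MyValue, 0) as "Proj-x",
       iif(coalesce(t2.alternate_value, t1.Description) = 'Settlers', t2.MyValue, 0) as "Settlers",
       iif(coalesce(t2.alternate_value, t1.Description) = 'Bank', t2.MyValue, 0) as "Bank",
       iif(coalesce(t2.alternate_value, t1.Description) = 'Newiest', t2.MyValue, 0) as "Newiest",
       iif(coalesce(t2.alternate_value, t1.Description) = 'Desktop', t2.MyValue, 0) as "Desktop",
       iif(coalesce(t2.alternate_value, t1.Description) = 'Camby Jere', t2.MyValue, 0) as "Camby Jere"
from table2 t2
left join table1 t1 on t2.Table1ID = t1.id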
Pivot tables are not something that converts easily to SQL in Firebird, and I generally prefer to create pivot tables in Excel rather than in Firebird, but as you can see it is possible.