Pyspark Dataframe: merge rows by eliminating null values

I have a PySpark DataFrame like this one:
+-----------+-------+----------+-------+-------+---------+
| ID_PRODUCT| VALUE | TIMESTAMP| SPEED | CODE | FIRMWARE|
+-----------+-------+----------+-------+-------+---------+
| 3| 1| null| 124,21| null| null|
| 5| 2| null| 124,23| null| null|
| 5| 2| null| 124,26| null| null|
| 6| 4| null| 124,24| null| null|
| 3| 1| null| null| 6764| null|
| 5| 2| null| null| 6772| null|
| 5| 2| null| null| 6782| null|
| 6| 4| null| null| 6932| null|
| 3| 1| null| null| null| 1|
| 5| 2| null| null| null| 1|
| 5| 2| null| null| null| 1|
| 6| 4| null| null| null| 1|
| 3| 1| 17:18:04| null| null| null|
| 5| 2| 18:22:40| null| null| null|
| 5| 2| 18:25:29| null| null| null|
| 6| 4| 18:32:18| null| null| null|
+-----------+-------+----------+-------+-------+---------+
and I want to merge the rows so that the nulls are eliminated; the result should look like this (for example):
+-----------+-------+----------+-------+-------+---------+
| ID_PRODUCT| VALUE | TIMESTAMP| SPEED | CODE | FIRMWARE|
+-----------+-------+----------+-------+-------+---------+
| 3| 1| 17:18:04| 124,21| 6764| 1|
| 5| 2| 18:22:40| 124,23| 6772| 1|
| 5| 2| 18:25:29| 124,26| 6782| 1|
| 6| 4| 18:32:18| 124,24| 6932| 1|
+-----------+-------+----------+-------+-------+---------+
I tried to use:
df = df.groupBy('id').agg(*[f.first(x, ignorenulls=True) for x in df.columns])
However, this gives me only the first value of each column, and I need all the records: for one ID I have several registered timestamps and registered values, which I am now losing.
Thanks for the advice.

I'm not sure if this is what you wanted, but essentially you can do a collect_list for each id and column, and explode all resulting lists. In this way, you can have multiple entries per id.
from functools import reduce
import pyspark.sql.functions as F
df2 = reduce(
    lambda x, y: x.withColumn(y, F.explode_outer(y)),
    df.columns[2:],
    df.groupBy('id_product', 'value').agg(
        *[F.collect_list(c).alias(c) for c in df.columns[2:]]
    )
).distinct()
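If the intent is instead to keep the collected values positionally aligned (first timestamp with first speed, and so on), one possible variation, sketched below, zips the collected arrays and explodes once. Note that collect_list gives no ordering guarantee, so the pairing is only deterministic with an explicit sort key:

import pyspark.sql.functions as F

value_cols = df.columns[2:]

grouped = df.groupBy('id_product', 'value').agg(
    *[F.collect_list(c).alias(c) for c in value_cols]
)

# arrays_zip pairs the k-th element of each collected array into one struct,
# so a single explode yields one row per position instead of a cross product
result = (
    grouped
    .withColumn('z', F.explode(F.arrays_zip(*value_cols)))
    .select('id_product', 'value', *[F.col('z')[c].alias(c) for c in value_cols])
)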

Related

sum of row values within a window range in spark dataframe

I have a dataframe as shown below, where the count column holds the number of values that have to be added to get a new column.
+---+----+----+------------+
| ID|date| A| count|
+---+----+----+------------+
| 1| 1| 10| null|
| 1| 2| 10| null|
| 1| 3|null| null|
| 1| 4| 20| 1|
| 1| 5|null| null|
| 1| 6|null| null|
| 1| 7| 60| 2|
| 1| 7| 60| null|
| 1| 8|null| null|
| 1| 9|null| null|
| 1| 10|null| null|
| 1| 11| 80| 3|
| 2| 1| 10| null|
| 2| 2| 10| null|
| 2| 3|null| null|
| 2| 4| 20| 1|
| 2| 5|null| null|
| 2| 6|null| null|
| 2| 7| 60| 2|
+---+----+----+------------+
The expected output is as shown below.
+---+----+----+-----+-------+
| ID|date| A|count|new_col|
+---+----+----+-----+-------+
| 1| 1| 10| null| null|
| 1| 2| 10| null| null|
| 1| 3| 10| null| null|
| 1| 4| 20| 2| 30|
| 1| 5| 10| null| null|
| 1| 6| 10| null| null|
| 1| 7| 60| 3| 80|
| 1| 7| 60| null| null|
| 1| 8|null| null| null|
| 1| 9|null| null| null|
| 1| 10| 10| null| null|
| 1| 11| 80| 2| 90|
| 2| 1| 10| null| null|
| 2| 2| 10| null| null|
| 2| 3|null| null| null|
| 2| 4| 20| 1| 20|
| 2| 5|null| null| null|
| 2| 6| 20| null| null|
| 2| 7| 60| 2| 80|
+---+----+----+-----+-------+
I tried with a window function as follows:
val w2 = Window.partitionBy("ID").orderBy("date")
val newDf = df.withColumn("new_col",
  when(col("A").isNotNull && col("count").isNotNull,
    sum(col("A")).over(Window.partitionBy("ID").orderBy("date")
      .rowsBetween(Window.currentRow - col("count"), Window.currentRow))))
But I am getting the error below:
error: overloaded method value - with alternatives:
(x: Long)Long <and>
(x: Int)Long <and>
(x: Char)Long <and>
(x: Short)Long <and>
(x: Byte)Long
cannot be applied to (org.apache.spark.sql.Column)
It seems the Column value provided as a window frame bound is causing the issue.
Any idea how to resolve this error to achieve the requirement, or any alternative solutions?
Any leads appreciated!
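For what it's worth, Window.currentRow is a plain Long and the bounds of rowsBetween must be constants, which is why subtracting a Column fails. One possible workaround, shown here as a PySpark sketch that mirrors the frame the snippet above aims for (the current row plus the count previous rows), is to collect a running history of A into an array and sum a slice of it; whether this matches the intended output should be verified against the real data:

import pyspark.sql.functions as F
from pyspark.sql import Window

# running history of A per ID, with nulls treated as 0 (as sum() would)
w_hist = (Window.partitionBy("ID").orderBy("date")
                .rowsBetween(Window.unboundedPreceding, Window.currentRow))

result = (
    df
    .withColumn("a_hist", F.collect_list(F.coalesce(F.col("A"), F.lit(0))).over(w_hist))
    .withColumn(
        "new_col",
        F.when(
            F.col("A").isNotNull() & F.col("count").isNotNull(),
            # sum the last (count + 1) entries: the current row and `count` previous rows;
            # behaviour when count + 1 exceeds the history length should be checked
            F.expr("aggregate(slice(a_hist, -(`count` + 1), `count` + 1), "
                   "cast(0 as bigint), (acc, x) -> acc + x)"),
        ),
    )
    .drop("a_hist")
)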

Pyspark sum of columns after union of dataframe

How can I sum all columns after unioning two dataframes?
I have this first df with one row per user:
df = sqlContext.createDataFrame([("2022-01-10", 3, 2,"a"),("2022-01-10",3,4,"b"),("2022-01-10", 1,3,"c")], ["date", "value1", "value2", "userid"])
df.show()
+----------+------+------+------+
| date|value1|value2|userid|
+----------+------+------+------+
|2022-01-10| 3| 2| a|
|2022-01-10| 3| 4| b|
|2022-01-10| 1| 3| c|
+----------+------+------+------+
The date value will always be today's date.
And I have another df, this time with multiple rows per userid, so one value for each day:
df2 = sqlContext.createDataFrame(
    [("2022-01-01", 13, 12, "a"), ("2022-01-02", 13, 14, "b"), ("2022-01-03", 11, 13, "c"),
     ("2022-01-04", 3, 2, "a"), ("2022-01-05", 3, 4, "b"), ("2022-01-06", 1, 3, "c"),
     ("2022-01-10", 31, 21, "a"), ("2022-01-07", 31, 41, "b"), ("2022-01-09", 11, 31, "c")],
    ["date", "value3", "value4", "userid"]
)
df2.show()
+----------+------+------+------+
| date|value3|value4|userid|
+----------+------+------+------+
|2022-01-01| 13| 12| a|
|2022-01-02| 13| 14| b|
|2022-01-03| 11| 13| c|
|2022-01-04| 3| 2| a|
|2022-01-05| 3| 4| b|
|2022-01-06| 1| 3| c|
|2022-01-10| 31| 21| a|
|2022-01-07| 31| 41| b|
|2022-01-09| 11| 31| c|
+----------+------+------+------+
After unioning the two of them with this function, here is what I get:
def union_different_tables(df1, df2):
    columns_df1 = df1.columns
    columns_df2 = df2.columns
    data_types_df1 = [i.dataType for i in df1.schema.fields]
    data_types_df2 = [i.dataType for i in df2.schema.fields]
    for col, _type in zip(columns_df1, data_types_df1):
        if col not in df2.columns:
            df2 = df2.withColumn(col, f.lit(None).cast(_type))
    for col, _type in zip(columns_df2, data_types_df2):
        if col not in df1.columns:
            df1 = df1.withColumn(col, f.lit(None).cast(_type))
    union = df1.unionByName(df2)
    return union
+----------+------+------+------+------+------+
| date|value1|value2|userid|value3|value4|
+----------+------+------+------+------+------+
|2022-01-10| 3| 2| a| null| null|
|2022-01-10| 3| 4| b| null| null|
|2022-01-10| 1| 3| c| null| null|
|2022-01-01| null| null| a| 13| 12|
|2022-01-02| null| null| b| 13| 14|
|2022-01-03| null| null| c| 11| 13|
|2022-01-04| null| null| a| 3| 2|
|2022-01-05| null| null| b| 3| 4|
|2022-01-06| null| null| c| 1| 3|
|2022-01-10| null| null| a| 31| 21|
|2022-01-07| null| null| b| 31| 41|
|2022-01-09| null| null| c| 11| 31|
+----------+------+------+------+------+------+
What I want is the sum of all the df2 columns (I have 10 of them in the real case) up to today's date for each userid, so one row per user:
+----------+------+------+------+------+------+
| date|value1|value2|userid|value3|value4|
+----------+------+------+------+------+------+
|2022-01-10| 3| 2| a| 47 | 35 |
|2022-01-10| 3| 4| b| 47 | 59 |
|2022-01-10| 1| 3| c| 23 | 47 |
+----------+------+------+------+------+------+
Since I have to do this operation for multiple tables, here is what I tried:
user_window = Window.partitionBy(['userid']).orderBy('date')
list_tables = [df2]
list_col_original = df.columns

for table in list_tables:
    df = union_different_tables(df, table)
    list_column = list(set(table.columns) - set(list_col_original))
    list_col_original.extend(list_column)
    df = df.select(
        'userid',
        *[f.sum(f.col(col_name)).over(user_window).alias(col_name) for col_name in list_column]
    )

df.show()
+------+------+------+
|userid|value4|value3|
+------+------+------+
| c| 13| 11|
| c| 16| 12|
| c| 47| 23|
| c| 47| 23|
| b| 14| 13|
| b| 18| 16|
| b| 59| 47|
| b| 59| 47|
| a| 12| 13|
| a| 14| 16|
| a| 35| 47|
| a| 35| 47|
+------+------+------+
But that gives me a sort of cumulative sum, and I didn't find a way to include all the columns in the resulting df.
The only constraint is that I can't do any join: my dataframes are very large and any join takes too long to compute.
Do you know how I can fix my code to get the result I want?
After the union of df1 and df2, you can group by userid and sum all columns except date, for which you take the max.
Note that for the union part you can simply use DataFrame.unionByName when only the set of columns differs; with allowMissingColumns=True the missing ones are filled with nulls:
df = df1.unionByName(df2, allowMissingColumns=True)
Then group by and agg:
import pyspark.sql.functions as F

result = df.groupBy("userid").agg(
    F.max("date").alias("date"),
    *[F.sum(c).alias(c) for c in df.columns if c not in ("date", "userid")]
)
result.show()
#+------+----------+------+------+------+------+
#|userid| date|value1|value2|value3|value4|
#+------+----------+------+------+------+------+
#| a|2022-01-10| 3| 2| 47| 35|
#| b|2022-01-10| 3| 4| 47| 59|
#| c|2022-01-10| 1| 3| 23| 47|
#+------+----------+------+------+------+------+
This assumes the second dataframe contains only dates prior to today's date in the first one. Otherwise, you'll need to filter df2 before the union, for example as sketched below.
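A minimal sketch of that pre-filter (assuming date is a yyyy-MM-dd string, as in the samples above):

import pyspark.sql.functions as F

# keep only rows of df2 up to today's date before the union
df2_filtered = df2.filter(F.to_date("date") <= F.current_date())
df = df1.unionByName(df2_filtered, allowMissingColumns=True)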

Apache Spark window: choose the previous last item based on some condition

I have input data with the columns id, pid, pname, ppid: id (think of it as time), pid (process id), pname (process name), and ppid (the parent process id that created pid).
+---+---+-----+----+
| id|pid|pname|ppid|
+---+---+-----+----+
| 1| 1| 5| -1|
| 2| 1| 7| -1|
| 3| 2| 9| 1|
| 4| 2| 11| 1|
| 5| 3| 5| 1|
| 6| 4| 7| 2|
| 7| 1| 9| 3|
+---+---+-----+----+
Now I need to find ppname (parent process name), which is the pname of the last previous row satisfying previous.pid == current.ppid.
Expected result for the previous example:
+---+---+-----+----+------+
| id|pid|pname|ppid|ppname|
+---+---+-----+----+------+
| 1| 1| 5| -1| -1|
| 2| 1| 7| -1| -1| no item found above with pid=-1
| 3| 2| 9| 1| 7| last pid = 1(ppid) above, pname=7
| 4| 2| 11| 1| 7|
| 5| 3| 5| 1| 7|
| 6| 4| 7| 2| 11| last pid = 2(ppid) above, pname=11
| 7| 1| 9| 3| 5| last pid = 3(ppid) above, pname=5
+---+---+-----+----+------+
I could self-join on pid == ppid, take the difference between ids, pick the row with the minimum positive difference, and then join back again for the cases where no positive diff was found (the -1 case).
But that is almost a cross join, which I can't afford since I have 100M rows.
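One join-free idea, sketched below in PySpark as an illustration only (assuming integer columns as in the sample and -1 as the "not found" marker): duplicate each row into a "query" keyed by its ppid and a "state" row that publishes its pname under its own pid, then take, for each query, the last published pname seen strictly before it with a single window.

import pyspark.sql.functions as F
from pyspark.sql import Window

# "query" rows ask for their parent's name, keyed by their ppid
query = df.select(
    "id", "pid", "pname", "ppid",
    F.col("ppid").alias("key"),
    F.lit(None).cast("long").alias("state_pname"),
    F.lit(0).alias("is_state"),
)

# "state" rows publish their own pname under their pid
state = df.select(
    "id",
    F.lit(None).cast("long").alias("pid"),
    F.lit(None).cast("long").alias("pname"),
    F.lit(None).cast("long").alias("ppid"),
    F.col("pid").alias("key"),
    F.col("pname").alias("state_pname"),
    F.lit(1).alias("is_state"),
)

# ordering by (id, is_state) places a row's own state after its query,
# so only strictly earlier rows can supply the parent name
w = (Window.partitionBy("key")
           .orderBy("id", "is_state")
           .rowsBetween(Window.unboundedPreceding, Window.currentRow))

result = (
    query.unionByName(state)
    .withColumn("ppname", F.last("state_pname", ignorenulls=True).over(w))
    .filter(F.col("is_state") == 0)
    .select("id", "pid", "pname", "ppid",
            F.coalesce(F.col("ppname"), F.lit(-1)).alias("ppname"))
    .orderBy("id")
)

This doubles the row count instead of joining, which may or may not be cheaper at 100M rows.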

Counting number of nulls in pyspark dataframe by row

So I want to count the number of nulls in a dataframe by row.
Please note there are 50+ columns; I know I could do a case/when statement for this, but I would prefer a neater solution.
For example, a subset:
columns = ['id', 'item1', 'item2', 'item3']
vals = [(1, 2, 0, None),(2, None, 1, None),(3,None,9, 1)]
df=spark.createDataFrame(vals,columns)
df.show()
+---+-----+-----+-----+
| id|item1|item2|item3|
+---+-----+-----+-----+
| 1| 2| 'A'| null|
| 2| null| 1| null|
| 3| null| 9| 'C'|
+---+-----+-----+-----+
After running the code, the desired output is:
+---+-----+-----+-----+--------+
| id|item1|item2|item3|numNulls|
+---+-----+-----+-----+--------+
| 1| 2| 'A'| null| 1|
| 2| null| 1| null| 2|
| 3| null| 9| 'C'| 1|
+---+-----+-----+-----+--------+
EDIT: Not all non null values are ints.
Convert null to 1 and others to 0 and then sum all the columns:
df.withColumn('numNulls', sum(df[col].isNull().cast('int') for col in df.columns)).show()
+---+-----+-----+-----+--------+
| id|item1|item2|item3|numNulls|
+---+-----+-----+-----+--------+
| 1| 2| 0| null| 1|
| 2| null| 1| null| 2|
| 3| null| 9| 1| 1|
+---+-----+-----+-----+--------+
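If some columns, such as id, should not participate in the count, the same pattern can be restricted to a subset of columns (a small variation; it does not change the output above since id is never null):

item_cols = [c for c in df.columns if c != 'id']
df.withColumn('numNulls', sum(df[c].isNull().cast('int') for c in item_cols)).show()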

Pyspark: add a new column containing the value from one column that corresponds to a value in another column meeting a specified condition

I want to add a new column containing, for each group, the col2 value from the row whose col3 meets a specified condition.
For instance, the original DF is as follows:
+-----+-----+-----+
|col1 |col2 |col3 |
+-----+-----+-----+
| A| 17| 1|
| A| 16| 2|
| A| 18| 2|
| A| 30| 3|
| B| 35| 1|
| B| 34| 2|
| B| 36| 2|
| C| 20| 1|
| C| 30| 1|
| C| 43| 1|
+-----+-----+-----+
I need to repeat, within each col1 group, the col2 value that corresponds to 1 in col3; and if a group has more than one row with col3 = 1, repeat the minimum such col2 value.
The desired DF is as follows:
+----+----+----+----------+
|col1|col2|col3|new_column|
+----+----+----+----------+
| A| 17| 1| 17|
| A| 16| 2| 17|
| A| 18| 2| 17|
| A| 30| 3| 17|
| B| 35| 1| 35|
| B| 34| 2| 35|
| B| 36| 2| 35|
| C| 20| 1| 20|
| C| 30| 1| 20|
| C| 43| 1| 20|
+----+----+----+----------+
df3 = df.filter(df.col3 == 1)
+----+----+----+
|col1|col2|col3|
+----+----+----+
| B| 35| 1|
| C| 20| 1|
| C| 30| 1|
| C| 43| 1|
| A| 17| 1|
+----+----+----+
df3.createOrReplaceTempView("mytable")
To obtain the minimum value of col2 I followed the accepted answer in this link: How to find exact median for grouped data in Spark
df6=spark.sql("select col1, min(col2) as minimum from mytable group by col1 order by col1")
df6.show()
+----+-------+
|col1|minimum|
+----+-------+
| A| 17|
| B| 35|
| C| 20|
+----+-------+
df_a = df.join(df6, ['col1'], 'leftouter')
+----+----+----+-------+
|col1|col2|col3|minimum|
+----+----+----+-------+
| B| 35| 1| 35|
| B| 34| 2| 35|
| B| 36| 2| 35|
| C| 20| 1| 20|
| C| 30| 1| 20|
| C| 43| 1| 20|
| A| 17| 1| 17|
| A| 16| 2| 17|
| A| 18| 2| 17|
| A| 30| 3| 17|
+----+----+----+-------+
Is there a better way than this solution?
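A window-based alternative that avoids both the temp view and the join, sketched here under the assumption (true for the sample) that every col1 group has at least one row with col3 = 1:

import pyspark.sql.functions as F
from pyspark.sql import Window

w = Window.partitionBy('col1')

df_a = df.withColumn(
    'new_column',
    F.min(F.when(F.col('col3') == 1, F.col('col2'))).over(w)
)

The when() leaves non-matching rows as null, which min() ignores, so each group gets the minimum col2 among its col3 = 1 rows.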