Shift pyspark column value to left by one - dataframe

I have a pyspark dataframe that looks like this:
+----+----+------+------+
|name| age|height|weight|
+----+----+------+------+
|    |Mike|    20|   6-7|
+----+----+------+------+
As you can see the values and the column names are not aligned. For example, "Mike" should be under the column of "name", instead of age.
How can I shift the values to left by one so it can match the column name?
The ideal dataframe looks like:
+----+---+------+------+
|name|age|height|weight|
+----+---+------+------+
|Mike| 20|   6-7|   160|
+----+---+------+------+
Please note that the above data is just an example. In reality I have more than 200 columns and more than 1M rows of data.

Try .toDF() with the new column names after dropping the name column from the dataframe.
Example:
df=spark.createDataFrame([('','Mike',20,'6-7',160)],['name','age','height','weight'])
df.show()
#+----+----+------+------+---+
#|name| age|height|weight| _5|
#+----+----+------+------+---+
#| |Mike| 20| 6-7|160|
#+----+----+------+------+---+
#select all columns except name
df1 = df.select(*[i for i in df.columns if i != 'name'])
#the last column ('_5') is auto-generated; the remaining names are the ones to reapply
drop_col = df.columns.pop()
req_cols = [i for i in df.columns if i != drop_col]
df1.toDF(*req_cols).show()
#+----+---+------+------+
#|name|age|height|weight|
#+----+---+------+------+
#|Mike| 20| 6-7| 160|
#+----+---+------+------+
Using spark.createDataFrame():
cols=['name','age','height','weight']
spark.createDataFrame(df.select(*[i for i in df.columns if i != 'name']).rdd,cols).show()
#+----+---+------+------+
#|name|age|height|weight|
#+----+---+------+------+
#|Mike| 20| 6-7| 160|
#+----+---+------+------+
If you are creating the dataframe while reading a file, define a schema whose first column is a dummy column; once the data is read, drop that column with the .drop() function.
spark.read.schema(<struct_type schema>).csv(<path>).drop('<dummy_column_name>')
spark.read.option("header","true").csv(<path>).toDF(<columns_list_with dummy_column>).drop('<dummy_column_name>')
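For example, a rough sketch of the schema-based approach (the file path, column types, and dummy column name here are placeholders for illustration, not from the original answer):
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

#dummy first column absorbs the leading empty field; the real columns follow
schema = StructType([
    StructField("dummy", StringType()),
    StructField("name", StringType()),
    StructField("age", IntegerType()),
    StructField("height", StringType()),
    StructField("weight", IntegerType()),
])
df = spark.read.schema(schema).csv("/path/to/file.csv").drop("dummy")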

Related

How to find the uncommon rows between two Pyspark DataFrames? [duplicate]

I have to compare two dataframes to find the column differences based on one or more key fields, using PySpark, in the most performance-efficient way possible, since I have to deal with huge dataframes.
I have already built a solution for comparing two dataframes using a hash match without key-field matching, like data_compare.df_subtract(self.df_db1_hash, self.df_db2_hash),
but the scenario is different if I want to use a key-field match.
Note: I have provided sample expected dataframe. Actual requirement is any differences from DataFrame 2 in any columns should be retrieved in output/expected dataframe.
DataFrame 1:
+------+---------+--------+----------+-------+--------+
|emp_id| emp_city|emp_name| emp_phone|emp_sal|emp_site|
+------+---------+--------+----------+-------+--------+
| 3| Chennai| rahman|9848022330| 45000|SanRamon|
| 1|Hyderabad| ram|9848022338| 50000| SF|
| 2|Hyderabad| robin|9848022339| 40000| LA|
| 4| sanjose| romin|9848022331| 45123|SanRamon|
+------+---------+--------+----------+-------+--------+
DataFrame 2:
+------+---------+--------+----------+-------+--------+
|emp_id| emp_city|emp_name| emp_phone|emp_sal|emp_site|
+------+---------+--------+----------+-------+--------+
| 3| Chennai| rahman|9848022330| 45000|SanRamon|
| 1|Hyderabad| ram|9848022338| 50000| SF|
| 2|Hyderabad| robin|9848022339| 40000| LA|
| 4| sandiego| romino|9848022331| 45123|SanRamon|
+------+---------+--------+----------+-------+--------+
Expected dataframe after comparing dataframe 1 and 2
+------+---------+--------+----------+
|emp_id| emp_city|emp_name| emp_phone|
+------+---------+--------+----------+
| 4| sandiego| romino|9848022331|
+------+---------+--------+----------+
The subtract function is what you are looking for; it checks all the column values for each row and gives you a dataframe containing the rows that differ from the other dataframe.
df2.subtract(df1).select("emp_id","emp_city","emp_name","emp_phone")
As the API documentation says:
Return a new :class:DataFrame containing rows in this frame but not in another frame.
This is equivalent to EXCEPT in SQL.
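For completeness, a small self-contained sketch reproducing the example above (the numeric column types are an assumption for illustration):
cols = ["emp_id", "emp_city", "emp_name", "emp_phone", "emp_sal", "emp_site"]
df1 = spark.createDataFrame([
    (3, "Chennai", "rahman", "9848022330", 45000, "SanRamon"),
    (1, "Hyderabad", "ram", "9848022338", 50000, "SF"),
    (2, "Hyderabad", "robin", "9848022339", 40000, "LA"),
    (4, "sanjose", "romin", "9848022331", 45123, "SanRamon")], cols)
df2 = spark.createDataFrame([
    (3, "Chennai", "rahman", "9848022330", 45000, "SanRamon"),
    (1, "Hyderabad", "ram", "9848022338", 50000, "SF"),
    (2, "Hyderabad", "robin", "9848022339", 40000, "LA"),
    (4, "sandiego", "romino", "9848022331", 45123, "SanRamon")], cols)
#rows present in df2 but not in df1
df2.subtract(df1).select("emp_id", "emp_city", "emp_name", "emp_phone").show()
#+------+--------+--------+----------+
#|emp_id|emp_city|emp_name| emp_phone|
#+------+--------+--------+----------+
#|     4|sandiego|  romino|9848022331|
#+------+--------+--------+----------+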

Extract key value from dataframe in PySpark

I have the below dataframe which I have read from a JSON file.
+-----------------------------+-------------------------+--------------------------+----------------------------+
|                            1|                        2|                         3|                           4|
+-----------------------------+-------------------------+--------------------------+----------------------------+
|{"todo":["wakeup", "shower"]}|{"todo":["brush", "eat"]}|{"todo":["read", "write"]}|{"todo":["sleep", "snooze"]}|
+-----------------------------+-------------------------+--------------------------+----------------------------+
I need my output to be as below, with a key (ID) and value (todo) column. How do I do this? Do I need to create a schema?
+---+--------------+
| ID|          todo|
+---+--------------+
|  1|wakeup, shower|
|  2|    brush, eat|
|  3|   read, write|
|  4| sleep, snooze|
+---+--------------+
The key-value which you refer to is a struct. "keys" are struct field names, while "values" are field values.
What you want to do is called unpivoting. One of the ways to do it in PySpark is using stack. The following is a dynamic approach, where you don't need to hard-code the existing column names.
Input dataframe:
df = spark.createDataFrame(
    [((['wakeup', 'shower'],), (['brush', 'eat'],), (['read', 'write'],), (['sleep', 'snooze'],))],
    '`1` struct<todo:array<string>>, `2` struct<todo:array<string>>, `3` struct<todo:array<string>>, `4` struct<todo:array<string>>')
Script:
to_melt = [f"\'{c}\', `{c}`.todo" for c in df.columns]
df = df.selectExpr(f"stack({len(to_melt)}, {','.join(to_melt)}) (ID, todo)")
df.show()
# +---+----------------+
# | ID| todo|
# +---+----------------+
# | 1|[wakeup, shower]|
# | 2| [brush, eat]|
# | 3| [read, write]|
# | 4| [sleep, snooze]|
# +---+----------------+
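If todo is needed as a comma-separated string rather than an array (as in the expected output), one possible follow-up, not part of the original answer, is concat_ws:
from pyspark.sql import functions as F
df = df.withColumn('todo', F.concat_ws(', ', 'todo'))
df.show()
# +---+--------------+
# | ID|          todo|
# +---+--------------+
# |  1|wakeup, shower|
# |  2|    brush, eat|
# |  3|   read, write|
# |  4| sleep, snooze|
# +---+--------------+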
Use from_json to convert each string into a map, then explode to expand each element into its own row.
data
df = spark.createDataFrame(
    [('{"todo":"[wakeup, shower]"}', '{"todo":"[brush, eat]"}', '{"todo":"[read, write]"}', '{"todo":"[sleep, snooze]"}')],
    ('value1', 'values2', 'value3', 'value4'))
code
from pyspark.sql.functions import array, explode, flatten, from_json, map_values, translate

new = (df.withColumn('todo', explode(flatten(array(*[map_values(from_json(x, "MAP<STRING,STRING>")) for x in df.columns]))))  #from string to array to individual rows
         .withColumn('todo', translate('todo', "[]", ''))  #remove the square brackets
      )
new.show(truncate=False)
outcome
+---------------------------+-----------------------+------------------------+--------------------------+--------------+
|value1 |values2 |value3 |value4 |todo |
+---------------------------+-----------------------+------------------------+--------------------------+--------------+
|{"todo":"[wakeup, shower]"}|{"todo":"[brush, eat]"}|{"todo":"[read, write]"}|{"todo":"[sleep, snooze]"}|wakeup, shower|
|{"todo":"[wakeup, shower]"}|{"todo":"[brush, eat]"}|{"todo":"[read, write]"}|{"todo":"[sleep, snooze]"}|brush, eat |
|{"todo":"[wakeup, shower]"}|{"todo":"[brush, eat]"}|{"todo":"[read, write]"}|{"todo":"[sleep, snooze]"}|read, write |
|{"todo":"[wakeup, shower]"}|{"todo":"[brush, eat]"}|{"todo":"[read, write]"}|{"todo":"[sleep, snooze]"}|sleep, snooze |
+---------------------------+-----------------------+------------------------+--------------------------+--------------+

How to add multiple column dynamically based on filter condition

I am trying to create multiple columns dynamically, based on a filter condition, after comparing two data frames with the code below.
source_df
+---+-----+-----+------+
|key|val11|val12|  date|
+---+-----+-----+------+
|abc|  1.1| john|2-3-21|
|def|  3.0| dani|2-2-21|
+---+-----+-----+------+
dest_df
+---+-----+-----+------+
|key|val11|val12|  date|
+---+-----+-----+------+
|abc|  2.1| jack|2-3-21|
|def|  3.0| dani|2-2-21|
+---+-----+-----+------+
columns = source_df.columns[1:]
joined_df = source_df\
    .join(dest_df, 'key', 'full')
for column in columns:
    column_name = "difference_in_" + str(column)
    report = joined_df\
        .filter((source_df[column] != dest_df[column]))\
        .withColumn(column_name, F.concat(F.lit('[src:'), source_df[column], F.lit(',dst:'), dest_df[column], F.lit(']')))
The output I expect is
#Expected
+---+-------------------+-------------------+
|key|difference_in_val11|difference_in_val12|
+---+-------------------+-------------------+
|abc|  [src:1.1,dst:2.1]|[src:john,dst:jack]|
+---+-------------------+-------------------+
I get only the last column's result:
#Actual
+---+-------------------+
|key|difference_in_val12|
+---+-------------------+
|abc|[src:john,dst:jack]|
+---+-------------------+
How can I generate multiple columns dynamically based on a filter condition?
Dataframes are immutable objects. That being said, you need to create another dataframe using the one that got generated in the first iteration. Something like below:
from pyspark.sql import functions as F

columns = source_df.columns[1:]
joined_df = source_df\
    .join(dest_df, 'key', 'full')

for column in columns:
    if column != columns[-1]:
        column_name = "difference_in_" + str(column)
        report = joined_df\
            .filter((source_df[column] != dest_df[column]))\
            .withColumn(column_name, F.concat(F.lit('[src:'), source_df[column], F.lit(',dst:'), dest_df[column], F.lit(']')))
    else:
        column_name = "difference_in_" + str(column)
        report1 = report.filter((source_df[column] != dest_df[column]))\
            .withColumn(column_name, F.concat(F.lit('[src:'), source_df[column], F.lit(',dst:'), dest_df[column], F.lit(']')))

report1.show()
#report.show()
Output -
+---+-----+-----+-----+-----+-------------------+-------------------+
|key|val11|val12|val11|val12|difference_in_val11|difference_in_val12|
+---+-----+-----+-----+-----+-------------------+-------------------+
|abc| 1.1| john| 2.1| jack| [src:1.1,dst:2.1]|[src:john,dst:jack]|
+---+-----+-----+-----+-----+-------------------+-------------------+
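As a rough alternative sketch (not from the original answer), the same chaining idea can be written by reassigning one dataframe inside the loop, so every difference column ends up on a single result; the names below mirror the question's code:
from pyspark.sql import functions as F

columns = source_df.columns[1:]
report = source_df.join(dest_df, 'key', 'full')
diff_cols = []
for column in columns:
    column_name = "difference_in_" + str(column)
    diff_cols.append(column_name)
    #null when the values match, the "[src:...,dst:...]" string when they differ
    report = report.withColumn(column_name,
        F.when(source_df[column] != dest_df[column],
               F.concat(F.lit('[src:'), source_df[column], F.lit(',dst:'), dest_df[column], F.lit(']'))))
#keep only the keys where at least one column differs
report.dropna(subset=diff_cols, how='all').select('key', *diff_cols).show()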
You could also do this with a union of both dataframes and then collect_list only when the collect_set size is greater than 1; this avoids joining the dataframes:
from pyspark.sql import functions as F

cols = source_df.drop("key").columns
output = (source_df.withColumn("ref", F.lit("src:"))
          .unionByName(dest_df.withColumn("ref", F.lit("dst:")))
          .groupBy("key")
          .agg(*[F.when(F.size(F.collect_set(i)) > 1, F.collect_list(F.concat("ref", i))).alias(i)
                 for i in cols])
          .dropna(subset=cols, how='all')
          )
output.show()
+---+------------------+--------------------+
|key| val11| val12|
+---+------------------+--------------------+
|abc|[src:1.1, dst:2.1]|[src:john, dst:jack]|
+---+------------------+--------------------+

Pyspark Coalesce with first non null and most recent nonnull values

I have a dataframe with a column that has nulls for the first few and the last few rows. How do I coalesce this column using the first non-null value and the last non-null record?
For example, say I have the following dataframe:
What I'd want to produce is the following:
So as you can see, the first two rows get populated with 0.6 because that is the first non-null record. The last several rows become 3 because that was the last non-null record.
You can use last() for filling and Window for sorting:
import datetime
from pyspark.sql import Row, Window, functions as F

df = sql_context.createDataFrame([
    Row(Month=datetime.date(2021, 1, 1), Rating=None),
    Row(Month=datetime.date(2021, 2, 1), Rating=None),
    Row(Month=datetime.date(2021, 3, 1), Rating=0.6),
    Row(Month=datetime.date(2021, 4, 1), Rating=1.2),
    Row(Month=datetime.date(2021, 5, 1), Rating=1.),
    Row(Month=datetime.date(2021, 6, 1), Rating=None),
    Row(Month=datetime.date(2021, 7, 1), Rating=None),
])
(
    df
    # Forward fill: take the last non-null value ordered by Month
    .withColumn('Rating',
                F.when(F.isnull('Rating'),
                       F.last('Rating', ignorenulls=True).over(Window.orderBy('Month'))
                       ).otherwise(F.col('Rating')))
    # This second run is only required for the first rows in the DF (backward fill)
    .withColumn('Rating',
                F.when(F.isnull('Rating'),
                       F.last('Rating', ignorenulls=True).over(Window.orderBy(F.desc('Month')))
                       ).otherwise(F.col('Rating')))
    .sort('Month')  # Only required for output formatting
    .show()
)
# Output
+----------+------+
| Month|Rating|
+----------+------+
|2021-01-01| 0.6|
|2021-02-01| 0.6|
|2021-03-01| 0.6|
|2021-04-01| 1.2|
|2021-05-01| 1.0|
|2021-06-01| 1.0|
|2021-07-01| 1.0|
+----------+------+
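A possible single-pass variant (a sketch, not part of the original answer) combines a forward fill and a backward fill with coalesce:
from pyspark.sql import Window, functions as F

w_fwd = Window.orderBy('Month').rowsBetween(Window.unboundedPreceding, Window.currentRow)
w_bwd = Window.orderBy('Month').rowsBetween(Window.currentRow, Window.unboundedFollowing)

df.withColumn('Rating',
              F.coalesce(
                  F.last('Rating', ignorenulls=True).over(w_fwd),   # last known value so far (forward fill)
                  F.first('Rating', ignorenulls=True).over(w_bwd))  # next known value (covers the leading nulls)
).show()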

pyspark withColumn, how to vary column name

Is there any way to create/fill columns with pyspark 2.1.0 where the name of the column is the value of a different column?
I tried the following
def createNewColumnsFromValues(dataFrame, colName, targetColName):
    """
    Set value of column colName to targetColName's value
    """
    cols = dataFrame.columns
    #df = dataFrame.withColumn(f.col(colName), f.col(targetColName))
    df = dataFrame.withColumn('x', f.col(targetColName))
    return df
The commented-out line does not work; when calling the method I get the error
TypeError: 'Column' object is not callable
whereas a fixed name (as a string) is no problem. Any idea how to also make the name of the column come from another column, not just the value? I also tried a UDF definition as a workaround, with the same lack of success.
Thanks for the help!
Edit:
from pyspark.sql import functions as f
I figured out a solution which scales nicely for the few (not too many) distinct values I need columns for, which is necessarily the case here, or the number of columns would explode.
def createNewColumnsFromValues(dataFrame, colName, targetCol):
    distinctValues = dataFrame.select(colName).distinct().collect()
    for value in distinctValues:
        dataFrame = dataFrame.withColumn(str(value[0]), f.when(f.col(colName) == value[0], f.col(targetCol)).otherwise(f.lit(None)))
    return dataFrame
You might want to try the following code:
from pyspark.sql.functions import col, when

test_df = spark.createDataFrame([
    (1, "2", 5, 1), (3, "4", 7, 8),
], ("col1", "col2", "col3", "col4"))

def createNewColumnsFromValues(dataFrame, sourceCol, colName, targetCol):
    """
    For each value in sourceCol, create a column named after that value and fill it from targetCol
    """
    for value in sourceCol:
        dataFrame = dataFrame.withColumn(str(value[0]), when(col(colName) == value[0], targetCol).otherwise(None))
    return dataFrame

createNewColumnsFromValues(test_df, test_df.select("col4").collect(), "col4", test_df["col3"]).show()
The trick here is to do select("COLUMNNAME").collect() to get a list of the values in the column. Then sourceCol contains this list, which is a list of rows, where each row has a single element. So you can directly iterate through the list and access the element at position 0. In this case a cast to string was necessary to ensure the column name of the new column is a string. The target column provides the values for each of the individual new columns. So the result would look like:
+----+----+----+----+----+----+
|col1|col2|col3|col4| 1| 8|
+----+----+----+----+----+----+
| 1| 2| 5| 1| 5|null|
| 3| 4| 7| 8|null| 7|
+----+----+----+----+----+----+
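For reference, a similar result can often be obtained with groupBy().pivot(); a rough sketch, assuming the same test_df, the question's functions import f, and that first() is an acceptable aggregate:
(test_df
 .groupBy("col1", "col2", "col3", "col4")  #keep the original columns as-is
 .pivot("col4")                            #one new column per distinct value of col4
 .agg(f.first("col3"))                     #fill it with the row's col3 value
 .show())
#+----+----+----+----+----+----+
#|col1|col2|col3|col4|   1|   8|
#+----+----+----+----+----+----+
#|   1|   2|   5|   1|   5|null|
#|   3|   4|   7|   8|null|   7|
#+----+----+----+----+----+----+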