PySpark - how to drop records by primary keys?

I want to delete the records from an old_df if a new_df has a del flag for the key metric_id. What is the right way to achieve this?
old_df (flag here is filled with nulls on purpose)
+---------+----+-------------+
|metric_id|flag|        value|
+---------+----+-------------+
|       10|null|       value2|
|       10|null|       value9|
|       12|null|updated_value|
|       15|null|  test_value2|
+---------+----+-------------+
new_df
+---------+----+-------------+
|metric_id|flag|        value|
+---------+----+-------------+
|       10| del|       value2|
|       12|pass|updated_value|
|       15| del|  test_value2|
+---------+----+-------------+
result_df
+---------+----+-------------+
|metric_id|flag|        value|
+---------+----+-------------+
|       12|pass|updated_value|
+---------+----+-------------+

One easy way to do this is to join then filter:
from pyspark.sql.functions import lit

result_df = (
    old_df.join(new_df, on='metric_id', how='left')
    .where(new_df['flag'].isNull() | (new_df['flag'] != lit('del')))
    .select('metric_id', new_df['flag'], new_df['value'])
)
Which produces
+---------+----+-------------+
|metric_id|flag|        value|
+---------+----+-------------+
|       12|pass|updated_value|
+---------+----+-------------+
I'm using a left join because there might be records in old_df for which the primary key is not present in new_df (and you don't want to delete those).
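If you only need the surviving rows from old_df (rather than the refreshed flag/value columns from new_df), a left anti-join against the 'del' keys is a common alternative. A minimal sketch, assuming the same old_df/new_df as above:
from pyspark.sql import functions as F

# keep only old_df rows whose metric_id is NOT flagged 'del' in new_df;
# note this keeps old_df's own columns (flag stays null here), unlike the
# join-then-filter above, which pulls flag/value from new_df
del_keys = new_df.where(F.col('flag') == 'del').select('metric_id')
kept_df = old_df.join(del_keys, on='metric_id', how='left_anti')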

Related

PySpark: How to concatenate two distinct dataframes?

I have multiple dataframes that I need to concatenate together column-wise (side by side). In pandas, we would typically write: pd.concat([df1, df2], axis=1).
This thread: How to concatenate/append multiple Spark dataframes column wise in Pyspark? appears close, but its answer:
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

df1_schema = StructType([StructField("id", IntegerType()), StructField("name", StringType())])
df1 = spark.sparkContext.parallelize([(1, "sammy"), (2, "jill"), (3, "john")])
df1 = spark.createDataFrame(df1, schema=df1_schema)

df2_schema = StructType([StructField("secNo", IntegerType()), StructField("city", StringType())])
df2 = spark.sparkContext.parallelize([(101, "LA"), (102, "CA"), (103, "DC")])
df2 = spark.createDataFrame(df2, schema=df2_schema)

schema = StructType(df1.schema.fields + df2.schema.fields)
df1df2 = df1.rdd.zip(df2.rdd).map(lambda x: x[0] + x[1])
spark.createDataFrame(df1df2, schema).show()
Yields the following error when done on my data at scale: Can only zip RDDs with same number of elements in each partition
How can I join 2 or more data frames that are identical in row length but are otherwise independent of content (they share a similar repeating structure/order but contain no shared data)?
Example expected data looks like:
+---+-----+   +-----+----+    +---+-----+-----+----+
| id| name|   |secNo|city|    | id| name|secNo|city|
+---+-----+   +-----+----+    +---+-----+-----+----+
|  1|sammy| + |  101|  LA| => |  1|sammy|  101|  LA|
|  2| jill|   |  102|  CA|    |  2| jill|  102|  CA|
|  3| john|   |  103|  DC|    |  3| john|  103|  DC|
+---+-----+   +-----+----+    +---+-----+-----+----+
You can create unique IDs with
from pyspark.sql.functions import expr

df1 = df1.withColumn("unique_id", expr("row_number() over (order by (select null))"))
df2 = df2.withColumn("unique_id", expr("row_number() over (order by (select null))"))
Then you can left join them:
df1.join(df2, on="unique_id", how="left").drop("unique_id")
Final output looks like
+---+-----+-----+----+
| id| name|secNo|city|
+---+-----+-----+----+
|  1|sammy|  101|  LA|
|  2| jill|  102|  CA|
|  3| john|  103|  DC|
+---+-----+-----+----+
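Another option (a sketch, not from the thread, so treat the helper name with_row_index as made up for illustration): index each dataframe with RDD.zipWithIndex, which, unlike RDD.zip, does not require equal partition sizes, and then join on that index.
from pyspark.sql.types import StructType, StructField, LongType

# hypothetical helper: append a positional index column using zipWithIndex
def with_row_index(df, col_name="row_index"):
    schema = StructType(df.schema.fields + [StructField(col_name, LongType(), False)])
    indexed = df.rdd.zipWithIndex().map(lambda pair: (*pair[0], pair[1]))
    return spark.createDataFrame(indexed, schema)  # assumes an active SparkSession named `spark`

df1df2 = (
    with_row_index(df1)
    .join(with_row_index(df2), on="row_index", how="inner")
    .drop("row_index")
)
df1df2.show()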

Compare columns from two different dataframes based on id

I have two dataframes to compare; the order of records is different and the column names might be different. I have to compare several columns based on the unique key (id).
Example: consider dataframes df1 and df2
df1:
+---+-------+-----+
| id|student|marks|
+---+-------+-----+
|  1|  Vijay|   23|
|  4| Vithal|   24|
|  2|    Ram|   21|
|  3|  Rahul|   25|
+---+-------+-----+
df2:
+-----+--------+------+
|newId|student1|marks1|
+-----+--------+------+
|    3|   Rahul|    25|
|    2|     Ram|    23|
|    1|   Vijay|    23|
|    4|  Vithal|    24|
+-----+--------+------+
Here, based on id and newId, I need to compare the student name and marks, and check whether the student with the same id has the same name and marks.
In this example, the student with id 2 has 21 marks in df1 but 23 marks in df2.
df1.exceptAll(df2).show()
// +---+-------+-----+
// | id|student|marks|
// +---+-------+-----+
// |  2|    Ram|   21|
// +---+-------+-----+
I think diff will give the result you are looking for.
scala> df1.diff(df2)
res0: Seq[org.apache.spark.sql.Row] = List([2,Ram,21])
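If you specifically need a key-based comparison rather than a set difference, a hedged PySpark sketch using the column names from the example would join on the key and keep only the mismatching rows:
# join on the key and keep only rows whose student name or marks differ
mismatches = (
    df1.join(df2, df1["id"] == df2["newId"], "inner")
    .where((df1["student"] != df2["student1"]) | (df1["marks"] != df2["marks1"]))
    .select(df1["id"], df1["student"], df1["marks"], df2["student1"], df2["marks1"])
)
mismatches.show()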

Joining tables and finding difference

I have a table which contains the following schema:
Table1
+----------+----------+----+------------+--------+---------------+-----+-------------+
|student_id|project_id|name|project_name|approved|evaluation_type|grade|cohort_number|
+----------+----------+----+------------+--------+---------------+-----+-------------+
I have another table with the following:
Table2
+-------------+----------+
|cohort_number|project_id|
+-------------+----------+
My problem is: for each student_id, I want to get the projects that he has not completed (no rows in Table1). The way I know all the projects he should have done is by checking the cohort_number. Basically I need the "difference" between the two tables: I want to fill Table1 with the missing entries by comparing against Table2's project_id for that cohort_number.
I am not sure if I was clear.
I tried using LEFT JOIN, but I only get records where it matches (I need the opposite).
Example:
Table1
+----------+----------+----+---------------------+--------+---------------+-----+-------------+
|student_id|project_id|name|         project_name|approved|evaluation_type|grade|cohort_number|
+----------+----------+----+---------------------+--------+---------------+-----+-------------+
|        13|        18|Name|project/sd-03-bloc...|    true|       standard|  1.0|            3|
|        13|         7|Name|project/sd-03-bloc...|    true|       standard|  1.0|            3|
|        13|        27|Name|project/sd-03-bloc...|    true|       standard|  1.0|            3|
+----------+----------+----+---------------------+--------+---------------+-----+-------------+
Table2
+-------------+----------+
|cohort_number|project_id|
+-------------+----------+
|            3|        18|
|            3|        27|
|            4|        15|
|            3|         7|
|            3|        35|
+-------------+----------+
I want:
+----------+----------+----+---------------------+--------+---------------+-----+-------------+
|student_id|project_id|name|         project_name|approved|evaluation_type|grade|cohort_number|
+----------+----------+----+---------------------+--------+---------------+-----+-------------+
|        13|        18|Name|project/sd-03-bloc...|    true|       standard|  1.0|            3|
|        13|         7|Name|project/sd-03-bloc...|    true|       standard|  1.0|            3|
|        13|        27|Name|project/sd-03-bloc...|    true|       standard|  1.0|            3|
|        13|        35|Name|project/sd-03-bloc...|   false|       standard|    0|            3|
+----------+----------+----+---------------------+--------+---------------+-----+-------------+
Thanks in advance
If I followed you correctly, you can get all distinct (student_id, cohort_number, name) tuples from table1, and then bring all corresponding rows from table2. This basically gives you one row for each project that a student should have completed.
You can then bring table1 with a left join. "Missing" projects are identified by null values in columns project_name, approved, evaluation_type, grade.
select
    s.student_id,
    t2.project_id,
    s.name,
    t1.project_name,
    t1.approved,
    t1.evaluation_type,
    t1.grade,
    s.cohort_number
from (select distinct student_id, cohort_number, name from table1) s
inner join table2 t2
    on t2.cohort_number = s.cohort_number
left join table1 t1
    on t1.student_id = s.student_id
    and t1.project_id = t2.project_id
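For reference (not part of the original answer), a rough PySpark equivalent of that query, assuming table1 and table2 are DataFrames, might look like this sketch:
from pyspark.sql import functions as F

# distinct students with their cohort, then all projects expected for that cohort,
# then a left join back to table1 to reveal the missing (null) project rows
s  = table1.select("student_id", "cohort_number", "name").distinct().alias("s")
t1 = table1.alias("t1")
t2 = table2.alias("t2")

result = (
    s.join(t2, F.col("s.cohort_number") == F.col("t2.cohort_number"), "inner")
     .join(
         t1,
         (F.col("t1.student_id") == F.col("s.student_id"))
         & (F.col("t1.project_id") == F.col("t2.project_id")),
         "left",
     )
     .select(
         F.col("s.student_id"), F.col("t2.project_id"), F.col("s.name"),
         F.col("t1.project_name"), F.col("t1.approved"), F.col("t1.evaluation_type"),
         F.col("t1.grade"), F.col("s.cohort_number"),
     )
)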

Coalesce columns in pyspark dataframes

res = (
    to.join(tc, to.id1 == tc.id, how='left')
    .select(to.id1.alias('Employee_id'), tc.name.alias('Employee_Name'), to.dept.alias('Employee_Dept'))
)
res.show()
+-----------+-------------+-------------+
|Employee_id|Employee_Name|Employee_Dept|
+-----------+-------------+-------------+
|         12|         Prad|      Physics|
|         13|         null|         Chem|
|         14|         null|        Maths|
+-----------+-------------+-------------+
I want to replace the null with, say, NONAME. Please advise on the select syntax.
Try something like this:
from pyspark.sql.functions import coalesce, lit

df.withColumn("EmployeeNameNoNull", coalesce(df.Employee_Name, lit('NONAME'))).show()
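Alternatively, assuming the joined result above is res, you could fill the nulls directly with fillna (a minimal sketch):
# a dict passed to fillna only touches the named column
res.fillna({'Employee_Name': 'NONAME'}).show()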

Join two data frames, select all columns from one and some columns from the other

Let's say I have a spark data frame df1, with several columns (among which the column id) and data frame df2 with two columns, id and other.
Is there a way to replicate the following command:
sqlContext.sql("SELECT df1.*, df2.other FROM df1 JOIN df2 ON df1.id = df2.id")
by using only pyspark functions such as join(), select() and the like?
I have to implement this join in a function and I don't want to be forced to have sqlContext as a function parameter.
Asterisk (*) works with alias. Ex:
from pyspark.sql.functions import *
df1 = df1.alias('df1')
df2 = df2.alias('df2')
df1.join(df2, df1.id == df2.id).select('df1.*')
Not sure if the most efficient way, but this worked for me:
from pyspark.sql.functions import col
df1.alias('a').join(df2.alias('b'),col('b.id') == col('a.id')).select([col('a.'+xx) for xx in a.columns] + [col('b.other1'),col('b.other2')])
The trick is in:
[col('a.'+xx) for xx in a.columns] : all columns in a
[col('b.other1'),col('b.other2')] : some columns of b
Without using alias.
df1.join(df2, df1.id == df2.id).select(df1["*"],df2["other"])
Here is a solution that does not require a SQL context, but maintains the metadata of a DataFrame.
a = sc.parallelize([['a', 'foo'], ['b', 'hem'], ['c', 'haw']]).toDF(['a_id', 'extra'])
b = sc.parallelize([['p1', 'a'], ['p2', 'b'], ['p3', 'c']]).toDF(["other", "b_id"])
c = a.join(b, a.a_id == b.b_id)
Then, c.show() yields:
+----+-----+-----+----+
|a_id|extra|other|b_id|
+----+-----+-----+----+
| a| foo| p1| a|
| b| hem| p2| b|
| c| haw| p3| c|
+----+-----+-----+----+
I believe that this would be the easiest and most intuitive way:
final = (
    df1.alias('df1')
    .join(df2.alias('df2'), on=df1['id'] == df2['id'], how='inner')
    .select('df1.*', 'df2.other')
)
To drop the duplicate b_id column:
c = a.join(b, a.a_id == b.b_id).drop(b.b_id)
Here is a code snippet that does the inner join, selects columns from both dataframes, and aliases the duplicated column to a different name.
emp_df = spark.read.csv('Employees.csv', header=True)
dept_df = spark.read.csv('dept.csv', header =True)
emp_dept_df = emp_df.join(dept_df,'DeptID').select(emp_df['*'], dept_df['Name'].alias('DName'))
emp_df.show()
dept_df.show()
emp_dept_df.show()
Output for 'emp_df.show()':
+---+---------+------+------+
| ID| Name|Salary|DeptID|
+---+---------+------+------+
| 1| John| 20000| 1|
| 2| Rohit| 15000| 2|
| 3| Parth| 14600| 3|
| 4| Rishabh| 20500| 1|
| 5| Daisy| 34000| 2|
| 6| Annie| 23000| 1|
| 7| Sushmita| 50000| 3|
| 8| Kaivalya| 20000| 1|
| 9| Varun| 70000| 3|
| 10|Shambhavi| 21500| 2|
| 11| Johnson| 25500| 3|
| 12| Riya| 17000| 2|
| 13| Krish| 17000| 1|
| 14| Akanksha| 20000| 2|
| 15| Rutuja| 21000| 3|
+---+---------+------+------+
Output for 'dept_df.show()':
+------+----------+
|DeptID| Name|
+------+----------+
| 1| Sales|
| 2|Accounting|
| 3| Marketing|
+------+----------+
Join Output:
+---+---------+------+------+----------+
| ID| Name|Salary|DeptID| DName|
+---+---------+------+------+----------+
| 1| John| 20000| 1| Sales|
| 2| Rohit| 15000| 2|Accounting|
| 3| Parth| 14600| 3| Marketing|
| 4| Rishabh| 20500| 1| Sales|
| 5| Daisy| 34000| 2|Accounting|
| 6| Annie| 23000| 1| Sales|
| 7| Sushmita| 50000| 3| Marketing|
| 8| Kaivalya| 20000| 1| Sales|
| 9| Varun| 70000| 3| Marketing|
| 10|Shambhavi| 21500| 2|Accounting|
| 11| Johnson| 25500| 3| Marketing|
| 12| Riya| 17000| 2|Accounting|
| 13| Krish| 17000| 1| Sales|
| 14| Akanksha| 20000| 2|Accounting|
| 15| Rutuja| 21000| 3| Marketing|
+---+---------+------+------+----------+
I got an error: 'a not found' using the suggested code:
from pyspark.sql.functions import col
df1.alias('a').join(df2.alias('b'),col('b.id') == col('a.id')).select([col('a.'+xx) for xx in a.columns] + [col('b.other1'),col('b.other2')])
I changed a.columns to df1.columns and it worked out.
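For clarity, the corrected version of that snippet (other1/other2 are placeholder column names carried over from the answer above):
from pyspark.sql.functions import col

df1.alias('a').join(df2.alias('b'), col('b.id') == col('a.id')) \
   .select([col('a.' + xx) for xx in df1.columns] + [col('b.other1'), col('b.other2')])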
Here is a function to drop duplicate columns after joining. Check it out:
def dropDupeDfCols(df):
    newcols = []
    dupcols = []

    # keep the first occurrence of each column name, remember positions of duplicates
    for i in range(len(df.columns)):
        if df.columns[i] not in newcols:
            newcols.append(df.columns[i])
        else:
            dupcols.append(i)

    # temporarily rename columns to their positional index so duplicates can be dropped
    df = df.toDF(*[str(i) for i in range(len(df.columns))])
    for dupcol in dupcols:
        df = df.drop(str(dupcol))

    return df.toDF(*newcols)
I just dropped the columns I didn't need from df2 and joined:
sliced_df = df2.select(columns_of_interest)
df1.join(sliced_df, on=['id'], how='left')
Note: `id` should be included in `columns_of_interest`, though.
df1.join(df2, ['id']).drop(df2.id)
If you need multiple columns from another pyspark dataframe, you can use this.
Based on a single join condition:
x.join(y, x.id == y.id, "left").select(x["*"], y["col1"], y["col2"], y["col3"])
Based on multiple join conditions:
x.join(y, (x.id == y.id) & (x.no == y.no), "left").select(x["*"], y["col1"], y["col2"], y["col3"])
I very much like Xehron's answer above, and I suspect it's mechanically identical to my solution. This works in Databricks, and presumably works in a typical Spark environment (replacing keyword "spark" with "sqlcontext"):
df.createOrReplaceTempView('t1')   # temp table t1
df2.createOrReplaceTempView('t2')  # temp table t2

output = spark.sql("""
    select
        t1.*,
        t2.desired_field(s)
    from t1
    left (or inner) join t2
        on t1.id = t2.id
""")
You could just do the join and then select the wanted columns: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=dataframe%20join#pyspark.sql.DataFrame.join