Check repeated values in a DataFrame and implement an ignoreNulls parameter

I created a function to check whether there are repeated values in a DataFrame, based on a Seq of columns.
I want to implement an ignoreNulls Boolean parameter for the function.
If true, nulls are ignored and not grouped and counted, so for null values the "newColName" column will return false.
If false (the default), nulls are treated as a group, and the column returns true if there are multiple rows with null for the key I'm checking.
I don't know how I could do this.
Should I use an if or a case?
Is there some expression to ignore nulls in the partitionBy statement?
Could anyone help me?
Here's the current function:
def checkRepeatedKey(newColName: String, keys: Seq[String])(dataframe: DataFrame): DataFrame = {
  val repeatedCondition = $"sum" > 1
  val windowCondition = Window.partitionBy(keys.head, keys.tail: _*)

  dataframe
    .withColumn("count", lit(1))
    .withColumn("sum", sum("count").over(windowCondition))
    .withColumn(newColName, repeatedCondition)
    .drop("count", "sum")
}
Some test data
val testDF = Seq(
("1", Some("name-1")),
("2", Some("repeated-name")),
("3", Some("repeated-name")),
("4", Some("name-4")),
("5", None),
("6", None)
).toDF("name_key", "name")
Testing the function
val results = testDF.transform(checkRepeatedKey("has_repeated_name", Seq("name")))
Output (without the ignoreNulls implementation)
+--------+-------------+-----------------+
|name_key|         name|has_repeated_name|
+--------+-------------+-----------------+
|       1|       name-1|            false|
|       2|repeated-name|             true|
|       3|repeated-name|             true|
|       4|       name-4|            false|
|       5|         null|             true|
|       6|         null|             true|
+--------+-------------+-----------------+
And with the ignoreNulls=true implementation, the output should look like this:
// function header with the ignoreNulls parameter
def checkRepeatedKey(newColName: String, keys: Seq[String], ignoreNulls: Boolean)(dataframe: DataFrame): DataFrame
// using the function, passing true for ignoreNulls
testDF.transform(checkRepeatedKey("has_repeated_name", Seq("name"), true))
// expected output for the null rows
+--------+-------------+-----------------+
|       5|         null|            false|
|       6|         null|            false|
+--------+-------------+-----------------+

Firstly, you should properly define the logic for the case where only some of the columns in keys are null: should such a row count as a null value, or is a value considered null only when all the columns in keys are null?
For the sake of simplicity, let's assume that there is only one column in keys (you can easily extend the logic to multiple columns). You can just add a simple if to your checkRepeatedKey function:
def checkIfNullValue(keys: Seq[String]): Column = {
  // for the sake of simplicity, checking only the first key
  col(keys.head).isNull
}

def checkRepeatedKey(newColName: String, keys: Seq[String], ignoreNulls: Boolean)(dataframe: DataFrame): DataFrame = {
  ...
  ...
  val df = dataframe
    .withColumn("count", lit(1))
    .withColumn("sum", sum("count").over(windowCondition))
    .withColumn(newColName, repeatedCondition)
    .drop("count", "sum")

  // when ignoreNulls is set, rows with a null key are never flagged as repeated
  if (ignoreNulls)
    df.withColumn(newColName, when(checkIfNullValue(keys), lit(false)).otherwise(df(newColName)))
  else df
}
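Putting the pieces together, a complete sketch (following the window logic from the question, checking only the first key for nulls, and defaulting ignoreNulls to false) could look like this:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

def checkRepeatedKey(newColName: String, keys: Seq[String], ignoreNulls: Boolean = false)(dataframe: DataFrame): DataFrame = {
  val windowCondition = Window.partitionBy(keys.head, keys.tail: _*)
  val repeatedCondition = col("sum") > 1

  val flagged = dataframe
    .withColumn("count", lit(1))
    .withColumn("sum", sum("count").over(windowCondition))
    .withColumn(newColName, repeatedCondition)
    .drop("count", "sum")

  // ignoreNulls = true: rows whose first key is null always get false
  if (ignoreNulls)
    flagged.withColumn(newColName, when(col(keys.head).isNull, lit(false)).otherwise(col(newColName)))
  else
    flagged
}

// usage
// testDF.transform(checkRepeatedKey("has_repeated_name", Seq("name"), ignoreNulls = true)).show()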

Related

data frame parsing column scala

I have a problem parsing a DataFrame.
val result = df_app_clickstream.withColumn(
"attributes",
explode(expr(raw"transform(attributes, x -> str_to_map(regexp_replace(x, '{\\}',''), ' '))"))
).select(
col("userId"),
col("attributes").getField("campaign_id").alias("app_campaign_id"),
col("attributes").getField("channel_id").alias("app_channel_id")
)
result.show()
I have input like this:
-------------------------------------------------------------------------------
| userId | attributes |
-------------------------------------------------------------------------------
| f6e8252f-b5cc-48a4-b348-29d89ee4fa9e |{'campaign_id':082,'channel_id':'Chnl'}|
-------------------------------------------------------------------------------
and I need to get output like this:
--------------------------------------------------------------------
| userId | campaign_id | channel_id|
--------------------------------------------------------------------
| f6e8252f-b5cc-48a4-b348-29d89ee4fa9e | 082 | Facebook |
--------------------------------------------------------------------
but I get an error.
You can try the solution below:
import org.apache.spark.sql.functions._
val data = Seq(("f6e8252f-b5cc-48a4-b348-29d89ee4fa9e", """{'campaign_id':082, 'channel_id':'Chnl'}""")).toDF("user_id", "attributes")
val out_df = data.withColumn("splitted_col", split(regexp_replace(col("attributes"),"'|\\}|\\{", ""), ","))
.withColumn("campaign_id", split(element_at(col("splitted_col"), 1), ":")(1))
.withColumn("channel_id", split(element_at(col("splitted_col"), 2), ":")(1))
out_df.show(truncate = false)
+------------------------------------+----------------------------------------+-----------------------------------+-----------+----------+
|user_id |attributes |splitted_col |campaign_id|channel_id|
+------------------------------------+----------------------------------------+-----------------------------------+-----------+----------+
|f6e8252f-b5cc-48a4-b348-29d89ee4fa9e|{'campaign_id':082, 'channel_id':'Chnl'}|[campaign_id:082, channel_id:Chnl]|082 |Chnl |
+------------------------------------+----------------------------------------+-----------------------------------+-----------+----------+
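If you would rather stay close to the str_to_map idea from the original attempt, here is an alternative sketch (assuming attributes is a plain string column, as in the sample above, and that the values themselves never contain commas or colons):
import org.apache.spark.sql.functions._

val out_df2 = data
  .withColumn("attr_map",
    // strip braces, single quotes and spaces, then parse the "k:v,k:v" pairs into a map
    expr("""str_to_map(regexp_replace(attributes, "[{}' ]", ""), ",", ":")"""))
  .select(
    col("user_id"),
    col("attr_map").getItem("campaign_id").alias("app_campaign_id"),
    col("attr_map").getItem("channel_id").alias("app_channel_id")
  )

out_df2.show(truncate = false)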

Spark dataframe inner join without duplicate match

I want to join two DataFrames based on a certain condition in Spark Scala. However, the catch is that if a row in df1 matches any row in df2, it should not try to match the same row of df1 with any other row in df2. Below is sample data and the outcome I am trying to get.
DF1
--------------------------------
Emp_id | Emp_Name | Address_id
1 | ABC | 1
2 | DEF | 2
3 | PQR | 3
4 | XYZ | 1
DF2
-----------------------
Address_id | City
1 | City_1
1 | City_2
2 | City_3
REST | Some_City
Output DF
----------------------------------------
Emp_id | Emp_Name | Address_id | City
1 | ABC | 1 | City_1
2 | DEF | 2 | City_3
3 | PQR | 3 | Some_City
4 | XYZ | 1 | City_1
Note: REST is like a wildcard; any value can be equal to REST.
So in the above sample, emp_name "ABC" can match City_1, City_2 or Some_City. The output DF contains only City_1 because it finds that one first.
You seem to have custom logic for your join. Basically, I've come up with the UDF below.
Note that you may want to change the UDF's logic as per your requirement.
import spark.implicits._
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.functions.first

// dataframe 1
val df_1 = Seq(("1", "ABC", "1"), ("2", "DEF", "2"), ("3", "PQR", "3"), ("4", "XYZ", "1")).toDF("Emp_Id", "Emp_Name", "Address_Id")
// dataframe 2
val df_2 = Seq(("1", "City_1"), ("1", "City_2"), ("2", "City_3"), ("REST", "Some_City")).toDF("Address_Id", "City_Name")

// UDF logic
val join_udf = udf((a: String, b: String) => {
  (a, b) match {
    case ("1", "1")  => true
    case ("1", _)    => false
    case ("2", "2")  => true
    case ("2", _)    => false
    case (_, "REST") => true
    case (_, _)      => false
  }
})

val dataframe_join = df_1.join(df_2, join_udf(df_1("Address_Id"), df_2("Address_Id")), "inner").drop(df_2("Address_Id"))
  .orderBy($"City_Name")
  .groupBy($"Emp_Id", $"Emp_Name", $"Address_Id")
  .agg(first($"City_Name"))
  .orderBy($"Emp_Id")

dataframe_join.show(false)
Basically, after applying the UDF, what you get is all possible combinations of the matches.
After that, when you apply groupBy and use the first function in agg, you only get the filtered values you are looking for.
+------+--------+----------+-----------------------+
|Emp_Id|Emp_Name|Address_Id|first(City_Name, false)|
+------+--------+----------+-----------------------+
|1 |ABC |1 |City_1 |
|2 |DEF |2 |City_3 |
|3 |PQR |3 |Some_City |
|4 |XYZ |1 |City_1 |
+------+--------+----------+-----------------------+
Note that I've made use of Spark 2.3 and hope this helps!
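One caveat with this approach: first after an orderBy is not guaranteed to be deterministic, because the row order within each group can change after the shuffle introduced by groupBy. If any deterministic pick is acceptable, a sketch of an alternative (assuming the alphabetically smallest city name is fine, which happens to match the sample output) is to aggregate with min instead:
import org.apache.spark.sql.functions.min

val dataframe_join_det = df_1
  .join(df_2, join_udf(df_1("Address_Id"), df_2("Address_Id")), "inner")
  .drop(df_2("Address_Id"))
  .groupBy($"Emp_Id", $"Emp_Name", $"Address_Id")
  .agg(min($"City_Name").as("City_Name"))  // deterministic, no reliance on row order
  .orderBy($"Emp_Id")

dataframe_join_det.show(false)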
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object JoinTwoDataFrame extends App {

  val spark = SparkSession.builder()
    .master("local")
    .appName("DataFrame-example")
    .getOrCreate()

  import spark.implicits._

  val df1 = Seq(
    (1, "ABC", "1"),
    (2, "DEF", "2"),
    (3, "PQR", "3"),
    (4, "XYZ", "1")
  ).toDF("Emp_id", "Emp_Name", "Address_id")

  val df2 = Seq(
    ("1", "City_1"),
    ("1", "City_2"),
    ("2", "City_3"),
    ("REST", "Some_City")
  ).toDF("Address_id", "City")

  val restCity: Option[String] = Some(df2.filter('Address_id.equalTo("REST")).select('City).first()(0).toString)

  val res = df1.join(df2, df1.col("Address_id") === df2.col("Address_id"), "left_outer")
    .select(
      df1.col("Emp_id"),
      df1.col("Emp_Name"),
      df1.col("Address_id"),
      df2.col("City")
    )
    .withColumn("city2", when('City.isNotNull, 'City).otherwise(restCity.getOrElse("")))
    .drop("City")
    .withColumnRenamed("city2", "City")
    .orderBy("Address_id", "City")
    .groupBy("Emp_id", "Emp_Name", "Address_id")
    .agg(collect_list("City").alias("cityList"))
    .withColumn("City", 'cityList.getItem(0))
    .drop("cityList")
    .orderBy("Emp_id")

  res.show(false)

  // +------+--------+----------+---------+
  // |Emp_id|Emp_Name|Address_id|City     |
  // +------+--------+----------+---------+
  // |1     |ABC     |1         |City_1   |
  // |2     |DEF     |2         |City_3   |
  // |3     |PQR     |3         |Some_City|
  // |4     |XYZ     |1         |City_1   |
  // +------+--------+----------+---------+
}

How to convert dataframe value into Map[String,List[String]]?

I want to convert the DataFrame below into a Map[String,List[String]]. I changed the initial DataFrame to get the Name column in List format (using collect_list), but I am not able to convert it into a Map[String,List[String]].
DataFrame
+---------+-------+
|City | Name |
+---------+-------+
|Mumbai |[A,B] |
|Pune |[C,D] |
|Delhi |[A,D] |
+---------+-------+
Expected Output:
Map(Mumbai -> List(A,B), Pune -> List(C,D), Delhi-> List(A,D))
You can convert to an RDD and collect it as a Map as below:
val df = Seq(
  ("Mumbai", List("A", "B")),
  ("Pune", List("C", "D")),
  ("Delhi", List("A", "D"))
).toDF("city", "name")

val map: collection.Map[String, List[String]] = df.rdd
  .map(row => (row.getAs[String]("city"), row.getAs[Seq[String]]("name").toList))
  .collectAsMap()
Hope this helps!
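If you prefer to avoid dropping down to the RDD API, a sketch of an alternative (assuming the DataFrame is small enough to collect to the driver) is:
val cityMap: Map[String, List[String]] = df
  .collect()  // brings all rows to the driver
  .map(row => row.getAs[String]("city") -> row.getAs[Seq[String]]("name").toList)
  .toMap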

Spark: Multiple filter inside agg and concat not null values

I'm trying to concatenate the non-null values from a List column.
I know this can be done easily by using a UDF, but I would like to know how to handle this with multiple filter conditions inside the agg function.
What am I missing here?
val df = sc.parallelize(Seq(("foo", List(null,"bar",null)),
("bar", List("one","two",null)),
("rio", List("Ria","","Kevin")))).toDF("key", "value")
+---+-----------------+
|key| value|
+---+-----------------+
|foo|[null, bar, null]|
|bar| [one, two, null]|
|rio| [Ria, , Kevin]|
+---+-----------------+
df.groupBy("key")
.agg(concat_ws(",",first(when(($"value".isNotNull || $"value" =!= ""),$"value"))).as("RemovedNullSeq"))
.show(false)
+---+--------------+
|key|RemovedNullSeq|
+---+--------------+
|bar|one,two |
|rio|Ria,,Kevin |
|foo|bar |
+---+--------------+
I don't need that blank value in the second record.
Thanks
I'm not immediately sure if using aggregate functions is necessary based on the example provided.
If you're just trying to concatenate the values in an array then the following works:
val df = Seq(List(null,"abc", null),
List(null, null, null),
List(null, "def", "ghi", "kjl"),
List("mno", null, "pqr")).toDF("list")
df.withColumn("concat", concat_ws(",",$"list")).show(false)
+---------------------+-----------+
|list |concat |
+---------------------+-----------+
|[null, abc, null] |abc |
|[null, null, null] | |
|[null, def, ghi, kjl]|def,ghi,kjl|
|[mno, null, pqr] |mno,pqr |
+---------------------+-----------+
If there is a need to group first:
val df2 = Seq((123,List(null,"abc", null)),
(123,List(null,"def", "hij"))).toDF("key","list")
df2.show(false)
+---+-----------------+
|key|list |
+---+-----------------+
|123|[null, abc, null]|
|123|[null, def, hij] |
+---+-----------------+
You might think you could do something like
val grouped = df2.groupBy($"key").agg(collect_list($"list").as("collected"))
And then apply some functions to the array of arrays to obtain your concatenated result. However, I have been unable to find a way to do this without resorting to UDFs.
In this case, exploding before the grouping does the trick:
val grouped = df2.select($"key", explode($"list").as("listItem"))
  .groupBy($"key").agg(collect_list($"listItem").as("collected"))
  .withColumn("concat", concat_ws(",", $"collected"))

grouped.show(false)
+---+---------------+-----------+
|key|collected |concat |
+---+---------------+-----------+
|123|[abc, def, hij]|abc,def,hij|
+---+---------------+-----------+
Note however that there is no guarantee of the order in which the lists will be collected.
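If you are on Spark 2.4 or later, a sketch that avoids the explode (assuming the flatten function is available and relying on concat_ws skipping null elements) is:
import org.apache.spark.sql.functions._

df2.groupBy($"key")
  .agg(flatten(collect_list($"list")).as("collected"))  // array of arrays -> single array
  .withColumn("concat", concat_ws(",", $"collected"))   // null elements are skipped
  .show(false)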
Hope this helps
import org.apache.spark.sql.functions._
val df = sc.parallelize(Seq(("foo", List(null,"bar",null)),
("bar", List("one","two",null)),
("rio", List("Ria","","Kevin")))).toDF("key", "value")
val filtd = df.select($"key" as "key", explode($"value") as "val").where(length($"val") > 0)
val result = filtd.groupBy($"key").agg(collect_list("val"))
result.show(5)
You can add multiple conditions like this:
val filtd = df.select($"key" as "key", explode($"value") as "val").where(length($"val") > 0 && $"val".isNotNull)
Output
+---+-----------------+
|key|collect_list(val)|
+---+-----------------+
|bar| [one, two]|
|rio| [Ria, Kevin]|
|foo| [bar]|
+---+-----------------+

Including null values in an Apache Spark Join

I would like to include null values in an Apache Spark join. Spark doesn't include rows with null by default.
Here is the default Spark behavior.
val numbersDf = Seq(
("123"),
("456"),
(null),
("")
).toDF("numbers")
val lettersDf = Seq(
("123", "abc"),
("456", "def"),
(null, "zzz"),
("", "hhh")
).toDF("numbers", "letters")
val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))
Here is the output of joinedDf.show():
+-------+-------+
|numbers|letters|
+-------+-------+
| 123| abc|
| 456| def|
| | hhh|
+-------+-------+
This is the output I would like:
+-------+-------+
|numbers|letters|
+-------+-------+
| 123| abc|
| 456| def|
| | hhh|
| null| zzz|
+-------+-------+
Spark provides a special NULL safe equality operator:
numbersDf
.join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
.drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
| 123| abc|
| 456| def|
| null| zzz|
| | hhh|
+-------+-------+
Be careful not to use it with Spark 1.5 or earlier. Prior to Spark 1.6 it required a Cartesian product (SPARK-11111 - Fast null-safe join).
In Spark 2.3.0 or later you can use Column.eqNullSafe in PySpark:
numbers_df = sc.parallelize([
("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])
letters_df = sc.parallelize([
("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])
numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
| 456| 456| def|
| null| null| zzz|
| | | hhh|
| 123| 123| abc|
+-------+-------+-------+
and %<=>% in SparkR:
numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
numbers = c("123", "456", NA, ""),
letters = c("abc", "def", "zzz", "hhh")
))
head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc
With SQL (Spark 2.2.0+) you can use IS NOT DISTINCT FROM:
SELECT * FROM numbers JOIN letters
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
This can be used with the DataFrame API as well:
numbersDf.alias("numbers")
.join(lettersDf.alias("letters"))
.where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")
val numbers2 = numbersDf.withColumnRenamed("numbers","num1") //rename columns so that we can disambiguate them in the join
val letters2 = lettersDf.withColumnRenamed("numbers","num2")
val joinedDf = numbers2.join(letters2, $"num1" === $"num2" || ($"num1".isNull && $"num2".isNull) ,"outer")
joinedDf.select("num1","letters").withColumnRenamed("num1","numbers").show //rename the columns back to the original names
Based on K L's idea, you could use foldLeft to generate the join column expression:
def nullSafeJoin(rightDF: DataFrame, columns: Seq[String], joinType: String)(leftDF: DataFrame): DataFrame = {
  val colExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
  val fullExpr = columns.tail.foldLeft(colExpr) {
    (colExpr, p) => colExpr && leftDF(p) <=> rightDF(p)
  }
  leftDF.join(rightDF, fullExpr, joinType)
}
Then you can call this function like this:
aDF.transform(nullSafeJoin(bDF, columns, joinType))
Complementing the other answers: for PySpark < 2.3.0 you have neither Column.eqNullSafe nor IS NOT DISTINCT FROM.
You can still build the <=> operator with a SQL expression and include it in the join, as long as you define aliases for the join queries:
from pyspark.sql.types import StringType
import pyspark.sql.functions as F
numbers_df = spark.createDataFrame (["123","456",None,""], StringType()).toDF("numbers")
letters_df = spark.createDataFrame ([("123", "abc"),("456", "def"),(None, "zzz"),("", "hhh") ]).\
toDF("numbers", "letters")
joined_df = numbers_df.alias("numbers").join(letters_df.alias("letters"),
F.expr('numbers.numbers <=> letters.numbers')).\
select('letters.*')
joined_df.show()
+-------+-------+
|numbers|letters|
+-------+-------+
| 456| def|
| null| zzz|
| | hhh|
| 123| abc|
+-------+-------+
Based on timothyzhang's idea one can further improve it by removing duplicate columns:
def dropDuplicateColumns(df: DataFrame, rightDf: DataFrame, cols: Seq[String]): DataFrame =
  cols.foldLeft(df)((df, c) => df.drop(rightDf(c)))

def joinTablesWithSafeNulls(rightDF: DataFrame, leftDF: DataFrame, columns: Seq[String], joinType: String): DataFrame = {
  val colExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
  val fullExpr = columns.tail.foldLeft(colExpr) {
    (colExpr, p) => colExpr && leftDF(p) <=> rightDF(p)
  }
  val finalDF = leftDF.join(rightDF, fullExpr, joinType)
  val filteredDF = dropDuplicateColumns(finalDF, rightDF, columns)
  filteredDF
}
Try the following method to include the null rows in the result of the JOIN operator:
def nullSafeJoin(leftDF: DataFrame, rightDF: DataFrame, columns: Seq[String], joinType: String): DataFrame = {
  var columnsExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
  columns.drop(1).foreach(column => {
    columnsExpr = columnsExpr && (leftDF(column) <=> rightDF(column))
  })

  var joinedDF: DataFrame = leftDF.join(rightDF, columnsExpr, joinType)
  columns.foreach(column => {
    joinedDF = joinedDF.drop(leftDF(column))
  })

  joinedDF
}
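A usage example with the numbersDf and lettersDf from the question (a sketch; the joined result keeps the null and empty-string matches and drops the left-hand join column):
val joinedNullSafe = nullSafeJoin(numbersDf, lettersDf, Seq("numbers"), "inner")
joinedNullSafe.show()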