How to filter the elements in each row of an ArrayType(StringType) column in a Spark DataFrame?

I want to retain, in each row of an ArrayType column, only the elements that appear in a given list. For instance, if my Spark DataFrame is:
val df = sc.parallelize(Seq(Seq("A", "B"),Seq("C","X"),Seq("A", "B", "C", "D", "Z"))).toDF("column")
+---------------+
| column|
+---------------+
| [A, B]|
| [C, X]|
|[A, B, C, D, Z]|
+---------------+
and I have dictionary_list = Seq("A", "Z", "X", "Y"), I want the rows of my output DataFrame to be:
val outp_df
+---------------+
| column|
+---------------+
| [A]|
| [X]|
| [A, Z]|
+---------------+
I tried array_contains, arrays_overlap, etc., but the output I'm getting looks like this:
val result = df.where(array_contains(col("column"), "A"))
+---------------+
| column|
+---------------+
| [A, B]|
|[A, B, C, D, Z]|
+---------------+
The rows are getting filtered, but I want to filter inside the list/row itself. Any way I can do this?

The result you're getting makes perfect sense: you are ONLY selecting rows whose column array contains "A", which is the first row and the last row. What you need is a function (NOT a SQL filter on rows) that receives an input sequence of strings and returns a sequence containing only the values that exist in your dictionary. You can use UDFs like this:
// the dictionary of allowed values
val dictionaryList = Seq("A", "Z", "X", "Y")

// rename this as you wish
val myCustomFilter: Seq[String] => Seq[String] =
  input => input.filter(dictionaryList.contains)

// register it with your Spark session (rename "custom_filter" as you wish)
spark.udf.register("custom_filter", myCustomFilter)
Then you need a select operator, not a filter operator: a filter operator only keeps the rows that satisfy the predicate, which is not what you want here.
Spark shell result:
scala> df.select(expr("custom_filter(column)")).show
+---------------------+
|custom_filter(column)|
+---------------------+
|                  [A]|
|                  [X]|
|               [A, Z]|
+---------------------+
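If you prefer not to register a SQL name, a minimal variation (a sketch, assuming the same dictionaryList and df as above; the variable name customFilterUdf is illustrative) is to wrap the function with udf and call it directly from the DataFrame API:
import org.apache.spark.sql.functions.{col, udf}
// same filtering logic, exposed as a typed UDF without spark.udf.register
val customFilterUdf = udf(myCustomFilter)
df.select(customFilterUdf(col("column")).as("column")).show()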

Use the built-in filter higher-order function:
val df1 = df.withColumn(
  "column",
  expr("filter(column, x -> x in ('A', 'Z', 'X', 'Y'))")
)
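If you are on Spark 2.4 or later, another option (a sketch, assuming the same df as in the question; allowed and df2 are illustrative names) is array_intersect with a literal array of the dictionary values, which keeps only the elements that also appear in that list:
import org.apache.spark.sql.functions.{array, array_intersect, col, lit}
// build a literal array column from the dictionary values
val allowed = array(Seq("A", "Z", "X", "Y").map(lit): _*)
val df2 = df.withColumn("column", array_intersect(col("column"), allowed))
df2.show()
//output
+------+
|column|
+------+
|   [A]|
|   [X]|
|[A, Z]|
+------+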

Related

SQL Presto Aggregate Table column values with another column values

Hi, I want to write a Presto SQL query for a data table (say user_data) that looks like:
user | target | result
-----------------------------
1 | b | {A: 1}
2 | a | {C: 2}
1 | c | {A: 2, B: 3}
2 | d | {A: 1}
1 | d | {C: 4}
With this data table, I would like to generate the following two outputs.
Output 1: Count the number of unique targets for each result for each user. For example, user 1 has 2 targets (b and c) with result A, and one target each for result B (target c) and result C (target d).
user | result
-------------------
1 | {A: 2, B:1, C:1}
2 | {A: 1, C: 1}
Output 2: Aggregate the last column based on the targets of the user.
user | result
-------------------
1 | {A:[b,c], B:[c], C:[d]}
2 | {A:[d], C:[a]}
Or, even better, can we make one table that has both columns?
user | result 1 | result 2
--------------------------------------------------
1 | {A:[b,c], B:[c], C:[d]} | {A: 2, B:1, C:1}
2 | {A:[d], C:[a]} | {A: 1, C: 1}
Can anyone help me with this? I would really appreciate it.
I'm pretty new to SQL, so I didn't even know how to start.
This can be achieved with map aggregate functions. Assuming that result is originally a map, you can flatten it with unnest, then group by user and use the multimap_agg and histogram functions:
-- sample data
WITH dataset(user, target, result) AS (
    VALUES (1, 'b', map(array['A'], array[1])),
           (2, 'a', map(array['C'], array[2])),
           (1, 'c', map(array['A', 'B'], array[2, 3])),
           (2, 'd', map(array['A'], array[1])),
           (1, 'd', map(array['C'], array[4]))
)
-- query
select user, multimap_agg(k, target), histogram(k)
from dataset,
     unnest(result) as t(k, v)
group by user;
Output:
user | _col1 | _col2
--------------------------------------------------
2 | {A=[d], C=[a]} | {A=1, C=1}
1 | {A=[b, c], B=[c], C=[d]} | {A=2, B=1, C=1}

Aggregate columns containing dictionary in SQL presto

Hi, I want to write a Presto SQL query for a data table (say user_data) that looks like:
user | target | result
-----------------------------
1 | b | {A: 1}
2 | a | {C: 2}
1 | c | {A: 2, B: 3}
2 | d | {A: 1}
1 | d | {C: 4}
With this data table, I would like to generate the following two outputs.
Output 1: Aggregate the values of the {key:value} dictionary based on the user and regardless of target
user | result
-------------------
1 | {A:3, B:3, C:4}
2 | {A:1, C:2}
Output 2: Aggregate the last column based on the targets of the user.
user | result
-------------------
1 | {A:[b,c], B:[c], C:[d]}
2 | {A:[d], C:[a]}
Can anyone help me with it? I would really appreciate it.
The second one can be easily achieved with multimap_agg (add transform_values with array_distinct to remove duplicates if needed):
-- sample data
WITH dataset(user, target, result) AS (
    VALUES (1, 'b', map(array['A'], array[1])),
           (2, 'a', map(array['C'], array[2])),
           (1, 'c', map(array['A', 'B'], array[1, 2]))
)
-- query
select user, multimap_agg(k, target)
from dataset,
     unnest(result) as t(k, v)
group by user;
Output:
user | _col1
------------------------
1 | {A=[b, c], B=[c]}
2 | {C=[a]}
As for the first one, you can look into using map_union_sum if it is available in your version of Presto, or use some magic with unnest and transform_values:
-- query
select user,
       transform_values(
           multimap_agg(k, v),
           (k, v) -> reduce(v, 0, (s, x) -> s + x, s -> s) -- or array_sum if available
       )
from dataset,
     unnest(result) as t(k, v)
group by user;
Output:
user | _col1
-------------------
1 | {A=2, B=2}
2 | {C=2}

Scala - Apply a function to each value in a dataframe column

I have a function that takes a LocalDate (it could take any other type) and returns a DataFrame, e.g.:
def genDataFrame(refDate: LocalDate): DataFrame = {
  Seq(
    (refDate, refDate.minusDays(7)),
    (refDate.plusDays(3), refDate.plusDays(7))
  ).toDF("col_A", "col_B")
}
genDataFrame(LocalDate.parse("2021-07-02")) output:
+----------+----------+
| col_A| col_B|
+----------+----------+
|2021-07-02|2021-06-25|
|2021-07-05|2021-07-09|
+----------+----------+
I want to apply this function to each element in a DataFrame column (which, obviously, contains LocalDate values), such as:
val myDate = LocalDate.parse("2021-07-02")
val df = Seq(
  (myDate),
  (myDate.plusDays(1)),
  (myDate.plusDays(3))
).toDF("date")
df:
+----------+
| date|
+----------+
|2021-07-02|
|2021-07-03|
|2021-07-05|
+----------+
Required output:
+----------+----------+
| col_A| col_B|
+----------+----------+
|2021-07-02|2021-06-25|
|2021-07-05|2021-07-09|
|2021-07-03|2021-06-26|
|2021-07-06|2021-07-10|
|2021-07-05|2021-06-28|
|2021-07-08|2021-07-12|
+----------+----------+
How could I achieve that (without using collect)?
You can always convert your DataFrame to a lazily evaluated view and use Spark SQL:
val df_2 = df.map(x => x.getDate(0).toLocalDate()).withColumnRenamed("value", "col_A")
  .withColumn("col_B", col("col_A"))
df_2.createOrReplaceTempView("test")
With that you can create a view like this one:
+----------+----------+
| col_A| col_B|
+----------+----------+
|2021-07-02|2021-07-02|
|2021-07-03|2021-07-03|
|2021-07-05|2021-07-05|
+----------+----------+
And then you can use SQL, which I find more intuitive:
spark.sql(s"""SELECT col_A, date_add(col_B, -7) as col_B FROM test
              UNION
              SELECT date_add(col_A, 3), date_add(col_B, 7) as col_B FROM test""")
  .show()
This gives your expected output as a DataFrame:
+----------+----------+
| col_A| col_B|
+----------+----------+
|2021-07-02|2021-06-25|
|2021-07-03|2021-06-26|
|2021-07-05|2021-06-28|
|2021-07-05|2021-07-09|
|2021-07-06|2021-07-10|
|2021-07-08|2021-07-12|
+----------+----------+
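For reference, a rough DataFrame-API equivalent of that SQL (a sketch, assuming the same single-column df of dates shown above; base and result are illustrative names, and DataFrame union behaves like UNION ALL, which makes no difference for this data):
import org.apache.spark.sql.functions.{col, date_add}
// mirror of the temp view: col_A and col_B both start out as the original date
val base = df.select(col("date").as("col_A"), col("date").as("col_B"))
val result = base.select(col("col_A"), date_add(col("col_B"), -7).as("col_B"))
  .union(base.select(date_add(col("col_A"), 3).as("col_A"), date_add(col("col_B"), 7).as("col_B")))
result.show()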

check first dataframe value startswith any of the second dataframe value

I have two PySpark DataFrames as follows:
df1 = spark.createDataFrame(
    ["yes", "no", "yes23", "no3", "35yes", """41no["maybe"]"""],
    "string"
).toDF("location")

df2 = spark.createDataFrame(
    ["yes", "no"],
    "string"
).toDF("location")
I want to check if the values in the location column of df1 start with the values in the location column of df2, and vice versa.
Something like :
df1.select("location").startsWith(df2.location)
Following is the output I am expecting here:
+-------------+
| location|
+-------------+
| yes|
| no|
| yes23|
| no3|
+-------------+
Using Spark SQL looks the easiest to me:
df1.createOrReplaceTempView('df1')
df2.createOrReplaceTempView('df2')

joined = spark.sql("""
    select df1.*
    from df1
    join df2
    on df1.location rlike '^' || df2.location
""")

Creating a column in a dataframe based on substring of another column, scala

I have a column MODEL_SCORE in a dataframe (d1), which has values like nulll7880.
I want to create another column MODEL_SCORE1 in the dataframe which is a substring of MODEL_SCORE.
I am trying this. It's creating the column, but not giving the expected result:
val x=d1.withColumn("MODEL_SCORE1", substring(col("MODEL_SCORE"),0,4))
val y=d1.select(col("MODEL_SCORE"), substring(col("MODEL_SCORE"),0,4).as("MODEL_SCORE1"))
One way to do this is to define a UDF that will slice your column's string value as per your need. Sample code could be as follows:
val df = sc.parallelize(List((1,"nulll7880"),(2,"null9000"))).toDF("id","col1")
df.show
//output
+---+---------+
| id| col1|
+---+---------+
| 1|nulll7880|
| 2| null9000|
+---+---------+
def splitString: (String => String) = { str => str.slice(0, 4) }
val splitStringUDF = org.apache.spark.sql.functions.udf(splitString)
df.withColumn("col2",splitStringUDF(df("col1"))).show
//output
+---+---------+----+
| id| col1|col2|
+---+---------+----+
| 1|nulll7880|null|
| 2| null9000|null|
+---+---------+----+
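If the intent is actually to extract the trailing digits rather than the first four characters (an assumption, since the question does not spell out the expected result), the built-in regexp_extract avoids the UDF entirely:
import org.apache.spark.sql.functions.regexp_extract
// pull out the trailing digits, e.g. "nulll7880" -> "7880"
df.withColumn("col2", regexp_extract(df("col1"), "(\\d+)$", 1)).show()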