PySpark: DataFrame with nested fields to relational table

I have a PySpark DataFrame of students with the following schema:
|-- Id: string
|-- School: array
| |-- element: struct
| | |-- Subject: string
| | |-- Classes: string
| | |-- Score: array
| | | |-- element: struct
| | | | |-- ScoreID: string
| | | | |-- Value: string
I want to extract a few fields from the DataFrame and normalize the data so that I can load it into a database. The relational schema I expect consists of the fields Id, School, Subject, ScoreID, Value. How can I do this efficiently?

Explode the nested arrays to flatten the data, then select all the required columns.
Example:
df.show(10, False)
#+---+--------------------------+
#|Id |School                    |
#+---+--------------------------+
#|1  |[[b, [[A, 3], [B, 4]], a]]|
#+---+--------------------------+
df.printSchema()
#root
# |-- Id: string (nullable = true)
# |-- School: array (nullable = true)
# | |-- element: struct (containsNull = true)
# | | |-- Classes: string (nullable = true)
# | | |-- Score: array (nullable = true)
# | | | |-- element: struct (containsNull = true)
# | | | | |-- ScoreID: string (nullable = true)
# | | | | |-- Value: string (nullable = true)
# | | |-- Subject: string (nullable = true)
df.selectExpr("Id","explode(School)").\
selectExpr("Id","col.*","explode(col.Score)").\
selectExpr("Id","Classes","Subject","col.*").\
show()
#+---+-------+-------+-------+-----+
#| Id|Classes|Subject|ScoreID|Value|
#+---+-------+-------+-------+-----+
#| 1| b| a| A| 3|
#| 1| b| a| B| 4|
#+---+-------+-------+-------+-----+
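For reference, the same flattening can be written with the DataFrame function API; a sketch, not part of the original answer, using the column names from the schema above:
from pyspark.sql import functions as F
flat = (df
        .select("Id", F.explode("School").alias("school_el"))
        .select("Id", "school_el.Classes", "school_el.Subject",
                F.explode("school_el.Score").alias("score_el"))
        .select("Id", "Classes", "Subject", "score_el.ScoreID", "score_el.Value"))
flat.show()
Each select contains at most one explode, since Spark allows only one generator per select clause.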

Related

How to extract embeddings generated from sparknlp WordEmbeddingsModel to feed an RNN model using keras and tensorflow

I have a text classification problem.
I'm particularly interested in this embedding model in sparknlp because my dataset is from Wikipedia in the 'sq' (Albanian) language and I need to convert its sentences into embeddings.
I do this with WordEmbeddingsModel; however, once the embeddings are generated I don't know how to prepare them as input for an RNN model using keras and tensorflow.
My dataset has two columns, 'text' and 'label'. So far I have been able to do the following steps:
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, WordEmbeddingsModel
from pyspark.ml import Pipeline
# start spark session
spark = sparknlp.start(gpu=True)
# convert train df into spark df
spark_train_df = spark.createDataFrame(train)
+--------------------+-----+
| text|label|
+--------------------+-----+
|Joy Adowaa Buolam...| 0|
|Ajo themeloi "Alg...| 1|
|Buolamwini lindi ...| 1|
|Kur ishte 9 vjeç,...| 0|
|Si një studente u...| 1|
+--------------------+-----+
# define sparknlp pipeline
document = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")
tokenizer = Tokenizer()\
    .setInputCols(["document"])\
    .setOutputCol("token")
embeddings = WordEmbeddingsModel\
    .pretrained("w2v_cc_300d", "sq")\
    .setInputCols(["document", "token"])\
    .setOutputCol("embeddings")
pipeline = Pipeline(stages=[document, tokenizer, embeddings])
# fit the pipeline to the training data
model = pipeline.fit(spark_train_df)
# apply the pipeline to the training data
result = model.transform(spark_train_df)
result.show()
+--------------------+-----+--------------------+--------------------+--------------------+
| text|label| document| token| embeddings|
+--------------------+-----+--------------------+--------------------+--------------------+
|Joy Adowaa Buolam...| 0|[{document, 0, 13...|[{token, 0, 2, Jo...|[{word_embeddings...|
|Ajo themeloi "Alg...| 1|[{document, 0, 13...|[{token, 0, 2, Aj...|[{word_embeddings...|
|Buolamwini lindi ...| 1|[{document, 0, 94...|[{token, 0, 9, Bu...|[{word_embeddings...|
|Kur ishte 9 vjeç,...| 0|[{document, 0, 12...|[{token, 0, 2, Ku...|[{word_embeddings...|
|Si një studente u...| 1|[{document, 0, 15...|[{token, 0, 1, Si...|[{word_embeddings...|
|Buolamwini diplom...| 1|[{document, 0, 11...|[{token, 0, 9, Bu...|[{word_embeddings...|
+--------------------+-----+--------------------+--------------------+--------------------+
The schema of result is:
result.printSchema()
root
|-- text: string (nullable = true)
|-- label: long (nullable = true)
|-- document: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- annotatorType: string (nullable = true)
| | |-- begin: integer (nullable = false)
| | |-- end: integer (nullable = false)
| | |-- result: string (nullable = true)
| | |-- metadata: map (nullable = true)
| | | |-- key: string
| | | |-- value: string (valueContainsNull = true)
| | |-- embeddings: array (nullable = true)
| | | |-- element: float (containsNull = false)
|-- token: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- annotatorType: string (nullable = true)
| | |-- begin: integer (nullable = false)
| | |-- end: integer (nullable = false)
| | |-- result: string (nullable = true)
| | |-- metadata: map (nullable = true)
| | | |-- key: string
| | | |-- value: string (valueContainsNull = true)
| | |-- embeddings: array (nullable = true)
| | | |-- element: float (containsNull = false)
|-- embeddings: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- annotatorType: string (nullable = true)
| | |-- begin: integer (nullable = false)
| | |-- end: integer (nullable = false)
| | |-- result: string (nullable = true)
| | |-- metadata: map (nullable = true)
| | | |-- key: string
| | | |-- value: string (valueContainsNull = true)
| | |-- embeddings: array (nullable = true)
| | | |-- element: float (containsNull = false)
The output I receive from result.schema["embeddings"].dataType is:
ArrayType(StructType([StructField('annotatorType', StringType(), True), StructField('begin', IntegerType(), False), StructField('end', IntegerType(), False), StructField('result', StringType(), True), StructField('metadata', MapType(StringType(), StringType(), True), True), StructField('embeddings', ArrayType(FloatType(), False), True)]), True)
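One way to get from this nested annotation schema to a plain matrix for keras is to keep only the per-token float vectors and zero-pad them to a common length. Below is a minimal sketch (not from the original post), assuming the dataset is small enough to collect to the driver and that the w2v_cc_300d vectors are 300-dimensional:
from pyspark.sql import functions as F
import numpy as np
# keep only the float vectors: one array<float> per token (Spark 2.4+ higher-order function)
vectors_df = result.select(
    "label",
    F.expr("transform(embeddings, a -> a.embeddings)").alias("vectors")
)
rows = vectors_df.collect()                      # small datasets only
max_len = max(len(r["vectors"]) for r in rows)   # longest sentence, in tokens
dim = 300                                        # assumed dimensionality of w2v_cc_300d
X = np.zeros((len(rows), max_len, dim), dtype="float32")
y = np.array([r["label"] for r in rows])
for i, r in enumerate(rows):
    for j, vec in enumerate(r["vectors"]):
        X[i, j, :] = vec                         # unused positions stay zero-padded
# X has shape (num_sentences, max_len, dim) and can be fed to a keras RNN layer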

How to flatten multiple structs and get the keys as one of the fields

I have this struct schema
|-- teams: struct (nullable = true)
| |-- blue: struct (nullable = true)
| | |-- has_won: boolean (nullable = true)
| | |-- rounds_lost: long (nullable = true)
| | |-- rounds_won: long (nullable = true)
| |-- red: struct (nullable = true)
| | |-- has_won: boolean (nullable = true)
| | |-- rounds_lost: long (nullable = true)
| | |-- rounds_won: long (nullable = true)
which I want to turn into this:
+----+-------+-----------+----------+
|team|has_won|rounds_lost|rounds_won|
+----+-------+-----------+----------+
|blue|      1|         13|        10|
| red|      0|         10|        13|
+----+-------+-----------+----------+
I already tried selectExpr(inline(array('teams.*'))), but I don't know how to get the team name as one of the fields. Thank you!
You can start by un-nesting the struct using * and then use stack to "un-pivot" the dataframe. Finally, un-nest the stats.
from pyspark.sql import Row
rows = [Row(teams=Row(blue=Row(has_won=1, rounds_lost=13, rounds_won=10),
                      red=Row(has_won=0, rounds_lost=10, rounds_won=13)))]
df = spark.createDataFrame(rows)
(df.select("teams.*")
   .selectExpr("stack(2, 'blue', blue, 'red', red) as (team, stats)")
   .selectExpr("team", "stats.*")
).show()
"""
+----+-------+-----------+----------+
|team|has_won|rounds_lost|rounds_won|
+----+-------+-----------+----------+
|blue| 1| 13| 10|
| red| 0| 10| 13|
+----+-------+-----------+----------+
"""

Update array of structs - Spark

I have the following spark delta table structure,
+---+------------------------------------------------------+
|id |addresses |
+---+------------------------------------------------------+
|1 |[{"Address":"ABC", "Street": "XXX"}, {"Address":"XYZ", "Street": "YYY"}]|
+---+------------------------------------------------------+
Here the addresses column is an array of structs.
I need to update the Address inside each array element from that element's Street attribute value, leaving the rest of the element unchanged.
So "ABC" should be updated to "XXX" and "XYZ" should be updated to "YYY".
You can assume I have many more attributes in the struct, like street, zipcode etc., so I want to leave those untouched and just update the value of Address from the Street attribute.
How can I do this in Spark, Databricks or SQL?
Schema,
|-- id: string (nullable = true)
|-- addresses: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- Address: string (nullable = true)
| | |-- Street: string (nullable = true)
Cheers!
Please check the code below.
scala> vdf.show(false)
+---+--------------+
|id |addresses |
+---+--------------+
|1 |[[ABC], [XYZ]]|
+---+--------------+
scala> vdf.printSchema
root
|-- id: integer (nullable = false)
|-- addresses: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- Address: string (nullable = true)
scala> val new_address = array(struct(lit("AAA").as("Address")))
scala> val except_first = array_except($"addresses",array($"addresses"(0)))
scala> val addresses = array_union(new_address,except_first).as("addresses")
scala> vdf.select($"id",addresses).select($"id",$"addresses",to_json($"addresses").as("json_addresses")).show(false)
+---+--------------+-------------------------------------+
|id |addresses |json_addresses |
+---+--------------+-------------------------------------+
|1 |[[AAA], [XYZ]]|[{"Address":"AAA"},{"Address":"XYZ"}]|
+---+--------------+-------------------------------------+
Updated
scala> vdf.withColumn("addresses",explode($"addresses")).groupBy($"id").agg(collect_list(struct($"addresses.Street".as("Address"),$"addresses.Street")).as("addresses")).withColumn("json_data",to_json($"addresses")).show(false)
+---+------------------------+-------------------------------------------------------------------+
|id |addresses |json_data |
+---+------------------------+-------------------------------------------------------------------+
|1 |[[XXX, XXX], [YYY, YYY]]|[{"Address":"XXX","Street":"XXX"},{"Address":"YYY","Street":"YYY"}]|
+---+------------------------+-------------------------------------------------------------------+
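On Spark 3.1+ the same update can also be sketched with transform and withField, which rewrites only the Address field and leaves every other attribute of each struct untouched. This is a hedged alternative, not part of the original answer, assuming the table is loaded into a DataFrame named df:
from pyspark.sql import functions as F
updated = df.withColumn(
    "addresses",
    # replace each element's Address with that element's Street; other fields are kept
    F.transform("addresses", lambda a: a.withField("Address", a.getField("Street")))
)
updated.show(truncate=False)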

Scala Spark Dataframe: how to explode an array of Int and array of struct at the same time

I'm new to Scala/Spark and I'm trying to explode a dataframe that has an array column and an array-of-struct column so that I end up with no arrays and no structs.
Here's an example
case class Area(start_time: String, end_time: String, area: String)
val df = Seq((
  "1", Seq(4, 5, 6),
  Seq(Area("07:00", "07:30", "70"), Area("08:00", "08:30", "80"), Area("09:00", "09:30", "90"))
)).toDF("id", "before", "after")
df.printSchema
df.show
df has the following schema
root
|-- id: string (nullable = true)
|-- before: array (nullable = true)
| |-- element: integer (containsNull = false)
|-- after: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- start_time: string (nullable = true)
| | |-- end_time: string (nullable = true)
| | |-- area: string (nullable = true)
and the data looks like
+---+---------+--------------------+
| id| before| after|
+---+---------+--------------------+
| 1|[4, 5, 6]|[[07:00, 07:30, 7...|
+---+---------+--------------------+
How do I explode the dataframe so I get the following schema
|-- id: string (nullable = true)
|-- before: integer (containsNull = false)
|-- after_start_time: string (nullable = true)
|-- after_end_time: string (nullable = true)
|-- after_area: string (nullable = true)
The resulting data should have 3 rows and 5 columns
+---+------+----------------+--------------+----------+
| id|before|after_start_time|after_end_time|after_area|
+---+------+----------------+--------------+----------+
|  1|     4|           07:00|         07:30|        70|
|  1|     5|           08:00|         08:30|        80|
|  1|     6|           09:00|         09:30|        90|
+---+------+----------------+--------------+----------+
I'm using Spark 2.3.0 (where arrays_zip is not available), and the only solutions I can find are either for exploding two arrays of strings or one array of structs.
Use arrays_zip to combine the two arrays, then explode to explode the zipped array column, and use as to rename the required columns.
Since arrays_zip is not available in Spark 2.3, you can create a UDF that performs the same operation:
val arrays_zip = udf((before: Seq[Int], after: Seq[Area]) => before.zip(after))
Execution time with the built-in (Spark 2.4.2) arrays_zip - Time taken: 1146 ms
Execution time with the arrays_zip UDF - Time taken: 1165 ms
Check the code below.
scala> df.show(false)
+---+---------+------------------------------------------------------------+
|id |before |after |
+---+---------+------------------------------------------------------------+
|1 |[4, 5, 6]|[[07:00, 07:30, 70], [08:00, 08:30, 80], [09:00, 09:30, 90]]|
+---+---------+------------------------------------------------------------+
scala>
df
  .select(
    $"id",
    explode(
      arrays_zip($"before", $"after")
        .cast("array<struct<before:int,after:struct<start_time:string,end_time:string,area:string>>>")
    ).as("before_after")
  )
  .select(
    $"id",
    $"before_after.before".as("before"),
    $"before_after.after.start_time".as("after_start_time"),
    $"before_after.after.end_time".as("after_end_time"),
    $"before_after.after.area"
  )
  .printSchema
root
|-- id: string (nullable = true)
|-- before: integer (nullable = true)
|-- after_start_time: string (nullable = true)
|-- after_end_time: string (nullable = true)
|-- area: string (nullable = true)
Output
scala>
df
  .select(
    $"id",
    explode(
      arrays_zip($"before", $"after")
        .cast("array<struct<before:int,after:struct<start_time:string,end_time:string,area:string>>>")
    ).as("before_after")
  )
  .select(
    $"id",
    $"before_after.before".as("before"),
    $"before_after.after.start_time".as("after_start_time"),
    $"before_after.after.end_time".as("after_end_time"),
    $"before_after.after.area"
  )
  .show(false)
+---+------+----------------+--------------+----+
|id |before|after_start_time|after_end_time|area|
+---+------+----------------+--------------+----+
|1 |4 |07:00 |07:30 |70 |
|1 |5 |08:00 |08:30 |80 |
|1 |6 |09:00 |09:30 |90 |
+---+------+----------------+--------------+----+
To handle a more complex struct you can:
Declare two beans, Area (input) and Area2 (output)
Map each row to an Area2 bean
import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema
import scala.collection.mutable
object ExplodeTwoArrays {
  def main(args: Array[String]): Unit = {
    val spark = Constant.getSparkSess
    import spark.implicits._
    val df = Seq((
      "1", Seq(4, 5, 6),
      Seq(Area("07:00", "07:30", "70"), Area("08:00", "08:30", "80"), Area("09:00", "09:30", "90"))
    )).toDF("id", "before", "after")
    val outDf = df.map(row => {
      val id = row.getString(0)
      val beforeArray: Seq[Int] = row.getSeq[Int](1)
      val afterArray: mutable.WrappedArray[Area2] =
        row.getAs[mutable.WrappedArray[GenericRowWithSchema]](2) // map the Array(Struct) to something compatible
          .zipWithIndex                                          // required to iterate with indices
          .map { case (element, i) =>
            Area2(element.getAs[String]("start_time"),
              element.getAs[String]("end_time"),
              element.getAs[String]("area"),
              beforeArray(i))
          }
      (id, afterArray) // return a row of (id, Array(Area2(...)))
    }).toDF("id", "after")
    outDf.printSchema()
    outDf.show()
  }
}
case class Area(start_time: String, end_time: String, area: String)
case class Area2(start_time: String, end_time: String, area: String, before: Int)
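For readers on PySpark with Spark 2.4+, where arrays_zip is built in, the same zip-and-explode approach can be sketched as follows (not part of the original Scala answers; it assumes the same data is available as a PySpark DataFrame df, and that the zipped struct's field names follow the input column names):
from pyspark.sql import functions as F
(df.select("id", F.explode(F.arrays_zip("before", "after")).alias("ba"))
   .select("id",
           F.col("ba.before").alias("before"),
           F.col("ba.after.start_time").alias("after_start_time"),
           F.col("ba.after.end_time").alias("after_end_time"),
           F.col("ba.after.area").alias("after_area"))
).show(truncate=False)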

Spark SQL - select nested array values

I have a bunch of Parquet files containing the following structure:
data
|-- instance: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- dataset: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- item: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- id: string (nullable = true)
| | | | | | |-- name: string (nullable = true)
| | | | |-- name: string (nullable = true)
| | |-- id: long (nullable = true)
and I want to do some data manipulations using Spark SQL.
I cannot do something like
data.select("data.instance.dataset.name")
or
data.select("data.instance.dataset.item.id")
because nested arrays are involved and I get an error:
Array index should be integral type, but it's StringType;
I can understand why that is, but what is the right way to traverse such nested structures in Spark SQL?
I could read/deserialise it all into my own class and then deal with it, but that is a) slow and b) doesn't allow people who use things like spark notebook to work with the data.
Is there any way to do it with Spark SQL?
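A common way to traverse such nested arrays is to explode each array level and only then select the leaf fields. A minimal sketch (not from the original thread), assuming the DataFrame is called data and instance is its top-level array column as in the schema above:
from pyspark.sql import functions as F
flat = (data
        .select(F.explode("instance").alias("inst"))            # one row per instance struct
        .select(F.col("inst.id").alias("instance_id"),
                F.explode("inst.dataset").alias("ds"))          # one row per dataset struct
        .select("instance_id",
                F.col("ds.name").alias("dataset_name"),
                F.explode("ds.item").alias("it"))               # one row per item struct
        .select("instance_id", "dataset_name",
                F.col("it.id").alias("item_id"),
                F.col("it.name").alias("item_name")))
flat.show()
Each select contains at most one explode, since Spark allows only one generator per select clause.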