Spark SQL: Is there a way to distinguish columns with same name?

I have a CSV file whose header contains columns with the same name.
I want to process it with Spark using only SQL and be able to refer to these columns unambiguously.
Ex.:
id name   age height name
1  Alex   23  1.70
2  Joseph 24  1.89
I want to get only the first name column using only Spark SQL.

As mentioned in the comments, I think that the least error-prone method would be to change the schema of the input data.
Still, if you are looking for a quick workaround, you can simply index the duplicated column names.
For instance, let's create a dataframe with three id columns.
val df = spark.range(3)
  .select('id * 2 as "id", 'id * 3 as "x", 'id, 'id * 4 as "y", 'id)
df.show
+---+---+---+---+---+
| id|  x| id|  y| id|
+---+---+---+---+---+
|  0|  0|  0|  0|  0|
|  2|  3|  1|  4|  1|
|  4|  6|  2|  8|  2|
+---+---+---+---+---+
Then I can use toDF to set new column names. Let's assume I know that only id is duplicated. If we don't know, adding the extra logic to figure out which columns are duplicated would not be very difficult (a generic sketch of that is given after the output below).
var i = -1
val names = df.columns.map { n =>
  if (n == "id") {
    i += 1
    s"id_$i"
  } else n
}
val new_df = df.toDF(names: _*)
new_df.show
+----+---+----+---+----+
|id_0|  x|id_1|  y|id_2|
+----+---+----+---+----+
|   0|  0|   0|  0|   0|
|   2|  3|   1|  4|   1|
|   4|  6|   2|  8|   2|
+----+---+----+---+----+
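If the duplicated column is not known in advance, the same idea can be generalised: index every name that occurs more than once and then query the result with plain Spark SQL through a temporary view. The following is only a sketch under those assumptions; the _0/_1 suffix scheme and the view name people are made up for illustration and are not part of the original answer.
import org.apache.spark.sql.DataFrame

// Sketch: index every duplicated column name, not only "id".
def dedupColumnNames(df: DataFrame): DataFrame = {
  val counts = df.columns.groupBy(identity).mapValues(_.length)   // how often each name occurs
  val seen = scala.collection.mutable.Map.empty[String, Int].withDefaultValue(0)
  val names = df.columns.map { n =>
    if (counts(n) > 1) {                                           // rename only repeated names
      val idx = seen(n)
      seen(n) = idx + 1
      s"${n}_$idx"
    } else n
  }
  df.toDF(names: _*)
}

// Once the names are unique, the data can be queried with SQL alone:
dedupColumnNames(df).createOrReplaceTempView("people")             // hypothetical view name
spark.sql("SELECT id_0 FROM people").show()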

Related

Compare 2 dataframes and create an output dataframe containing the names of the columns that contain differences and their values

Using Spark and Scala
I have df1 and df2 as follows:
df1
+---+----+----+------+
| ID|colA|colB|  colC|
+---+----+----+------+
|  1|   0|  10|APPLES|
|  2|   0|  20|APPLES|
|  3|   0|  30| PEARS|
+---+----+----+------+
df2
+---+----+----+------+
| ID|colA|colB|  colC|
+---+----+----+------+
|  1|   0|  10|APPLES|
|  2|   0|  20| PEARS|
|  3|   0|  10|APPLES|
+---+----+----+------+
I need to compare these 2 dataframes and extract the differences into a df3 that contains 4 columns: the name of the column that contains a difference, the value from df1, the value from df2, and the ID.
How can I achieve this without hard-coding the column names? Only the ID column may be hard coded.
+-----------+--------------+--------------+---+
|Column Name|Value from df1|Value from df2| ID|
+-----------+--------------+--------------+---+
|       colB|            30|            10|  3|
|       colC|        APPLES|         PEARS|  2|
|       colC|         PEARS|        APPLES|  3|
+-----------+--------------+--------------+---+
What I have done so far is extract the names of the columns that contain differences, but I'm stuck on how to get the values.
val columns = df1.columns
val df_join = df1.alias("d1").join(df2.alias("d2"), col("d1.id") === col("d2.id"), "left")
val test = columns.foldLeft(df_join) { (acc, name) =>
    acc.withColumn(name + "_temp", when(col("d1." + name) =!= col("d2." + name), lit(name)))
  }
  .withColumn("Col Name", concat_ws(",", columns.map(name => col(name + "_temp")): _*))
You can try it this way:
// Consider the below dataframes
df1.show()
+---+----+----+------+
| ID|colA|colB| colC|
+---+----+----+------+
| 1| 0| 10|APPLES|
| 2| 0| 20|APPLES|
| 3| 0| 30| PEARS|
+---+----+----+------+
df2.show()
+---+----+----+------+
| ID|colA|colB| colC|
+---+----+----+------+
| 1| 0| 10|APPLES|
| 2| 0| 20| PEARS|
| 3| 0| 10|APPLES|
+---+----+----+------+
// As the ID column can be hardcoded, we can exclude it from the list of all the columns of the dataframe, so that we are left with the remaining columns
import scala.collection.mutable.ListBuffer
val df1_columns = df1.columns.to[ListBuffer].-=("ID")
val df2_columns = df2.columns.to[ListBuffer].-=("ID")
// obtain the number of columns to use it in the stack function later
val df1_columns_count = df1_columns.length
val df2_columns_count = df2_columns.length
// obtain the columns in a dynamic way to use in the stack function
var df1_stack_str = ""
var df2_stack_str = ""
// Typecasting columns to string type to avoid conflicts
df1_columns.foreach { column =>
  df1_stack_str += s"'$column',cast($column as string),"
}
df1_stack_str = df1_stack_str.substring(0, df1_stack_str.lastIndexOf(","))
// Typecasting columns to string type to avoid conflicts
df2_columns.foreach { column =>
  df2_stack_str += s"'$column',cast($column as string),"
}
df2_stack_str = df2_stack_str.substring(0, df2_stack_str.lastIndexOf(","))
/*
In this case the stack function implementation would look like this
val df11 = df1.selectExpr("id","stack(3,'colA',cast(colA as string),'colB',cast(colB as string),'colC',cast(colC as string)) as (column_name,value_from_df1)")
val df21 = df2.selectExpr("id id_","stack(3,'colA',cast(colA as string),'colB',cast(colB as string),'colC',cast(colC as string)) as (column_name_,value_from_df2)")
*/
val df11 = df1.selectExpr("id",s"stack($df1_columns_count,$df1_stack_str) as (column_name,value_from_df1)")
val df21 = df2.selectExpr("id id_",s"stack($df2_columns_count,$df2_stack_str) as (column_name_,value_from_df2)")
// use inner join to get value_from_df1 and value_from_df2 in one dataframe and apply the filter
df11.as("df11").join(df21.as("df21"),expr("df11.id=df21.id_ and df11.column_name=df21.column_name_"))
.drop("id_","column_name_")
.filter("value_from_df1!=value_from_df2")
.show
// Final output
+---+-----------+--------------+--------------+
| id|column_name|value_from_df1|value_from_df2|
+---+-----------+--------------+--------------+
| 2| colC| APPLES| PEARS|
| 3| colB| 30| 10|
| 3| colC| PEARS| APPLES|
+---+-----------+--------------+--------------+
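As a side note, the same stack expression can be built without mutable string accumulators, for example with mkString. The following is only a sketch of that variation, reusing df1, df2 and the hard-coded ID column from above; it is not part of the original answer.
import org.apache.spark.sql.functions.expr

// Sketch: build the stack() arguments with mkString instead of appending to a var in a loop.
val valueCols = df1.columns.filterNot(_ == "ID")                   // every column except the hard-coded ID
val stackArgs = valueCols.map(c => s"'$c',cast($c as string)").mkString(",")

val unpivoted1 = df1.selectExpr("ID", s"stack(${valueCols.length},$stackArgs) as (column_name,value_from_df1)")
val unpivoted2 = df2.selectExpr("ID as ID_", s"stack(${valueCols.length},$stackArgs) as (column_name_,value_from_df2)")

unpivoted1.as("l").join(unpivoted2.as("r"), expr("l.ID = r.ID_ and l.column_name = r.column_name_"))
  .filter("value_from_df1 != value_from_df2")
  .select("column_name", "value_from_df1", "value_from_df2", "ID")
  .show()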

Spark SQL generate SCD2 without dropping historic state

Data from a relational database is loaded into Spark - supposedly daily, but in reality not every day. Furthermore, it is a full copy of the DB - no delta loading.
In order to join the dimension tables easily with the main event data I want to:
deduplicate them (i.e. improve the potential for a broadcast join later)
have valid_to/valid_from columns so that even though the data is not available daily (inconsistently), it can still be used nicely (from downstream)
I am using Spark 3.0.1 and want to transform the existing data SCD2-style - without losing history.
spark-shell
import org.apache.spark.sql.types._
import org.apache.spark.sql._
import org.apache.spark.sql.expressions.Window
case class Foo (key:Int, value:Int, date:String)
val d = Seq(Foo(1, 1, "20200101"), Foo(1, 8, "20200102"), Foo(1, 9, "20200120"),Foo(1, 9, "20200121"),Foo(1, 9, "20200122"), Foo(1, 1, "20200103"), Foo(2, 5, "20200101"), Foo(1, 10, "20200113")).toDF
d.show
val windowDeduplication = Window.partitionBy("key", "value").orderBy("key", "date")
val windowPrimaryKey = Window.partitionBy("key").orderBy("key", "date")
val nextThing = lead("date", 1).over(windowPrimaryKey)
d.withColumn("date", to_date(col("date"), "yyyyMMdd")).withColumn("rank", rank().over(windowDeduplication)).filter(col("rank") === 1).drop("rank").withColumn("valid_to", nextThing).withColumn("valid_to", when(nextThing.isNotNull, date_sub(nextThing, 1)).otherwise(current_date)).withColumnRenamed("date", "valid_from").orderBy("key", "valid_from", "valid_to").show
results in:
+---+-----+----------+----------+
|key|value|valid_from| valid_to|
+---+-----+----------+----------+
| 1| 1|2020-01-01|2020-01-01|
| 1| 8|2020-01-02|2020-01-12|
| 1| 10|2020-01-13|2020-01-19|
| 1| 9|2020-01-20|2020-10-09|
| 2| 5|2020-01-01|2020-10-09|
+---+-----+----------+----------+
which is already pretty good. However, the row:
| 1| 1|2020-01-03| 2|2020-01-12|
is lost.
I.e. any value which occurs again later (after an intermediary change) is lost.
How can I keep this row without keeping larger ranks such as:
d.withColumn("date", to_date(col("date"), "yyyyMMdd")).withColumn("rank", rank().over(windowDeduplication)).withColumn("valid_to", nextThing).withColumn("valid_to",
when(nextThing.isNotNull, date_sub(nextThing, 1)).otherwise(current_date)).withColumnRenamed("date", "valid_from").orderBy("key", "valid_from", "valid_to").show
+---+-----+----------+----+----------+
|key|value|valid_from|rank| valid_to|
+---+-----+----------+----+----------+
| 1| 1|2020-01-01| 1|2020-01-01|
| 1| 8|2020-01-02| 1|2020-01-02|
| 1| 1|2020-01-03| 2|2020-01-12|
| 1| 10|2020-01-13| 1|2020-01-19|
| 1| 9|2020-01-20| 1|2020-01-20|
| 1| 9|2020-01-21| 2|2020-01-21|
| 1| 9|2020-01-22| 3|2020-10-09|
| 2| 5|2020-01-01| 1|2020-10-09|
+---+-----+----------+----+----------+
which is definitely not desired.
The idea is to drop duplicates, but keep any historic changes to the data using valid_to/valid_from.
How can I properly transform this to an SCD2 representation, i.e. have valid_from and valid_to but not drop intermediary state?
NOTICE: I do not need to update existing data (MERGE INTO, JOIN). It is fine to recreate / overwrite it.
I.e. Implement SCD Type 2 in Spark seems to be way too complicated. Is there a better way in my case, where the state handling is not required? I.e. I have data originating from a daily full copy of a database and want to deduplicate it.
The previous approach keeps only the first (earliest) version of a duplicate. I think the only solution without a join for state handling is a window function where each value is compared against the previous row - and if there is no change in the whole row, it is discarded.
Probably less efficient - but more accurate. But this also depends on the use case at hand, i.e. how likely it is that a changed value will be seen again.
import org.apache.spark.sql.types._
import org.apache.spark.sql._
import org.apache.spark.sql.expressions.Window
case class Foo (key:Int, value:Int, value2:Int, date:String)
val d = Seq(Foo(1, 1,1, "20200101"), Foo(1, 8,1, "20200102"), Foo(1, 9,1, "20200120"),Foo(1, 6,1, "20200121"),Foo(1, 9,1, "20200122"), Foo(1, 1,1, "20200103"), Foo(2, 5,1, "20200101"), Foo(1, 10,1, "20200113"), Foo(1, 9,1, "20210120"),Foo(1, 9,1, "20220121"),Foo(1, 9,3, "20230122")).toDF
def compare2Rows(key: Seq[String], sortChangingIgnored: Seq[String], timeColumn: String)(df: DataFrame): DataFrame = {
  val windowPrimaryKey = Window.partitionBy(key.map(col): _*).orderBy(sortChangingIgnored.map(col): _*)
  // compare every column except the key and the sort/time columns
  val columnsToCompare = df.drop(key ++ sortChangingIgnored: _*).columns
  val nextDataChange = lead(timeColumn, 1).over(windowPrimaryKey)
  // keep a row if at least one compared column differs from the next row (or if there is no next row)
  val deduplicated = df
    .withColumn("data_changes", columnsToCompare.map(e => col(e) =!= lead(col(e), 1).over(windowPrimaryKey)).reduce(_ or _))
    .filter(col("data_changes").isNull or col("data_changes"))
  deduplicated
    .withColumn("valid_to", when(nextDataChange.isNotNull, date_sub(nextDataChange, 1)).otherwise(current_date))
    .withColumnRenamed("date", "valid_from")
    .drop("data_changes")
}
d.orderBy("key", "date").show
d.withColumn("date", to_date(col("date"), "yyyyMMdd")).transform(compare2Rows(Seq("key"), Seq("date"), "date")).orderBy("key", "valid_from", "valid_to").show
returns:
+---+-----+------+----------+----------+
|key|value|value2|valid_from| valid_to|
+---+-----+------+----------+----------+
| 1| 1| 1|2020-01-01|2020-01-01|
| 1| 8| 1|2020-01-02|2020-01-02|
| 1| 1| 1|2020-01-03|2020-01-12|
| 1| 10| 1|2020-01-13|2020-01-19|
| 1| 9| 1|2020-01-20|2020-01-20|
| 1| 6| 1|2020-01-21|2022-01-20|
| 1| 9| 1|2022-01-21|2023-01-21|
| 1| 9| 3|2023-01-22|2020-10-09|
| 2| 5| 1|2020-01-01|2020-10-09|
+---+-----+------+----------+----------+
for an input of:
+---+-----+------+--------+
|key|value|value2| date|
+---+-----+------+--------+
| 1| 1| 1|20200101|
| 1| 8| 1|20200102|
| 1| 1| 1|20200103|
| 1| 10| 1|20200113|
| 1| 9| 1|20200120|
| 1| 6| 1|20200121|
| 1| 9| 1|20200122|
| 1| 9| 1|20210120|
| 1| 9| 1|20220121|
| 1| 9| 3|20230122|
| 2| 5| 1|20200101|
+---+-----+------+--------+
This function has the downside that an unlimited amount of state is built up - for each key ... But as I plan to apply this to rather small dimension tables, I think it should be fine anyway.
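For reference, a minimal usage sketch of compare2Rows with a composite business key. The table dim and the columns product_id and region are hypothetical and only serve as illustration; the assumption is that the snapshot column is literally named date, since the function above hard-codes the rename of date to valid_from.
// Sketch: applying compare2Rows to a hypothetical dimension keyed by (product_id, region).
val scd2 = dim.transform(compare2Rows(
  Seq("product_id", "region"),   // key: partitioning / business key columns
  Seq("date"),                   // sortChangingIgnored: ordering column, excluded from the comparison
  "date"))                       // timeColumn: used to derive valid_to via lead()

scd2.orderBy("product_id", "region", "valid_from").show()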

Filtering rows in pyspark dataframe and creating a new column that contains the result

So I am trying to identify the crimes that happen within the SF downtown boundary on Sundays. My idea was to first write a UDF that labels each crime according to whether it is in the area I identify as downtown: if it happened within that area it gets the label "1", and "0" if not. After that, I am trying to create a new column to store those results. I tried my best to write everything I can, but it just doesn't work for some reason. Here is the code I wrote:
from pyspark.sql.types import BooleanType
from pyspark.sql.functions import udf

def filter_dt(x, y):
    if ((x < -122.4213) & (x > -122.4313)) & ((y > 37.7540) & (y < 37.7740)):
        return '1'
    else:
        return '0'

schema = StructType([StructField("isDT", BooleanType(), False)])
filter_dt_boolean = udf(lambda row: filter_dt(row[0], row[1]), schema)

# First, pick out the crime cases that happen on Sunday
q3_sunday = spark.sql("SELECT * FROM sf_crime WHERE DayOfWeek='Sunday'")
# Then, we add a new column to identify whether the crime is in DT
q3_final = q3_result.withColumn("isDT", filter_dt(q3_sunday.select('X'), q3_sunday.select('Y')))
The error I am getting is: [picture of the error message]
My guess is that the udf I have right now doesn't support a whole column as input to be compared, but I don't know how to fix it to make it work. Please help! Thank you!
Sample data would have helped. For now I assume that your data looks like this:
+----+---+---+
|val1| x| y|
+----+---+---+
| 10| 7| 14|
| 5| 1| 4|
| 9| 8| 10|
| 2| 6| 90|
| 7| 2| 30|
| 3| 5| 11|
+----+---+---+
Then you don't need a udf, as you can do the evaluation using the when() function:
import pyspark.sql.functions as F
tst = sqlContext.createDataFrame([(10,7,14),(5,1,4),(9,8,10),(2,6,90),(7,2,30),(3,5,11)], schema=['val1','x','y'])
tst_res = tst.withColumn("isdt", F.when(((tst.x.between(4,10)) & (tst.y.between(11,20))), 1).otherwise(0))
This will give the result:
tst_res.show()
+----+---+---+----+
|val1| x| y|isdt|
+----+---+---+----+
| 10| 7| 14| 1|
| 5| 1| 4| 0|
| 9| 8| 10| 0|
| 2| 6| 90| 0|
| 7| 2| 30| 0|
| 3| 5| 11| 1|
+----+---+---+----+
If I have got the data wrong and you still need to pass multiple values to a udf, you have to pass them as an array or a struct. I prefer a struct:
from pyspark.sql.functions import udf
from pyspark.sql.types import *

@udf(IntegerType())
def check_data(row):
    if (row.x in range(4, 5)) & (row.y in range(1, 20)):
        return 1
    else:
        return 0

tst_res1 = tst.withColumn("isdt", check_data(F.struct('x', 'y')))
The result will be the same. But it is always better to avoid UDFs and use Spark built-in functions, since the Catalyst optimizer cannot understand the logic inside a udf and cannot optimize it.
Try changing the last line as below:
from pyspark.sql.functions import col
q3_final = q3_result.withColumn("isDT", filter_dt(col('X'),col('Y')))

How to SparkSQL load csv with header on FROM statement

A Spark SQL FROM statement can be given a file path and format,
but the header is ignored when loading a csv.
Can the header be used for the column names?
~ > cat test.csv
a,b,c
1,2,3
4,5,6
scala> spark.sql("SELECT * FROM csv.`test.csv`").show()
19/06/12 23:44:40 WARN ObjectStore: Failed to get database csv, returning NoSuchObjectException
+---+---+---+
|_c0|_c1|_c2|
+---+---+---+
| a| b| c|
| 1| 2| 3|
| 4| 5| 6|
+---+---+---+
I want:
+---+---+---+
| a| b| c|
+---+---+---+
| 1| 2| 3|
| 4| 5| 6|
+---+---+---+
If you want to do it in plain SQL you should create a table or view first:
CREATE TEMPORARY VIEW foo
USING csv
OPTIONS (
  path 'test.csv',
  header true
);
and then SELECT from it:
SELECT * FROM foo;
To use this method with SparkSession.sql remove trailing ; and execute each statement separately.
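For example, a minimal sketch of running the two statements as separate spark.sql calls from spark-shell, reusing the view name foo and the path test.csv from above:
// Run the DDL and the query as two separate spark.sql calls (no trailing semicolons).
spark.sql("""
  CREATE TEMPORARY VIEW foo
  USING csv
  OPTIONS (path 'test.csv', header true)
""")

spark.sql("SELECT * FROM foo").show()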
I don't think a pure SQL solution is available in Spark 2.4.3, which is the latest version at the time of writing. This syntax is parsed by the rule ResolveSQLOnFile, which always calls the DataSource constructor with an empty options map.
I can verify that putting a breakpoint in the DataSource constructor and modifying options to Map("header" -> "true") does the trick, so obviously this is where it would have to be implemented.
You can try this:
scala> val df = spark.read.format("csv").option("header", "true").load("test.csv")
df: org.apache.spark.sql.DataFrame = [a: string, b: string ... 1 more field]
scala> df.show
+---+---+---+
| a| b| c|
+---+---+---+
| 1| 2| 3|
| 4| 5| 6|
+---+---+---+
A SQL way is below:
scala> val df = spark.read.format("csv").option("header", "true").load("test.csv")
df: org.apache.spark.sql.DataFrame = [a: string, b: string ... 1 more field]
scala> df.createOrReplaceTempView("table")
scala> spark.sql("SELECT * FROM table").show
+---+---+---+
| a| b| c|
+---+---+---+
| 1| 2| 3|
| 4| 5| 6|
+---+---+---+

Add aggregated columns to pivot without join

Considering the table:
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df.show()
+---+-----+---------+
| id|error|timestamp|
+---+-----+---------+
| 1| 1| 1|
| 5| 0| 2|
| 27| 1| 1|
| 1| 0| 3|
| 5| 1| 1|
| 1| 0| 2|
+---+-----+---------+
I would like to make a pivot on timestamp column keeping some other aggregated information from the original table. The result I am interested in can be achieved by
df1=df.groupBy('id').agg(sf.sum('error').alias('Ne'),sf.count('*').alias('cnt'))
df2=df.groupBy('id').pivot('timestamp').agg(sf.count('*')).fillna(0)
df1.join(df2, on='id').filter(sf.col('cnt')>1).show()
with the resulting table:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
However, there are at least two issues with this solution:
I am filtering by cnt at the end of the script. If I were able to do this at the beginning, I could avoid almost all of the processing, because a large portion of the data is removed by this filter. Is there any way to do this, other than the collect and isin methods?
I am doing groupBy on id twice: first to aggregate some columns I need in the results, and a second time to get the pivot columns. Finally, I need a join to merge these columns. I feel that I am surely missing some solution, because it should be possible to do this with just one groupBy and without a join, but I cannot figure out how.
I think you cannot get around the join, because the pivot needs the timestamp values and the first grouping must not consider them. So to create the Ne and cnt values you have to group the dataframe by id only, which results in the loss of timestamp; if you want to preserve those values in columns, you have to do the pivot separately, as you did, and join it back.
The only improvement that can be made is to move the filter to the df1 creation. As you said, this could already improve performance, since df1 should be much smaller after the filtering on your real data.
from pyspark.sql.functions import *
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df1=df.groupBy('id').agg(sum('error').alias('Ne'),count('*').alias('cnt')).filter(col('cnt')>1)
df2=df.groupBy('id').pivot('timestamp').agg(count('*')).fillna(0)
df1.join(df2, on='id').show()
Output:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
Actually it is indeed possible to avoid the join by using Window functions:
from pyspark.sql import Window
import pyspark.sql.functions as sf

w1 = Window.partitionBy('id')
w2 = Window.partitionBy('id', 'timestamp')
df.select('id', 'timestamp',
          sf.sum('error').over(w1).alias('Ne'),
          sf.count('*').over(w1).alias('cnt'),
          sf.count('*').over(w2).alias('cnt_2')
         ).filter(sf.col('cnt') > 1) \
  .groupBy('id', 'Ne', 'cnt').pivot('timestamp').agg(sf.first('cnt_2')).fillna(0).show()