How to calculate and create new columns based on the difference between rows in a PySpark DataFrame

My DF looks like below:
+-----+-------------------+--------+
|objid|gpstime            |gpsspeed|
+-----+-------------------+--------+
|X    |2018-04-03 11:00:40|      10|
|X    |2018-04-03 11:00:47|      15|
|X    |2018-04-03 11:00:50|      10|
|Y    |2018-04-03 11:00:52|      30|
|Y    |2018-04-03 11:00:59|      50|
+-----+-------------------+--------+
The result should be like this below:
+-----+-------------------+--------+--------+---------+
|objid|gpstime            |gpsspeed|timeDiff|speedDiff|
+-----+-------------------+--------+--------+---------+
|X    |2018-04-03 11:00:40|      10|       -|         |
|X    |2018-04-03 11:00:47|      15|       7|        5|
|X    |2018-04-03 11:00:50|      10|       3|       -5|
|Y    |2018-04-03 11:00:52|      30|       2|       20|
|Y    |2018-04-03 11:00:59|      50|       7|       20|
+-----+-------------------+--------+--------+---------+
So I need to create 2 new columns based on the difference from existing ones, but I have an issue. My code for one column looks like below:
from pyspark.sql.functions import datediff, lag
from pyspark.sql.window import Window

df.withColumn("time_intertweet", datediff(df.gpstime, lag(df.gpstime, 1)
              .over(Window.partitionBy("gpstime")
              .orderBy("gpstime"))))
Any ideas how to fix it?

You should not partition the window, because you want the previous row regardless of objid (in your expected output, the first Y row is diffed against the last X row). Also, if you want the time difference in seconds, you probably want unix_timestamp instead of datediff, which returns the difference in days.
from pyspark.sql.functions import unix_timestamp, lag
from pyspark.sql import Window

df2 = df.withColumn(
    "timeDiff",
    unix_timestamp(df.gpstime) - unix_timestamp(lag(df.gpstime, 1).over(Window.orderBy("gpstime")))
).withColumn(
    "speedDiff",
    df.gpsspeed - lag(df.gpsspeed, 1).over(Window.orderBy("gpstime"))
)
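Running df2.show() should then give something like your expected table, with null in the first row (where there is no previous row) instead of -. Note that ordering a window without partitioning moves all rows into a single partition, so Spark will log a warning about it; for this use case that is expected.
+-----+-------------------+--------+--------+---------+
|objid|gpstime            |gpsspeed|timeDiff|speedDiff|
+-----+-------------------+--------+--------+---------+
|X    |2018-04-03 11:00:40|      10|    null|     null|
|X    |2018-04-03 11:00:47|      15|       7|        5|
|X    |2018-04-03 11:00:50|      10|       3|       -5|
|Y    |2018-04-03 11:00:52|      30|       2|       20|
|Y    |2018-04-03 11:00:59|      50|       7|       20|
+-----+-------------------+--------+--------+---------+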

Related

pySpark not able to handle Multiline string in CSV file while selecting columns

I am trying to load a CSV file that looks like the following, using PySpark code.
A^B^C^D^E^F
"Yash"^"12"^""^"this is first record"^"nice"^"12"
"jay"^"13"^""^"
In second record, I am new line at the beingnning"^"nice"^"12"
"Nova"^"14"^""^"this is third record"^"nice"^"12"
When I read this file and select a few columns, the entire dataframe gets messed up.
import pyspark.sql.functions as F

df = (
    spark.read
    .option("delimiter", "^")
    .option("header", True)
    .option("multiLine", True)
    .option("escape", "\"")
    .csv("test3.csv")
)
df.show()
df = df.withColumn("isdeleted", F.lit(True))
select_cols = ['isdeleted','B','D','E','F']
df = df.select(*select_cols)
df.show()
(truncated some import statements for readability of code)
This is what I see when the above code runs
Before column selection (entire DF)
+----+---+----+--------------------+----+---+
| A| B| C| D| E| F|
+----+---+----+--------------------+----+---+
|Yash| 12|null|this is first record|nice| 12|
| jay| 13|null|\nIn second recor...|nice| 12|
|Nova| 14|null|this is third record|nice| 12|
+----+---+----+--------------------+----+---+
After df.select(*select_cols)
+---------+----+--------------------+----+----+
|isdeleted| B| D| E| F|
+---------+----+--------------------+----+----+
| true| 12|this is first record|nice| 12|
| true| 13| null|null|null|
| true|nice| null|null|null|
| true| 14|this is third record|nice| 12|
+---------+----+--------------------+----+----+
Here, the second row with the newline character is broken into two rows, and the output file is messed up in the same way as the dataframe preview shown above.
I am using the AWS Glue image amazon/aws-glue-libs:glue_libs_4.0.0_image_01, which uses Spark 3.3.0. I also tried with Spark 3.1.1 and see the same issue in both versions.
I am not sure whether this is a bug in Spark or whether I am missing something here. Any help will be appreciated.
You are giving the wrong escape character. It is usually \ and you are setting it to the quote character. Once you change that option,
df = spark.read.csv('test.csv', sep='^', header=True, multiLine=True)
df.show()
df.select('B').show()
+----+---+----+--------------------+----+---+
| A| B| C| D| E| F|
+----+---+----+--------------------+----+---+
|Yash| 12|null|this is first record|nice| 12|
| jay| 13|null|\nIn second recor...|nice| 12|
|Nova| 14|null|this is third record|nice| 12|
+----+---+----+--------------------+----+---+
+---+
| B|
+---+
| 12|
| 13|
| 14|
+---+
You will get the desired result.
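If you prefer to keep the escape option explicit rather than omitting it, here is a minimal sketch (assuming the same file) that spells out Spark's default backslash escape character:
df = (
    spark.read
    .option("sep", "^")       # "sep" is an alias for "delimiter"
    .option("header", True)
    .option("multiLine", True)
    .option("escape", "\\")   # Spark's default CSV escape character
    .csv("test.csv")
)
df.select("B", "D").show(truncate=False)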

PySpark reassign values of duplicate rows

I have a dataframe like this:
id,p1
1,A
2,null
3,B
4,null
4,null
2,C
Using PySpark, I want to remove all the duplicates. However, if there is a duplicate in which the p1 column is not null, I want to remove the null one. For example, I want to remove the first occurrence of id 2 and either of id 4. Right now I am splitting the dataframe into two dataframes, like this:
id,p1
1,A
3,B
2,C
id,p1
2,null
4,null
4,null
I remove the duplicates from both, then add back the rows that are not in the first dataframe. That way I get this dataframe:
id,p1
1,A
3,B
4,null
2,C
This is what I have so far:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('test').getOrCreate()
d = spark.createDataFrame(
    [(1, "A"),
     (2, None),
     (3, "B"),
     (4, None),
     (4, None),
     (2, "C")],
    ["id", "p"]
)
d1 = d.filter(d.p.isNull())
d2 = d.filter(d.p.isNotNull())
d1 = d1.dropDuplicates()
d2 = d2.dropDuplicates()
d3 = d1.join(d2, "id", 'left_anti')
d4 = d2.unionByName(d3)
Is there a nicer way of doing this? It feels redundant, but I can't come up with a better way. I tried using groupBy but couldn't achieve it. Any ideas? Thanks.
from pyspark.sql.functions import col

(df1.sort(col('p1').desc())          # sort descending; nulls sort last under desc()
    .dropDuplicates(subset=['id'])   # drop duplicates on the id column
    .show())
+---+----+
| id| p1|
+---+----+
| 1| A|
| 2| C|
| 3| B|
| 4|null|
+---+----+
Use the window row_number() function and sort by the "p" column descending.
Example:
d.show()
#+---+----+
#| id| p|
#+---+----+
#| 1| A|
#| 2|null|
#| 3| B|
#| 4|null|
#| 4|null|
#| 2| C|
#+---+----+
from pyspark.sql.functions import col, row_number
from pyspark.sql.window import Window
window_spec=row_number().over(Window.partitionBy("id").orderBy(col("p").desc()))
d.withColumn("rn",window_spec).filter(col("rn")==1).drop("rn").show()
#+---+----+
#| id| p|
#+---+----+
#| 1| A|
#| 3| B|
#| 2| C|
#| 4|null|
#+---+----+
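Since the question mentions trying groupBy, here is a hedged sketch of a groupBy-based alternative. It relies on max() ignoring nulls, so each id keeps its non-null value when one exists (and stays null otherwise); if an id had several different non-null values, this would keep the largest one, which may or may not be what you want.
from pyspark.sql.functions import max as max_

# max() skips nulls: ids 1, 2, 3 keep their non-null p, id 4 stays null
d.groupBy("id").agg(max_("p").alias("p")).show()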

Filtering rows in pyspark dataframe and creating a new column that contains the result

So I am trying to identify the crimes that happen within the SF downtown boundary on Sunday. My idea was to first write a UDF to label whether each crime is in the area I identify as downtown: if it happened within the area, it gets a label of "1", and "0" if not. After that, I am trying to create a new column to store those results. I tried my best, but it just doesn't work for some reason. Here is the code I wrote:
from pyspark.sql.types import StructType, StructField, BooleanType
from pyspark.sql.functions import udf

def filter_dt(x, y):
    if ((x < -122.4213) & (x > -122.4313)) & ((y > 37.7540) & (y < 37.7740)):
        return '1'
    else:
        return '0'

schema = StructType([StructField("isDT", BooleanType(), False)])
filter_dt_boolean = udf(lambda row: filter_dt(row[0], row[1]), schema)

# First, pick out the crime cases that happen on Sunday
q3_sunday = spark.sql("SELECT * FROM sf_crime WHERE DayOfWeek='Sunday'")
# Then, add a new column identifying whether the crime is in DT
q3_final = q3_result.withColumn("isDT", filter_dt(q3_sunday.select('X'), q3_sunday.select('Y')))
The error I am getting is: [picture of the error message]
My guess is that the UDF I have right now doesn't support taking whole columns as input, but I don't know how to fix it to make it work. Please help! Thank you!
Sample data would have helped. For now I assume that your data looks like this:
+----+---+---+
|val1| x| y|
+----+---+---+
| 10| 7| 14|
| 5| 1| 4|
| 9| 8| 10|
| 2| 6| 90|
| 7| 2| 30|
| 3| 5| 11|
+----+---+---+
Then you don't need a UDF, as you can do the evaluation using the when() function:
import pyspark.sql.functions as F

tst = sqlContext.createDataFrame([(10,7,14),(5,1,4),(9,8,10),(2,6,90),(7,2,30),(3,5,11)], schema=['val1','x','y'])
tst_res = tst.withColumn("isdt", F.when((tst.x.between(4,10)) & (tst.y.between(11,20)), 1).otherwise(0))
This will give the result:
tst_res.show()
+----+---+---+----+
|val1| x| y|isdt|
+----+---+---+----+
| 10| 7| 14| 1|
| 5| 1| 4| 0|
| 9| 8| 10| 0|
| 2| 6| 90| 0|
| 7| 2| 30| 0|
| 3| 5| 11| 1|
+----+---+---+----+
If I have got the data wrong and you still need to pass multiple values to a UDF, you have to pass them as an array or a struct. I prefer a struct:
from pyspark.sql.functions import udf
from pyspark.sql.types import *

@udf(IntegerType())
def check_data(row):
    if (row.x in range(4, 5)) & (row.y in range(1, 20)):
        return 1
    else:
        return 0

tst_res1 = tst.withColumn("isdt", check_data(F.struct('x', 'y')))
The result will be the same. But it is always better to avoid UDFs and use Spark's built-in functions, since the Catalyst optimizer cannot understand the logic inside a UDF and cannot optimize it.
Try changing the last line as below:
from pyspark.sql.functions import col
q3_final = q3_result.withColumn("isDT", filter_dt(col('X'),col('Y')))
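Putting the two suggestions together for the question's own columns, here is a hedged sketch that expresses the downtown check with when()/between() instead of a UDF, applied to the q3_sunday dataframe from the question (the boundary values are taken from the question; note that between() is inclusive at the endpoints, while the original comparisons are strict):
from pyspark.sql import functions as F

q3_final = q3_sunday.withColumn(
    "isDT",
    F.when(
        F.col("X").between(-122.4313, -122.4213) & F.col("Y").between(37.7540, 37.7740),
        1
    ).otherwise(0)
)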

Add aggregated columns to pivot without join

Considering the table:
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df.show()
+---+-----+---------+
| id|error|timestamp|
+---+-----+---------+
| 1| 1| 1|
| 5| 0| 2|
| 27| 1| 1|
| 1| 0| 3|
| 5| 1| 1|
| 1| 0| 2|
+---+-----+---------+
I would like to pivot on the timestamp column while keeping some other aggregated information from the original table. The result I am interested in can be achieved by:
from pyspark.sql import functions as sf

df1 = df.groupBy('id').agg(sf.sum('error').alias('Ne'), sf.count('*').alias('cnt'))
df2 = df.groupBy('id').pivot('timestamp').agg(sf.count('*')).fillna(0)
df1.join(df2, on='id').filter(sf.col('cnt') > 1).show()
with the resulting table:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
However, there are at least two issues with this solution:
I am filtering by cnt at the end of the script. If I could do this at the beginning, I could avoid almost all of the processing, because a large portion of the data is removed by this filter. Is there any way to do this other than the collect and isin methods?
I am doing groupBy on id twice: first to aggregate the columns I need in the result, and a second time to get the pivot columns. Finally, I need a join to merge these columns. I feel I am missing some solution, because it should be possible to do this with just one groupBy and without a join, but I cannot figure out how.
I think you cannot get around the join, because the pivot needs the timestamp values while the first grouping should not consider them. So to create the Ne and cnt values you have to group the dataframe by id only, which loses the timestamp; if you want to preserve those values as columns, you have to do the pivot separately, as you did, and join it back.
The only improvement that can be made is to move the filter into the df1 creation. As you said, this should already improve performance, since df1 should be much smaller after filtering on your real data.
from pyspark.sql.functions import *
df=sc.parallelize([(1,1,1),(5,0,2),(27,1,1),(1,0,3),(5,1,1),(1,0,2)]).toDF(['id', 'error', 'timestamp'])
df1=df.groupBy('id').agg(sum('error').alias('Ne'),count('*').alias('cnt')).filter(col('cnt')>1)
df2=df.groupBy('id').pivot('timestamp').agg(count('*')).fillna(0)
df1.join(df2, on='id').show()
Output:
+---+---+---+---+---+---+
| id| Ne|cnt| 1| 2| 3|
+---+---+---+---+---+---+
| 5| 1| 2| 1| 1| 0|
| 1| 1| 3| 1| 1| 1|
+---+---+---+---+---+---+
Actually, it is indeed possible to avoid the join by using a Window:
from pyspark.sql import functions as sf
from pyspark.sql.window import Window

w1 = Window.partitionBy('id')
w2 = Window.partitionBy('id', 'timestamp')

df.select(
    'id', 'timestamp',
    sf.sum('error').over(w1).alias('Ne'),
    sf.count('*').over(w1).alias('cnt'),
    sf.count('*').over(w2).alias('cnt_2')
).filter(sf.col('cnt') > 1) \
 .groupBy('id', 'Ne', 'cnt').pivot('timestamp').agg(sf.first('cnt_2')).fillna(0).show()
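On the sample data this should produce the same table as the join-based version above (up to row order): cnt_2 is constant within each (id, timestamp) group, so first('cnt_2') simply carries the per-timestamp count into the pivoted columns.
+---+---+---+---+---+---+
| id| Ne|cnt|  1|  2|  3|
+---+---+---+---+---+---+
|  5|  1|  2|  1|  1|  0|
|  1|  1|  3|  1|  1|  1|
+---+---+---+---+---+---+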

extracting numpy array from Pyspark Dataframe

I have a dataframe gi_man_df where group can take n values:
+------------------+-----------------+--------+--------------+
| group | number|rand_int| rand_double|
+------------------+-----------------+--------+--------------+
| 'GI_MAN'| 7| 3| 124.2|
| 'GI_MAN'| 7| 10| 121.15|
| 'GI_MAN'| 7| 11| 129.0|
| 'GI_MAN'| 7| 12| 125.0|
| 'GI_MAN'| 7| 13| 125.0|
| 'GI_MAN'| 7| 21| 127.0|
| 'GI_MAN'| 7| 22| 126.0|
+------------------+-----------------+--------+--------------+
and I am expecting a numpy ndarray, i.e. gi_man_array:
[[[124.2],[121.15],[129.0],[125.0],[125.0],[127.0],[126.0]]]
containing the rand_double values after applying the pivot.
I tried the following 2 approaches:
FIRST: I pivot the gi_man_df as follows:
gi_man_pivot = gi_man_df.groupBy("number").pivot('rand_int').sum("rand_double")
and the output I got is:
Row(number=7, group=u'GI_MAN', 3=124.2, 10=121.15, 11=129.0, 12=125.0, 13=125.0, 21=127.0, 23=126.0)
but the problem here is that, to get the desired output, I can't convert it to a matrix and then convert it again to a numpy array.
SECOND:
I created the vector in the dataframe itself using:
assembler = VectorAssembler(inputCols=["rand_double"],outputCol="rand_double_vector")
gi_man_vector = assembler.transform(gi_man_df)
gi_man_vector.show(7)
and I got the following output:
+----------------+-----------------+--------+--------------+--------------+
| group| number|rand_int| rand_double| rand_dbl_Vect|
+----------------+-----------------+--------+--------------+--------------+
| GI_MAN| 7| 3| 124.2| [124.2]|
| GI_MAN| 7| 10| 121.15| [121.15]|
| GI_MAN| 7| 11| 129.0| [129.0]|
| GI_MAN| 7| 12| 125.0| [125.0]|
| GI_MAN| 7| 13| 125.0| [125.0]|
| GI_MAN| 7| 21| 127.0| [127.0]|
| GI_MAN| 7| 22| 126.0| [126.0]|
+----------------+-----------------+--------+--------------+--------------+
but the problem here is that I can't pivot on rand_dbl_Vect.
So my questions are:
1. Is either of the 2 approaches the correct way of achieving the desired output? If so, how can I proceed further to get the desired result?
2. Is there another way to proceed so that the code is optimal and the performance is good?
This:
import numpy as np
np.array(gi_man_df.select('rand_double').collect())
produces:
array([[ 124.2 ],
       [ 121.15],
       .........])
To convert the Spark df to a numpy array, first convert it to pandas and then apply the to_numpy() function.
spark_df.select(<list of columns needed>).toPandas().to_numpy()
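To get the nested [[[...]]] shape from the question, here is a hedged sketch on top of the collect-based answer above; it assumes the column fits in driver memory and, as in the sample, a single group (the dataframe and column names come from the question):
import numpy as np

# collect rand_double to the driver: shape (n_rows, 1)
values = np.array(gi_man_df.select('rand_double').collect())

# add a leading group axis to match [[[124.2], [121.15], ...]]: shape (1, n_rows, 1)
gi_man_array = values[np.newaxis, :, :]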