I have a Spark dataframe with the sample data below:
+--------------+--------------+
| item_cd | item_nbr |
+--------------+--------------+
|20-10767-58V| 98003351|
|20-10087-58V| 87003872|
|20-10087-58V| 97098411|
|20-10i72-YTW| 99003351|
|27-1o121-YTW| 89659352|
|27-10991-YTW| 98678411|
| At81kk00| 98903458|
| Avp12225| 85903458|
| Akb12226| 99003458|
| Ahh12829| 98073458|
| Aff12230| 88803458|
| Ar412231| 92003458|
| Aju12244| 98773458|
+--------------+--------------+
I want to write a condition so that every item_cd containing a hyphen (-) is left unchanged, and every item_cd without a hyphen gets four trailing 0's appended. Then I want to put the rows that are duplicated on both columns (item_cd, item_nbr) into one dataframe and the unique rows into another dataframe, in PySpark.
Could anyone please help me with this in PySpark?
Here is how it could be done:
import pyspark.sql.functions as F
from pyspark.sql import Window
data = [("20-10767-58V", "98003351"), ("20-10087-58V", "87003872"), ("At81kk00", "98903458"), ("Ahh12829", "98073458"), ("20-10767-58V", "98003351")]
cols = ["item_cd", "item_nbr"]
df = spark.createDataFrame(data, cols)
df.show()
df = df.withColumn("item_cd", F.when(~df.item_cd.contains("-"), F.concat(df.item_cd, F.lit("0000"))).otherwise(df.item_cd))
df.show()
unique_df = df.select("*").distinct()
unique_df.show()
w = Window.partitionBy(df.columns)
duplicate_df = df.select("*", F.count("*").over(w).alias("cnt"))\
.where("cnt > 1")\
.drop("cnt")
duplicate_df.show()
Input df (added duplicate):
+------------+--------+
| item_cd|item_nbr|
+------------+--------+
|20-10767-58V|98003351|
|20-10087-58V|87003872|
| At81kk00|98903458|
| Ahh12829|98073458|
|20-10767-58V|98003351|
+------------+--------+
Unique df:
+------------+--------+
| item_cd|item_nbr|
+------------+--------+
|Ahh128290000|98073458|
|20-10767-58V|98003351|
|20-10087-58V|87003872|
|At81kk000000|98903458|
+------------+--------+
Duplicates df:
+------------+--------+
| item_cd|item_nbr|
+------------+--------+
|20-10767-58V|98003351|
|20-10767-58V|98003351|
+------------+--------+
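The distinct() above keeps one copy of every row. If "unique" instead means rows whose (item_cd, item_nbr) combination occurs only once, the same window can be reused; a minimal sketch under that reading:
# rows whose (item_cd, item_nbr) combination appears exactly once
once_only_df = df.select("*", F.count("*").over(w).alias("cnt"))\
    .where("cnt = 1")\
    .drop("cnt")
once_only_df.show()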
I have dataframe df1:
+------+-----------+----------+----------+-----+
| sid|acc_term_id|first_name| last_name|major|
+------+-----------+----------+----------+-----+
|106454| 2014B| Doris| Marshall| BIO|
|106685| 2015A| Sara|Richardson| CHM|
|106971| 2015B| Rose| Butler| CHM|
|107298| 2015B| Kayla| Barnes| CSC|
|107555| 2016A| Carolyn| Ford| PHY|
|107624| 2016B| Marie| Webb| BIO|
I want to store the count of sid from this dataframe
c_value = current.agg({"sid": "count"}).collect()[0][0]
and use it to create a prop column, as shown in the code below:
c_value = current.agg({"sid": "count"}).collect()[0][0]
stud_major = (
current
.groupBy('major')
.agg(
expr('COUNT(*) AS n_students')
)
.select('major', 'n_students', expr('ROUND(n_students/c_value, 4) AS prop'),
)
)
stud_major.show(16)
When I run the code I get an error:
cannot resolve '`c_value`' given input columns: [major, n_students]; line 1 pos 17;
If I put the numeric value 2055 in place of c_value, everything works fine, as below:
+-----+----------+------+
|major|n_students| prop|
+-----+----------+------+
| MTH| 320|0.1557|
| CHM| 405|0.1971|
| CSC| 508|0.2472|
| BIO| 615|0.2993|
| PHY| 207|0.1007|
+-----+----------+------+
There are probably other ways to calculate this, but I need to do it by storing the count in a variable.
Any ideas?
In Jupyter, use pandas agg:
j = df.agg({'sid': 'count'})['sid']  # take the scalar count, not the one-element Series
df.groupby("major")['sid'].agg(n_students=(lambda x: x.count()), prop=(lambda x: x.count()/j))
major n_students prop
0 BIO 2 0.333333
1 CHM 2 0.333333
2 CSC 1 0.166667
3 PHY 1 0.166667
And in PySpark:
from pyspark.sql.functions import *
df.groupby('major').agg(count('sid').alias('n_students')).withColumn('prop', round((col('n_students')/c_value),2)).show()
Alternatively, you could:
c_value = df.agg({"sid": "count"}).collect()[0][0]
df.groupBy('major').agg(expr('COUNT(*) AS n_students')).selectExpr('major',"n_students", f"ROUND(n_students/{c_value},2) AS prop").show()
+-----+----------+----+
|major|n_students|prop|
+-----+----------+----+
| BIO| 2|0.33|
| CHM| 2|0.33|
| CSC| 1|0.17|
| PHY| 1|0.17|
+-----+----------+----+
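Another way around the original error, assuming the dataframe is df as in the snippets above: the SQL string inside expr() cannot see the Python variable c_value, but you can pass it in as a literal column with F.lit instead of interpolating it into the string. A rough sketch:
from pyspark.sql import functions as F
c_value = df.agg({"sid": "count"}).collect()[0][0]
# F.lit turns the Python variable into a literal column, so Spark can resolve it
df.groupBy("major")\
  .agg(F.count("*").alias("n_students"))\
  .withColumn("prop", F.round(F.col("n_students") / F.lit(c_value), 4))\
  .show()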
I have a data frame that looks like this:
+--------------------+---------------------+-------------+------------+-----+
|tpep_pickup_datetime|tpep_dropoff_datetime|trip_distance|total_amount|isDay|
+--------------------+---------------------+-------------+------------+-----+
| 2019-01-01 09:01:00| 2019-01-01 08:53:20| 1.5| 2.00| true|
| 2019-01-01 21:59:59| 2019-01-01 21:18:59| 2.6| 5.00|false|
| 2019-01-01 10:01:00| 2019-01-01 08:53:20| 1.5| 2.00| true|
| 2019-01-01 22:59:59| 2019-01-01 21:18:59| 2.6| 5.00|false|
+--------------------+---------------------+-------------+------------+-----+
and I want to create a summary table which calculates the trip_rate for all the night trips and all the day trips (total_amount column divided by trip_distance). So the end result should look like this:
+------------+-----------+
| day_night | trip_rate |
+------------+-----------+
|Day | 1.33 |
|Night | 1.92 |
+------------+-----------+
Here is what I'm trying to do:
df2 = spark.createDataFrame(
[
('2019-01-01 09:01:00','2019-01-01 08:53:20','1.5','2.00','true'),#day
('2019-01-01 21:59:59','2019-01-01 21:18:59','2.6','5.00','false'),#night
('2019-01-01 10:01:00','2019-01-01 08:53:20','1.5','2.00','true'),#day
('2019-01-01 22:59:59','2019-01-01 21:18:59','2.6','5.00','false'),#night
],
['tpep_pickup_datetime','tpep_dropoff_datetime','trip_distance','total_amount','day_night'] # add your columns label here
)
day_trip_rate = df2.where(df2.day_night == 'Day').withColumn("trip_rate",F.sum("total_amount")/F.sum("trip_distance"))
night_trip_rate = df2.where(df2.day_night == 'Night').withColumn("trip_rate",F.sum("total_amount")/F.sum("trip_distance"))
I don't believe I'm even approaching it the right way, and I'm getting this error:
raise AnalysisException(s.split(': ', 1)[1], stackTrace) pyspark.sql.utils.AnalysisException: "grouping expressions sequence is empty, and 'tpep_pickup_datetime' is not an aggregate function.
Can someone help me know how to approach this to get that summary table?
from pyspark.sql import functions as F
from pyspark.sql.functions import *
df2.groupBy("day_night").agg(F.round(F.sum("total_amount")/F.sum("trip_distance"),2).alias('trip_rate'))\
.withColumn("day_night", F.when(col("day_night")=="true", "Day").otherwise("Night")).show()
+---------+---------+
|day_night|trip_rate|
+---------+---------+
| Day| 1.33|
| Night| 1.92|
+---------+---------+
Without rounding off:
df2.groupBy("day_night").agg((F.sum("total_amount")/F.sum("trip_distance")).alias('trip_rate'))\
.withColumn("day_night", F.when(col("day_night")=="true", "Day").otherwise("Night")).show()
(You have day_night in the df2 construction code, but isDay in the displayed table; I'm treating the field name as day_night here.)
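If the column really is named isDay, as in the displayed table, a rename beforehand is enough; a sketch assuming that column name:
df2 = df2.withColumnRenamed("isDay", "day_night")  # then the snippets above apply unchanged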
I loaded a csv into a DataFrame with pandas.
The format is the following:
Timestamp | 1014.temperature | 1014.humidity | 1015.temperature | 1015.humidity ....
-------------------------------------------------------------------------------------
2017-... | 23.12 | 12.2 | 25.10 | 10.34 .....
The problem is that the '1014' and '1015' numbers are IDs that are supposed to go in a separate column.
I would like to end up with the following format for my DF:
TimeStamp | ID | Temperature | Humidity
-----------------------------------------------
. | | |
.
.
.
The CSV is tab separated.
Thanks in advance guys!
import pandas as pd
from io import StringIO
# create sample data frame
s = """Timestamp|1014.temperature|1014.humidity|1015.temperature|1015.humidity
2017|23.12|12.2|25.10|10.34"""
df = pd.read_csv(StringIO(s), sep='|')
df = df.set_index('Timestamp')
# split columns on '.' with list comprehension
l = [col.split('.') for col in df.columns]
# create multi index columns
df.columns = pd.MultiIndex.from_tuples(l)
# stack column level 0, reset the index and rename level_1
final = df.stack(0).reset_index().rename(columns={'level_1': 'ID'})
Timestamp ID humidity temperature
0 2017 1014 12.20 23.12
1 2017 1015 10.34 25.10
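For the real tab-separated file, presumably only the read call changes and the reshape stays the same. A sketch, with the file name as a placeholder:
# 'sensor_data.csv' is a placeholder path for the actual tab-separated file
df = pd.read_csv('sensor_data.csv', sep='\t')
df = df.set_index('Timestamp')
# then split the columns on '.', build the MultiIndex and stack(0) exactly as above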
So I have a table (sample)
I'm using the PySpark dataframe API to filter out the NOCs that have never won a gold medal, and here's the code I wrote.
First part of my code:
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
spark = SQLContext(sc)
df1 = spark.read.format("csv").options(header = 'true').load("D:\\datasets\\athlete_events.csv")
df = df1.na.replace('NA', '-')
countgdf = gdf.groupBy('NOC').agg(count('Medal').alias('No of Gold medals')).select('NOC').show()
It generates this output:
+---+
|NOC|
+---+
|POL|
|JAM|
|BRA|
|ARM|
|MOZ|
|JOR|
|CUB|
|FRA|
|ALG|
|BRN|
+---+
only showing top 10 rows
The next part of the code is something like
allgdf = df.select('NOC').distinct()
This displays the output:
+-----------+
| NOC|
+-----------+
| DeRuyter|
| POL|
| Russia|
| JAM|
| BUR|
| BRA|
| ARM|
| MOZ|
| CUB|
| JOR|
| Sweden|
| FRA|
| ALG|
| SOM|
| IVB|
|Philippines|
| BRN|
| MAL|
| COD|
| FSM|
+-----------+
Notice the values that are more than 3 characters long? Those are supposed to be values of the 'Team' column, but I'm not sure why they are showing up in the 'NOC' column. It's hard to figure out why this is happening, i.e. why there are illegal values in the column.
When I write the final code
final = allgdf.subtract(countgdf).show()
The same thing happens: illegal values appear in the final dataframe's column.
Any help would be appreciated. Thanks.
You should specify a delimiter for your CSV file. By default, Spark uses a comma (,) as the separator.
This can be done, for example, with:
.option("delimiter",";")
I am working with a dataframe like this:
DeviceNumber | CreationDate | Name
1001 | 1.1.2018 | Testdevice
1001 | 30.06.2019 | Device
1002 | 1.1.2019 | Lamp
I am using Databricks and PySpark for the ETL process. How can I reduce the dataframe so that there is only a single row per "DeviceNumber", and that row is the one with the highest "CreationDate"? In this example I want the result to look like this:
DeviceNumber | CreationDate | Name
1001 | 30.06.2019 | Device
1002 | 1.1.2019 | Lamp
You can create an additional dataframe with DeviceNumber and its latest/max CreationDate.
import pyspark.sql.functions as psf
max_df = df\
.groupBy('DeviceNumber')\
.agg(psf.max('CreationDate').alias('max_CreationDate'))
and then join max_df with the original dataframe.
joining_condition = [ df.DeviceNumber == max_df.DeviceNumber, df.CreationDate == max_df.max_CreationDate ]
df.join(max_df,joining_condition,'left_semi').show()
A left_semi join is useful when you want the second dataframe only as a lookup and don't need any of its columns in the result.
You can use PySpark windowing functionality:
from pyspark.sql.window import Window
from pyspark.sql import functions as f
# make sure that creation is a date data-type
df = df.withColumn('CreationDate', f.to_timestamp('CreationDate', format='dd.MM.yyyy'))
# partition on device and get a row number by (descending) date
win = Window.partitionBy('DeviceNumber').orderBy(f.col('CreationDate').desc())
df = df.withColumn('rownum', f.row_number().over(win))
# finally take the first row in each group
df.filter(df['rownum']==1).select('DeviceNumber', 'CreationDate', 'Name').show()
+------------+------------+------+
|DeviceNumber|CreationDate| Name|
+------------+------------+------+
| 1002| 2019-01-01| Lamp|
| 1001| 2019-06-30|Device|
+------------+------------+------+
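One design note on the window approach: row_number keeps exactly one row per DeviceNumber even when two rows tie on the latest CreationDate. If you would rather keep all tied rows, rank can be swapped in; a sketch under that assumption:
# keep every row that ties for the latest CreationDate per device
win = Window.partitionBy('DeviceNumber').orderBy(f.col('CreationDate').desc())
df.withColumn('rnk', f.rank().over(win))\
  .filter(f.col('rnk') == 1)\
  .select('DeviceNumber', 'CreationDate', 'Name')\
  .show()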